[Experience Sharing] Python-based web applications (Part 4)

Posted on 2018-8-12 14:30:46
7.3.6 Enabling authentication
  # sed -i "s/#keyFile/keyFile/" /opt/mongodb/security/*.conf
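The keyFile that these configs enable must already exist, with identical contents and owner-only permissions on every node, or mongod will refuse to start. A minimal sketch of generating one; the path, key size, and use of Python here are assumptions, not part of the original setup (MongoDB accepts a base64 string of 6 to 1024 characters):

```python
# Sketch: generate a MongoDB keyFile (path and size are assumptions).
# The same file must then be copied to every member of the cluster.
import base64
import os
import stat

def make_keyfile(path, nbytes=90):
    # base64-encode random bytes; mongod accepts base64 keyfile content
    key = base64.b64encode(os.urandom(nbytes))
    with open(path, "wb") as f:
        f.write(key)
    # mongod rejects keyfiles readable by group/other, so chmod 600
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return key

key = make_keyfile("/tmp/mongo-keyfile")
```

After distributing the file, point the `keyFile` option in each `*.conf` at it.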
  Restart the mongod and mongos services:
  - Server1:
  # pkill mongod
  # pkill mongos
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard10001.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard20001.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard30001.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/config1.conf
  #/opt/mongodb/bin/mongos -f /opt/mongodb/security/mongos1.conf
  - Server2:
  # pkill mongod
  # pkill mongos
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard10002.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard20002.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard30002.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/config2.conf
  #/opt/mongodb/bin/mongos -f /opt/mongodb/security/mongos2.conf
  - Server3:
  # pkill mongod
  # pkill mongos
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard10003.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard20003.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/shard30003.conf
  #/opt/mongodb/bin/mongod -f /opt/mongodb/security/config3.conf
  #/opt/mongodb/bin/mongos -f /opt/mongodb/security/mongos3.conf
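The per-server restart sequence above is mechanical and easy to mistype (note that `mongosN.conf` must be started with the `mongos` binary, not `mongod`). As a sketch, the command list for server N can be derived rather than typed; the paths follow the layout above and the function name is an assumption:

```python
# Sketch: derive the start-command list for server n (1-3), following the
# /opt/mongodb layout used above. Paths and naming scheme are assumptions.
def start_commands(n, bindir="/opt/mongodb/bin", confdir="/opt/mongodb/security"):
    # one mongod per shard replica-set member hosted on this server
    cmds = [f"{bindir}/mongod -f {confdir}/shard{s}000{n}.conf" for s in (1, 2, 3)]
    # the config server also runs under mongod
    cmds.append(f"{bindir}/mongod -f {confdir}/config{n}.conf")
    # only the router config is started with the mongos binary
    cmds.append(f"{bindir}/mongos -f {confdir}/mongos{n}.conf")
    return cmds
```

Feeding each string to a shell (or `subprocess`) reproduces the sequences listed above for servers 1 through 3.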
7.3.7 Testing the sharded instance:
  root@debian:~# /opt/mongodb/bin/mongo 10.15.62.202:30000/admin
  MongoDB shell version: 2.4.6
  connecting to: 10.15.62.202:30000/admin
  > db.auth("clusterAdmin","pwd")
  1
  mongos> show collections
  system.indexes
  system.users
  mongos> use test
  switched to db test
  mongos> show collections
  # Create a new user on the test database:
  mongos> db.addUser({user:"test",pwd:"123456",roles:["readWrite","dbAdmin"]})
  {
  "user" : "test",
  "pwd" : "c8ef9e7ab00406e84cfa807ec082f59e",
  "roles" : [
  "readWrite",
  "dbAdmin"
  ],
  "_id" : ObjectId("5241e3ffdaf821e8d4c5b9e7")
  }
  mongos>
  mongos> show collections
  system.indexes
  system.users
  # Inspect the contents of the system.users collection:
  mongos> db.system.users.find()
  { "_id" : ObjectId("5241e3ffdaf821e8d4c5b9e7"), "user" : "test", "pwd" : "c8ef9e7ab00406e84cfa807ec082f59e", "roles" : [  "readWrite",  "dbAdmin" ] }
  mongos> db
  test
  mongos> use admin
  switched to db admin
  # Enable sharding on the database; test is the database to be sharded
  mongos> db.runCommand({enablesharding:"test"})
  { "ok" : 1 }
  # Shard the collection: shard the users collection in the test database, designating the _id column as the shard key
  mongos> db.runCommand({shardcollection:"test.users",key:{_id:1}})
  { "collectionsharded" : "test.users", "ok" : 1 }
  mongos> db
  admin
  mongos> use test
  switched to db test
  mongos> show collections
  system.indexes
  system.profile
  system.users
  users
  # Insert roughly 300,000 documents into test.users in a loop, then use db.users.stats() to see how the collection is distributed across the shards.
  mongos> for(var i=1;i<300000;i++) db.users.insert({name:"user"+i,age:i,email:"yanxiaoming@qq.com"})
  mongos> show collections
  system.indexes
  system.profile
  system.users
  users
  # Examine the result set:
  mongos> db.users.find()
  { "_id" : ObjectId("5241e618daf821e8d4c707a5"), "name" : "user85438", "age" : 85438, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45a"), "name" : "user6771", "age" : 6771, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9e8"), "name" : "user1", "age" : 1, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707a6"), "name" : "user85439", "age" : 85439, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45b"), "name" : "user6772", "age" : 6772, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9e9"), "name" : "user2", "age" : 2, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707a7"), "name" : "user85440", "age" : 85440, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45c"), "name" : "user6773", "age" : 6773, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9ea"), "name" : "user3", "age" : 3, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707a8"), "name" : "user85441", "age" : 85441, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45d"), "name" : "user6774", "age" : 6774, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9eb"), "name" : "user4", "age" : 4, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707a9"), "name" : "user85442", "age" : 85442, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45e"), "name" : "user6775", "age" : 6775, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9ec"), "name" : "user5", "age" : 5, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707aa"), "name" : "user85443", "age" : 85443, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d45f"), "name" : "user6776", "age" : 6776, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e5ffdaf821e8d4c5b9ed"), "name" : "user6", "age" : 6, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e618daf821e8d4c707ab"), "name" : "user85444", "age" : 85444, "email" : "yanxiaoming@qq.com" }
  { "_id" : ObjectId("5241e600daf821e8d4c5d460"), "name" : "user6777", "age" : 6777, "email" : "yanxiaoming@qq.com" }
  Type "it" for more
  mongos> db.users.stats()
  {
  "sharded" : true,
  "ns" : "test.users",
  "count" : 299999,
  "numExtents" : 20,
  "size" : 26399928,
  "storageSize" : 60162048,
  "totalIndexSize" : 10808672,
  "indexSizes" : {
  "_id_" : 10808672
  },
  "avgObjSize" : 88.00005333351112,
  "nindexes" : 1,
  "nchunks" : 14,
  "shards" : {
  "shard1" : {
  "ns" : "test.users",
  "count" : 60185,
  "size" : 5296288,
  "avgObjSize" : 88.00013292348592,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 2984240,
  "indexSizes" : {
  "_id_" : 2984240
  },
  "ok" : 1
  },
  "shard2" : {
  "ns" : "test.users",
  "count" : 200481,
  "size" : 17642328,
  "avgObjSize" : 88,
  "storageSize" : 37797888,
  "numExtents" : 8,
  "nindexes" : 1,
  "lastExtentSize" : 15290368,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 6540800,
  "indexSizes" : {
  "_id_" : 6540800
  },
  "ok" : 1
  },
  "shard3" : {
  "ns" : "test.users",
  "count" : 39333,
  "size" : 3461312,
  "avgObjSize" : 88.00020339155417,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 1283632,
  "indexSizes" : {
  "_id_" : 1283632
  },
  "ok" : 1
  }
  },
  "ok" : 1
  }
  mongos>
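Note that the `pwd` value shown by `db.system.users.find()` in the transcript above is not the cleartext password: MongoDB 2.x stores the MONGODB-CR credential, the MD5 of `"<user>:mongo:<password>"`. A sketch of that digest (the helper name is an assumption):

```python
# Sketch: MONGODB-CR password digest as stored by MongoDB 2.x user documents.
import hashlib

def mongodb_cr_pwd(user, password):
    # MongoDB 2.x stores md5("<user>:mongo:<password>") in the pwd field
    return hashlib.md5(f"{user}:mongo:{password}".encode("utf-8")).hexdigest()

digest = mongodb_cr_pwd("test", "123456")
```

For the user created above (`test` / `123456`), this formula should reproduce the `pwd` field shown in the transcript. It also makes clear why the hash alone is not safe to expose: it is a fast unsalted digest.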
7.4 Adding and removing sharding nodes
7.4.1 Removing a sharding node:
  1. Check the current shard information:
  (screenshot of the shard status output omitted)
  As shown, there are 3 shards, each containing 3 nodes.
  2. Remove a running node:
  Here we operate on a node in shard shard1:
  shard1:PRIMARY> rs.remove("10.15.62.203:10001")
  Fri Sep 27 09:20:30.768 DBClientCursor::init call() failed
  Fri Sep 27 09:20:30.770 Error: error doing query: failed at src/mongo/shell/query.js:78
  Fri Sep 27 09:20:30.772 trying reconnect to 127.0.0.1:10001
  Fri Sep 27 09:20:30.775 reconnect 127.0.0.1:10001 ok
  shard1:PRIMARY>
  3. Check the logs:
  Fri Sep 27 09:20:35.784 [conn6] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [10.15.62.203:24188]
  Fri Sep 27 09:20:35.785 [conn29] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [10.15.62.205:3114]
  4. Check the current shard information again to verify:
  (screenshot omitted)
  The node 10.15.62.203:10001 has been removed.
  5. Finally, shut down the mongod process on 10.15.62.203:10001.
7.4.2 Adding a node:
  Here we take the just-removed node 10.15.62.203:10001 as an example.
  1. Start the mongod process on the new node.
  2. Add the new node to the shard. Current node information for the shard:
  (screenshot omitted)
  3. Add the node:
  shard1:PRIMARY> rs.add("10.15.62.203:10001")
  { "down" : [ "10.15.62.203:10001" ], "ok" : 1 }
  shard1:PRIMARY>
  4. Check the background logs:
  (screenshot omitted)
  The node has been added back to the shard.
  5. Check the shard status:
  (screenshot omitted)
  6. Check all shard information:
  (screenshot omitted)
7.5 Adding and removing a shard (replica set):
  Currently 400,000 documents are stored across shard1, shard2 and shard3. Since the data is unevenly distributed, removing a replica set requires migrating its data first; here we choose shard3, which holds the least data:
  1. Check the data in the test.users collection:
  mongos> db.users.stats();
  {
  "sharded" : true,
  "ns" : "test.users",
  "count" : 399998,
  "numExtents" : 20,
  "size" : 35141840,
  "storageSize" : 60162048,
  "totalIndexSize" : 14406112,
  "indexSizes" : {
  "_id_" : 14406112
  },
  "avgObjSize" : 87.85503927519638,
  "nindexes" : 1,
  "nchunks" : 17,
  "shards" : {
  "shard1" : {
  "ns" : "test.users",
  "count" : 73297,
  "size" : 6450144,
  "avgObjSize" : 88.00010914498547,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 3752784,
  "indexSizes" : {
  "_id_" : 3752784
  },
  "ok" : 1
  },
  "shard2" : {
  "ns" : "test.users",
  "count" : 274257,
  "size" : 24076616,
  "avgObjSize" : 87.78851952730469,
  "storageSize" : 37797888,
  "numExtents" : 8,
  "nindexes" : 1,
  "lastExtentSize" : 15290368,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 8944544,
  "indexSizes" : {
  "_id_" : 8944544
  },
  "ok" : 1
  },
  "shard3" : {
  "ns" : "test.users",
  "count" : 52444,
  "size" : 4615080,
  "avgObjSize" : 88.00015254366562,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 1708784,
  "indexSizes" : {
  "_id_" : 1708784
  },
  "ok" : 1
  }
  },
  "ok" : 1
  }
  2. Run the remove operation:
  mongos> db.runCommand({"removeshard":"shard3"});
  {
  "msg" : "draining started successfully",
  "state" : "started",
  "shard" : "shard3",
  "ok" : 1
  }
  3. Check the migration status:
  Re-run the same command repeatedly to watch the progress.
  mongos> db.runCommand({"removeshard":"shard3"});
  {
  "msg" : "draining ongoing",
  "state" : "ongoing",
  "remaining" : {
  "chunks" : NumberLong(5),
  "dbs" : NumberLong(0)
  },
  "ok" : 1
  }
  As shown above, the migration is in progress, with 5 chunks left to move.
  Once remaining reaches 0, this step is finished.
  mongos> db.runCommand({"removeshard":"shard3"});
  {
  "msg" : "draining ongoing",
  "state" : "ongoing",
  "remaining" : {
  "chunks" : NumberLong(2),
  "dbs" : NumberLong(0)
  },
  "ok" : 1
  }
  mongos> db.runCommand({"removeshard":"shard3"});
  {
  "msg" : "removeshard completed successfully",
  "state" : "completed",
  "shard" : "shard3",
  "ok" : 1
  }
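Instead of re-issuing `removeshard` by hand, the drain can be polled until `state` reaches `completed`. A sketch with the command runner injected so it can be exercised without a live cluster; against a real deployment, `run` would wrap pymongo's `Database.command` on the admin database (the function and parameter names are assumptions):

```python
# Sketch: poll removeshard until draining finishes. `run` is any callable
# taking a command document and returning the reply dict (e.g. a wrapper
# around pymongo's admin Database.command in a real deployment).
import time

def wait_for_drain(run, shard, interval=0.0, max_polls=100):
    for _ in range(max_polls):
        reply = run({"removeshard": shard})
        if reply.get("state") == "completed":
            return reply  # draining done; safe to shut the shard down
        time.sleep(interval)
    raise TimeoutError(f"shard {shard} still draining after {max_polls} polls")
```

The reply sequence in the transcript above (started, ongoing, completed) is exactly what this loop consumes.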
  4. List all shards:
  (screenshot omitted)
  shard3 is no longer listed.
  5. Verify that the data has been moved onto the other two shards:
  mongos> db.users.stats()
  {
  "sharded" : true,
  "ns" : "test.users",
  "count" : 399998,
  "numExtents" : 20,
  "size" : 35141832,
  "storageSize" : 60162048,
  "totalIndexSize" : 15738800,
  "indexSizes" : {
  "_id_" : 15738800
  },
  "avgObjSize" : 87.85501927509638,
  "nindexes" : 1,
  "nchunks" : 17,
  "shards" : {
  "shard1" : {
  "ns" : "test.users",
  "count" : 99519,
  "size" : 8757680,
  "avgObjSize" : 88.00008038665983,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 5257168,
  "indexSizes" : {
  "_id_" : 5257168
  },
  "ok" : 1
  },
  "shard2" : {
  "ns" : "test.users",
  "count" : 300479,
  "size" : 26384152,
  "avgObjSize" : 87.806974863468,
  "storageSize" : 37797888,
  "numExtents" : 8,
  "nindexes" : 1,
  "lastExtentSize" : 15290368,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 10473456,
  "indexSizes" : {
  "_id_" : 10473456
  },
  "ok" : 1
  },
  "shard3" : {
  "ns" : "test.users",
  "count" : 0,
  "size" : 0,
  "storageSize" : 11182080,
  "numExtents" : 6,
  "nindexes" : 1,
  "lastExtentSize" : 8388608,
  "paddingFactor" : 1,
  "systemFlags" : 1,
  "userFlags" : 0,
  "totalIndexSize" : 8176,
  "indexSizes" : {
  "_id_" : 8176
  },
  "ok" : 1
  }
  },
  "ok" : 1
  }
  mongos>
  6. Shut down the corresponding mongod processes.
  7. Notes:
  ① Before starting, make sure the clocks on all three servers are synchronized. We overlooked this at first, and removing the shard hung indefinitely, with the background log showing:
  caught exception while doing balance: error checking clock skew of cluster 10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000 :: caused by :: 13650 clock skew of the cluster 10.15.62.202:20000,10.15.62.203:20000,10.15.62.205:20000 is too far out of bounds to allow distributed locking.
  ② If the shard holds unsharded data, move that data off first with db.runCommand({ movePrimary: "<database>", to: "<target shard>" }). If unsharded data is still present when you remove the shard, the operation reports an error, and the error message makes the cause clear.
  ③ Adding a shard follows the same steps as above.
7.6 Replica set node switching and failover
  MongoDB's sharding adds data safety and reliability and scales database performance horizontally, while replica sets add availability. A replica set supports automatic failover: when the primary goes offline for any reason, the remaining members hold an election to choose a new primary. This works because all members within one shard's replica set hold the same data. By default the primary is readable and writable while secondaries accept neither reads nor writes; once the primary drops out, a newly elected primary takes over the shard. How elections work, and how to tune election timing, is a MongoDB performance-optimization topic covered in the later maintenance document, so it is not detailed here.
7.7 Notes on sharding:
  ① About the keyFile and auth parameters
  keyFile implies auth and takes precedence: once keyFile is set, the auth parameter is ignored and user authentication is enabled.
  ② For shard and config instances, local login without credentials works until the first user is added; once a user is added, authentication takes effect immediately, even the current session becomes invalid, and you must log in again.
  A mongos instance gets all of its information from the config instances, so it always requires user authentication.
  ③ Note: the keyFile parameter must be specified on all instances, and it must be the same key everywhere.
  ④ About the admin database
  admin is a special database: it is not replicated across the sharded cluster.
  The admin database on a shard instance is local (identical within one replica set, of course) and only applies to logging in to that shard instance.
  The admin database on the config instances serves authentication for both the config instances and the mongos instances.
  Users added through mongos are synchronized to all config instances.
  Users added directly on one config instance are not synchronized to the other config instances, because that instance does not know the others exist.
