Posted by 赤色烙印 on 2018-10-24 12:07:03

Deploying MongoDB replica sets combined with sharding on CentOS 7 (step by step)

  MongoDB replica sets combined with sharding
  Key points: overview, how it works, deployment walkthrough
  I. Overview:
  Sharding is the process of splitting a data set and spreading it across multiple machines. A sharded cluster scales database performance horizontally: the data set is distributed across shards, each shard holds only part of the data, and MongoDB guarantees that no two shards hold duplicate data, so the union of all shards is the complete data set. Distributing the data also spreads the load: each shard serves reads and writes only for its own portion, which makes full use of every shard's system resources and raises the throughput of the whole database system.
  Note: since MongoDB 3.2, sharding must be deployed on top of replica sets;
  Typical use cases:
  1. A single machine's disk is no longer big enough; sharding solves the disk-space problem.
  2. A single mongod can no longer keep up with the write load; sharding spreads the write pressure across the shards, using each shard server's own resources.
  3. You want to hold a large working set in memory for performance; as above, sharding pools the memory of all the shard servers.
  II. How it works:
  Storage model: the data set is split into chunks; each chunk contains many documents, and the chunks are distributed across the shards of the cluster.
  Roles:
  Config server: stores the sharding metadata, i.e. which chunks live on which shard, in its config database. Typically three config servers are used, and the config database must be identical on all of them (deploy them on separate machines for resilience);
  Shard server: holds the sharded data itself; the chunk is the unit in which data is actually placed on a shard;
  Mongos server: the entry point for all requests to the cluster. Every request is routed through a mongos, which looks up the shard metadata to find where each chunk lives; a mongos is essentially a request router. In production several mongos instances are run so that one of them going down does not make the cluster unreachable.
  Summary:
  Applications send their inserts, deletes, updates and queries to mongos; the config servers hold the cluster metadata, which mongos stays in sync with; the data itself ends up on the shards. To guard against data loss, each shard is a replica set that keeps extra copies of its data; an arbiter node, if present, stores no data and only votes in replica-set elections to decide which member becomes primary.
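  Because every request enters through a mongos, a client should list all of the mongos routers in its connection URI so that one router failing does not break the application. A small sketch (the helper name is made up; the hosts are the three from this lab):

```shell
# Build a mongodb:// URI that lists every mongos router (illustrative helper).
mongos_uri() {  # usage: mongos_uri <db> <host:port> [host:port ...]
    db=$1; shift
    hosts=""
    for h in "$@"; do
        hosts="${hosts:+$hosts,}$h"    # comma-join the router addresses
    done
    echo "mongodb://$hosts/$db"
}
mongos_uri testdb 192.168.100.101:27025 192.168.100.102:27025 192.168.100.103:27025
# → mongodb://192.168.100.101:27025,192.168.100.102:27025,192.168.100.103:27025/testdb
```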
  III. Deployment walkthrough:
  Lab environment:
  config.benet.com    192.168.100.101    mongos:27025    config replica set "configs": 27017, 27018, 27019
  shard1.benet.com    192.168.100.102    mongos:27025    shard replica set "shard1":  27017, 27018, 27019
  shard2.benet.com    192.168.100.103    mongos:27025    shard replica set "shard2":  27017, 27018, 27019
  Steps:
     Install MongoDB;
     Configure the config-server instances;
     Configure the shard1 instances;
     Configure the shard2 instances;
     Configure sharding and verify.
     Install MongoDB:
  On 192.168.100.101, 192.168.100.102 and 192.168.100.103:
  # tar zxvf mongodb-linux-x86_64-rhel70-3.6.3.tgz
  # mv mongodb-linux-x86_64-rhel70-3.6.3 /usr/local/mongodb
  # echo "export PATH=/usr/local/mongodb/bin:\$PATH" >>/etc/profile
  # source /etc/profile
  # ulimit -n 25000
  # ulimit -u 25000
  # echo 0 >/proc/sys/vm/zone_reclaim_mode
  # sysctl -w vm.zone_reclaim_mode=0
  # echo never >/sys/kernel/mm/transparent_hugepage/enabled
  # echo never >/sys/kernel/mm/transparent_hugepage/defrag
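  Note that the ulimit, sysctl and transparent-hugepage settings above only last until the next reboot. One common approach (a sketch, not the only way) is to append them to /etc/rc.local; the demo below writes to a temporary file so it can be run safely anywhere:

```shell
# Persist the kernel tweaks across reboots by appending them to rc.local.
# RC defaults to a demo path here; point it at /etc/rc.local on a real host.
RC="${RC:-/tmp/rc.local.demo}"
cat <<'EOF' >>"$RC"
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
sysctl -w vm.zone_reclaim_mode=0
EOF
grep -c '^echo never' "$RC"    # two THP lines are appended per run
```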
  # cd /usr/local/mongodb/bin/
  # mkdir {../mongodb1,../mongodb2,../mongodb3}
  # mkdir ../logs
  # touch ../logs/mongodb{1..3}.log
  # chmod 777 ../logs/mongodb*
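  The directory and log-file preparation above can be collapsed into one loop. A sketch; MONGO_HOME stands in for /usr/local/mongodb so it is safe to run anywhere:

```shell
# Create per-instance data dirs and log files for instances 1..3.
MONGO_HOME="${MONGO_HOME:-/tmp/mongodb-demo}"
mkdir -p "$MONGO_HOME/logs"
for i in 1 2 3; do
    mkdir -p "$MONGO_HOME/mongodb$i"           # dbpath for instance i
    touch "$MONGO_HOME/logs/mongodb$i.log"     # logpath for instance i
done
chmod 644 "$MONGO_HOME"/logs/mongodb*.log      # 644 is tighter than the 777 used above
ls "$MONGO_HOME"
```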
   Configure the config-server instances:
  192.168.100.101:
# cat <<END >/usr/local/mongodb/bin/mongodb1.conf
  bind_ip=192.168.100.101
  port=27017
  dbpath=/usr/local/mongodb/mongodb1/
  logpath=/usr/local/mongodb/logs/mongodb1.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=configs
  #replication name
  configsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb2.conf
  bind_ip=192.168.100.101
  port=27018
  dbpath=/usr/local/mongodb/mongodb2/
  logpath=/usr/local/mongodb/logs/mongodb2.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=configs
  configsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb3.conf
  bind_ip=192.168.100.101
  port=27019
  dbpath=/usr/local/mongodb/mongodb3/
  logpath=/usr/local/mongodb/logs/mongodb3.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=configs
  configsvr=true
  END
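  The three config files above differ only in the port and the instance number, so they can be generated in one loop instead of typed three times. A sketch; CONF_DIR defaults to a temporary path so it will not touch a real installation:

```shell
# Generate mongodb1..3.conf for the config replica set "configs".
CONF_DIR="${CONF_DIR:-/tmp/mongo-conf-demo}"
mkdir -p "$CONF_DIR"
port=27017
for i in 1 2 3; do
    cat <<EOF >"$CONF_DIR/mongodb$i.conf"
bind_ip=192.168.100.101
port=$port
dbpath=/usr/local/mongodb/mongodb$i/
logpath=/usr/local/mongodb/logs/mongodb$i.log
logappend=true
fork=true
maxConns=5000
replSet=configs
configsvr=true
EOF
    port=$((port + 1))
done
grep -H '^port=' "$CONF_DIR"/mongodb?.conf     # one port per instance
```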
  # cd
# mongod -f /usr/local/mongodb/bin/mongodb1.conf
# mongod -f /usr/local/mongodb/bin/mongodb2.conf
# mongod -f /usr/local/mongodb/bin/mongodb3.conf
  # netstat -utpln |grep mongod
tcp        0      0 192.168.100.101:27019   0.0.0.0:*               LISTEN      2271/mongod
tcp        0      0 192.168.100.101:27017   0.0.0.0:*               LISTEN      2440/mongod
tcp        0      0 192.168.100.101:27018   0.0.0.0:*               LISTEN      1412/mongod
  # echo -e "/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
  # chmod +x /etc/rc.local
# cat <<END >/etc/init.d/mongodb
  #!/bin/bash
  INSTANCE=\$1
  ACTION=\$2
  case "\$ACTION" in
  'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
  'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  esac
  END
  # chmod +x /etc/init.d/mongodb
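  The wrapper is then invoked as "/etc/init.d/mongodb <instance> <action>", for example "/etc/init.d/mongodb mongodb1 restart". Its dispatch logic can be exercised without MongoDB installed by stubbing out mongod with echo, as in this sketch:

```shell
# Same case-statement dispatch as the init script above, with mongod replaced
# by echo so the control flow can be tested anywhere.
mongodb_ctl() {
    INSTANCE=$1
    ACTION=$2
    case "$ACTION" in
    start)   echo "mongod -f $INSTANCE.conf" ;;
    stop)    echo "mongod -f $INSTANCE.conf --shutdown" ;;
    restart) echo "mongod -f $INSTANCE.conf --shutdown"
             echo "mongod -f $INSTANCE.conf" ;;
    *)       echo "usage: mongodb_ctl <instance> {start|stop|restart}" >&2
             return 1 ;;
    esac
}
mongodb_ctl mongodb1 start     # → mongod -f mongodb1.conf
```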
  # mongo --port 27017 --host 192.168.100.101

  cfg={"_id":"configs","members":[{"_id":0,"host":"192.168.100.101:27017"},{"_id":1,"host":"192.168.100.101:27018"},{"_id":2,"host":"192.168.100.101:27019"}]}
  rs.initiate(cfg)
  configs:PRIMARY> rs.status()
  {
  "set" : "configs",
  "date" : ISODate("2018-04-24T18:53:44.375Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "configsvr" : true,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
  "lastCommittedOpTime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "readConcernMajorityOpTime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "appliedOpTime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "durableOpTime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  }
  },
  "members" : [
  {
  "_id" : 0,
  "name" : "192.168.100.101:27017",
  "health" : 1,
  "state" : 1,
  "stateStr" : "PRIMARY",
  "uptime" : 6698,
  "optime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T18:53:40Z"),
  "electionTime" : Timestamp(1524590293, 1),
  "electionDate" : ISODate("2018-04-24T17:18:13Z"),
  "configVersion" : 1,
  "self" : true
  },
  {
  "_id" : 1,
  "name" : "192.168.100.101:27018",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 5741,
  "optime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T18:53:40Z"),
  "optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"),
  "lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.742Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.101:27017",
  "configVersion" : 1
  },
  {
  "_id" : 2,
  "name" : "192.168.100.101:27019",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 5741,
  "optime" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596020, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T18:53:40Z"),
  "optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"),
  "lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.710Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.101:27017",
  "configVersion" : 1
  }
  ],
  "ok" : 1,
  "operationTime" : Timestamp(1524596020, 1),
  "$gleStats" : {
  "lastOpTime" : Timestamp(0, 0),
  "electionId" : ObjectId("7fffffff0000000000000001")
  },
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524596020, 1),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  }
  }
  configs:PRIMARY> show dbs
  admin   0.000GB
config  0.000GB
  local   0.000GB
  configs:PRIMARY> exit
# cat <<END >/usr/local/mongodb/bin/mongos.conf
  bind_ip=192.168.100.101
  port=27025
  logpath=/usr/local/mongodb/logs/mongodbs.log
  fork=true
  maxConns=5000
  configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
  END
  Note: the configdb parameter of mongos names the config replica set, followed by either a single member (the primary) or all members of the set;
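  A malformed configdb string is a common reason mongos fails to start, so a quick format check can save a debugging round trip. A sketch (the helper and its regex are illustrative, not part of MongoDB):

```shell
# Check that a configdb value looks like "replsetName/host:port[,host:port...]".
valid_configdb() {
    printf '%s\n' "$1" |
        grep -Eq '^[A-Za-z0-9_-]+/([A-Za-z0-9.-]+:[0-9]+)(,[A-Za-z0-9.-]+:[0-9]+)*$'
}
valid_configdb "configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019" &&
    echo "looks valid"
valid_configdb "192.168.100.101:27017" || echo "missing replica-set name"
```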
  # touch ../logs/mongos.log
  # chmod 777 ../logs/mongos.log
  # mongos -f /usr/local/mongodb/bin/mongos.conf
  about to fork child process, waiting until server is ready for connections.
  forked process: 1562
  child process started successfully, parent exiting
  # netstat -utpln |grep mongo
tcp        0      0 192.168.100.101:27019   0.0.0.0:*               LISTEN      1601/mongod
tcp        0      0 192.168.100.101:27020   0.0.0.0:*               LISTEN      1345/mongod
tcp        0      0 192.168.100.101:27025   0.0.0.0:*               LISTEN      1822/mongos
tcp        0      0 192.168.100.101:27017   0.0.0.0:*               LISTEN      1437/mongod
tcp        0      0 192.168.100.101:27018   0.0.0.0:*               LISTEN      1541/mongod

   Configure the shard1 instances:
  192.168.100.102:
# cat <<END >/usr/local/mongodb/bin/mongodb1.conf
  bind_ip=192.168.100.102
  port=27017
  dbpath=/usr/local/mongodb/mongodb1/
  logpath=/usr/local/mongodb/logs/mongodb1.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard1
  #replication name
  shardsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb2.conf
  bind_ip=192.168.100.102
  port=27018
  dbpath=/usr/local/mongodb/mongodb2/
  logpath=/usr/local/mongodb/logs/mongodb2.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard1
  shardsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb3.conf
  bind_ip=192.168.100.102
  port=27019
  dbpath=/usr/local/mongodb/mongodb3/
  logpath=/usr/local/mongodb/logs/mongodb3.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard1
  shardsvr=true
  END
  # cd
# mongod -f /usr/local/mongodb/bin/mongodb1.conf
# mongod -f /usr/local/mongodb/bin/mongodb2.conf
# mongod -f /usr/local/mongodb/bin/mongodb3.conf
  # netstat -utpln |grep mongod
tcp        0      0 192.168.100.102:27019   0.0.0.0:*               LISTEN      2271/mongod
tcp        0      0 192.168.100.102:27017   0.0.0.0:*               LISTEN      2440/mongod
tcp        0      0 192.168.100.102:27018   0.0.0.0:*               LISTEN      1412/mongod
  # echo -e "/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
  # chmod +x /etc/rc.local
# cat <<END >/etc/init.d/mongodb
  #!/bin/bash
  INSTANCE=\$1
  ACTION=\$2
  case "\$ACTION" in
  'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
  'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  esac
  END
  # chmod +x /etc/init.d/mongodb
  # mongo --port 27017 --host 192.168.100.102

  cfg={"_id":"shard1","members":[{"_id":0,"host":"192.168.100.102:27017"},{"_id":1,"host":"192.168.100.102:27018"},{"_id":2,"host":"192.168.100.102:27019"}]}
  rs.initiate(cfg)
  { "ok" : 1 }
  shard1:PRIMARY> rs.status()
  {
  "set" : "shard1",
  "date" : ISODate("2018-04-24T19:06:53.160Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
  "lastCommittedOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "readConcernMajorityOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "appliedOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "durableOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  }
  },
  "members" : [
  {
  "_id" : 0,
  "name" : "192.168.100.102:27017",
  "health" : 1,
  "state" : 1,
  "stateStr" : "PRIMARY",
  "uptime" : 6648,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "electionTime" : Timestamp(1524590628, 1),
  "electionDate" : ISODate("2018-04-24T17:23:48Z"),
  "configVersion" : 1,
  "self" : true
  },
  {
  "_id" : 1,
  "name" : "192.168.100.102:27018",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 6195,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
  "lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.102:27017",
  "configVersion" : 1
  },
  {
  "_id" : 2,
  "name" : "192.168.100.102:27019",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 6195,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
  "lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.102:27017",
  "configVersion" : 1
  }
  ],
  "ok" : 1
  }
  shard1:PRIMARY> show dbs
  admin   0.000GB
config  0.000GB
  local   0.000GB
  shard1:PRIMARY> exit
# cat <<END >/usr/local/mongodb/bin/mongos.conf
  bind_ip=192.168.100.102
  port=27025
  logpath=/usr/local/mongodb/logs/mongodbs.log
  fork=true
  maxConns=5000
  configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
  END
  # touch ../logs/mongos.log
  # chmod 777 ../logs/mongos.log
  # mongos -f /usr/local/mongodb/bin/mongos.conf
  about to fork child process, waiting until server is ready for connections.
  forked process: 1562
  child process started successfully, parent exiting
  # netstat -utpln| grep mongo
tcp        0      0 192.168.100.102:27019   0.0.0.0:*               LISTEN      1098/mongod
tcp        0      0 192.168.100.102:27020   0.0.0.0:*               LISTEN      1125/mongod
tcp        0      0 192.168.100.102:27025   0.0.0.0:*               LISTEN      1562/mongos
tcp        0      0 192.168.100.102:27017   0.0.0.0:*               LISTEN      1044/mongod
tcp        0      0 192.168.100.102:27018   0.0.0.0:*               LISTEN      1071/mongod

   Configure the shard2 instances:
  192.168.100.103:
# cat <<END >/usr/local/mongodb/bin/mongodb1.conf
  bind_ip=192.168.100.103
  port=27017
  dbpath=/usr/local/mongodb/mongodb1/
  logpath=/usr/local/mongodb/logs/mongodb1.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard2
  #replication name
  shardsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb2.conf
  bind_ip=192.168.100.103
  port=27018
  dbpath=/usr/local/mongodb/mongodb2/
  logpath=/usr/local/mongodb/logs/mongodb2.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard2
  shardsvr=true
  END
# cat <<END >/usr/local/mongodb/bin/mongodb3.conf
  bind_ip=192.168.100.103
  port=27019
  dbpath=/usr/local/mongodb/mongodb3/
  logpath=/usr/local/mongodb/logs/mongodb3.log
  logappend=true
  fork=true
  maxConns=5000
  replSet=shard2
  shardsvr=true
  END
  # cd
# mongod -f /usr/local/mongodb/bin/mongodb1.conf
# mongod -f /usr/local/mongodb/bin/mongodb2.conf
# mongod -f /usr/local/mongodb/bin/mongodb3.conf
  # netstat -utpln |grep mongod
tcp        0      0 192.168.100.103:27019   0.0.0.0:*               LISTEN      2271/mongod
tcp        0      0 192.168.100.103:27017   0.0.0.0:*               LISTEN      2440/mongod
tcp        0      0 192.168.100.103:27018   0.0.0.0:*               LISTEN      1412/mongod
  # echo -e "/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f/usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
  # chmod +x /etc/rc.local
# cat <<END >/etc/init.d/mongodb
  #!/bin/bash
  INSTANCE=\$1
  ACTION=\$2
  case "\$ACTION" in
  'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
  'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
  esac
  END
  # chmod +x /etc/init.d/mongodb
  # mongo --port 27017 --host 192.168.100.103

  cfg={"_id":"shard2","members":[{"_id":0,"host":"192.168.100.103:27017"},{"_id":1,"host":"192.168.100.103:27018"},{"_id":2,"host":"192.168.100.103:27019"}]}
  rs.initiate(cfg)
  { "ok" : 1 }
  shard2:PRIMARY> rs.status()
  {
  "set" : "shard2",
  "date" : ISODate("2018-04-24T19:06:53.160Z"),
  "myState" : 1,
  "term" : NumberLong(1),
  "heartbeatIntervalMillis" : NumberLong(2000),
  "optimes" : {
  "lastCommittedOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "readConcernMajorityOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "appliedOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "durableOpTime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  }
  },
  "members" : [
  {
  "_id" : 0,
  "name" : "192.168.100.103:27017",
  "health" : 1,
  "state" : 1,
  "stateStr" : "PRIMARY",
  "uptime" : 6648,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "electionTime" : Timestamp(1524590628, 1),
  "electionDate" : ISODate("2018-04-24T17:23:48Z"),
  "configVersion" : 1,
  "self" : true
  },
  {
  "_id" : 1,
  "name" : "192.168.100.103:27018",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 6195,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
  "lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.103:27017",
  "configVersion" : 1
  },
  {
  "_id" : 2,
  "name" : "192.168.100.103:27019",
  "health" : 1,
  "state" : 2,
  "stateStr" : "SECONDARY",
  "uptime" : 6195,
  "optime" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDurable" : {
  "ts" : Timestamp(1524596810, 1),
  "t" : NumberLong(1)
  },
  "optimeDate" : ISODate("2018-04-24T19:06:50Z"),
  "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"),
  "lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"),
  "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"),
  "pingMs" : NumberLong(0),
  "syncingTo" : "192.168.100.103:27017",
  "configVersion" : 1
  }
  ],
  "ok" : 1
  }
  shard2:PRIMARY> show dbs
  admin   0.000GB
config  0.000GB
  local   0.000GB
  shard2:PRIMARY> exit
# cat <<END >/usr/local/mongodb/bin/mongos.conf
  bind_ip=192.168.100.103
  port=27025
  logpath=/usr/local/mongodb/logs/mongodbs.log
  fork=true
  maxConns=5000
  configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
  END
  # touch ../logs/mongos.log
  # chmod 777 ../logs/mongos.log
  # mongos -f /usr/local/mongodb/bin/mongos.conf
  about to fork child process, waiting until server is ready for connections.
  forked process: 1562
  child process started successfully, parent exiting
  # netstat -utpln |grep mongo
tcp        0      0 192.168.100.103:27019   0.0.0.0:*               LISTEN      1095/mongod
tcp        0      0 192.168.100.103:27020   0.0.0.0:*               LISTEN      1122/mongod
tcp        0      0 192.168.100.103:27025   0.0.0.0:*               LISTEN      12122/mongos
tcp        0      0 192.168.100.103:27017   0.0.0.0:*               LISTEN      1041/mongod
tcp        0      0 192.168.100.103:27018   0.0.0.0:*               LISTEN      1068/mongod

   Configure sharding and verify:
  On 192.168.100.101 (any one of the mongos routers can be used to configure sharding; all three mongos will see the changes below):
  # mongo --port 27025 --host 192.168.100.101
  mongos> use admin;
  switched to db admin
  mongos> sh.status()
  --- Sharding Status ---
  sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
  }
  shards:
  active mongoses:
  "3.6.3" : 1
  autosplit:
  Currently enabled: yes
  balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
  No recent migrations
  databases:
  {"_id" : "config","primary" : "config","partitioned" : true }
  mongos>
  sh.addShard("shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019")
  {
  "shardAdded" : "shard1",
  "ok" : 1,
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524598580, 9),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  },
  "operationTime" : Timestamp(1524598580, 9)
  }
  mongos> sh.addShard("shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019")
  {
  "shardAdded" : "shard2",
  "ok" : 1,
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524598657, 7),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  },
  "operationTime" : Timestamp(1524598657, 7)
  }
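  sh.addShard() takes a seed string of the form "replsetName/host:port,host:port,...". Building it by hand is error-prone, so here is a small sketch (the helper name is made up) that assembles it from a replica-set name, a host and a port list:

```shell
# Assemble the "rsname/host:port,..." seed string used by sh.addShard().
shard_seed() {  # usage: shard_seed <rsname> <host> <port> [port ...]
    rs=$1; host=$2; shift 2
    seeds=""
    for p in "$@"; do
        seeds="${seeds:+$seeds,}$host:$p"   # comma-join host:port pairs
    done
    echo "$rs/$seeds"
}
shard_seed shard1 192.168.100.102 27017 27018 27019
# → shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019
```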
  mongos> sh.status()
  --- Sharding Status ---
  sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
  }
  shards:
  {"_id" : "shard1","host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019","state" : 1 }
  {"_id" : "shard2","host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019","state" : 1 }
  active mongoses:
  "3.6.3" : 1
  autosplit:
  Currently enabled: yes
  balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
  No recent migrations
  databases:
  {"_id" : "config","primary" : "config","partitioned" : true }
  Note: at this point the config servers, mongos routers, shards and replica sets are all wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for a specific database and a specific collection.
  # mongo --port 27025 --host 192.168.100.101
  mongos> use admin
mongos> sh.enableSharding("testdb")   ## enable sharding for the database
  {
  "ok" : 1,
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524599672, 13),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  },
"operationTime" : Timestamp(1524599672, 13)
}
mongos> sh.status()
  --- Sharding Status ---
  sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
  }
  shards:
  {"_id" : "shard1","host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019","state" : 1 }
  {"_id" : "shard2","host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019","state" : 1 }
  active mongoses:
  "3.6.3" : 1
  autosplit:
  Currently enabled: yes
  balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
  No recent migrations
  databases:
  {"_id" : "config","primary" : "config","partitioned" : true }
  config.system.sessions
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard1  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
  {"_id" : "testdb","primary" : "shard2","partitioned" : true }
mongos> db.runCommand({shardcollection:"testdb.table1", key:{_id:1}});          ## enable sharding for a collection in the database
  {
  "collectionsharded" : "testdb.table1",
  "collectionUUID" : UUID("883bb1e2-b218-41ab-8122-6a5cf4df5e7b"),
  "ok" : 1,
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524601471, 14),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  },
  "operationTime" : Timestamp(1524601471, 14)
  }
  mongos> use testdb;
mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i})}
mongos> show collections
  table1
  mongos> db.table1.count()
  10000
  mongos> sh.status()
  --- Sharding Status ---
  sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
  }
  shards:
  {"_id" : "shard1","host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019","state" : 1 }
  {"_id" : "shard2","host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019","state" : 1 }
  active mongoses:
  "3.6.3" : 1
  autosplit:
  Currently enabled: yes
  balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
  No recent migrations
  databases:
  {"_id" : "config","primary" : "config","partitioned" : true }
  config.system.sessions
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard1  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
  {"_id" : "testdb","primary" : "shard2","partitioned" : true }
  testdb.table1
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard2  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
  mongos> use admin
  switched to db admin
  mongos> sh.enableSharding("testdb2")
  {
  "ok" : 1,
  "$clusterTime" : {
  "clusterTime" : Timestamp(1524602371, 7),
  "signature" : {
  "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
  "keyId" : NumberLong(0)
  }
  },
  "operationTime" : Timestamp(1524602371, 7)
  }
  mongos> db.runCommand({shardcollection:"testdb2.table1", key:{_id:1}});
  mongos> use testdb2
  switched to db testdb2
mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i})}
mongos> sh.status()
  --- Sharding Status ---
  sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77")
  }
  shards:
  {"_id" : "shard1","host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019","state" : 1 }
  {"_id" : "shard2","host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019","state" : 1 }
  active mongoses:
  "3.6.3" : 1
  autosplit:
  Currently enabled: yes
  balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
  No recent migrations
  databases:
  {"_id" : "config","primary" : "config","partitioned" : true }
  config.system.sessions
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard1  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
  {"_id" : "testdb","primary" : "shard2","partitioned" : true }
  testdb.table1
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard2  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
  

{"_id" : "testdb2","primary" : "shard1","partitioned" : true }
testdb2.table1
  shard key: { "_id" : 1 }
  unique: false
  balancing: true
  chunks:
shard1  1
  { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
  

mongos> db.table1.stats()         ## check how the collection is sharded
  {
  "sharded" : true,
  "capped" : false,
  "ns" : "testdb2.table1",
  "count" : 10000,
  "size" : 490000,
  "storageSize" : 167936,
  "totalIndexSize" : 102400,
  "indexSizes" : {
"_id_" : 102400
  },
  "avgObjSize" : 49,
  "nindexes" : 1,
  "nchunks" : 1,
  "shards" : {
  "shard1" : {
  "ns" : "testdb2.table1",
  "size" : 490000,
  "count" : 10000,
  "avgObjSize" : 49,
  "storageSize" : 167936,
  "capped" : false,
  "wiredTiger" : {
  "metadata" : {
  "formatVersion" : 1
  },
  "creationString" :
  ...
  Log in to the mongos on 192.168.100.102 and 192.168.100.103 and check the configuration above; it has already been synchronized.

