MongoDB offers another kind of cluster besides replication: sharding, a technique that accommodates large growth in data volume.

When MongoDB stores massive amounts of data, a single machine may not be able to hold the data, or to provide acceptable read/write throughput. By splitting the data across multiple machines, the database system can store and process more data.

Sharding splits the data into chunks and stores the chunks on different servers. MongoDB shards automatically: when a client sends a read or write request, the request first passes through the mongos routing layer; mongos fetches the shard metadata from the config servers and decides which server the request should be routed to.
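Clients only ever talk to mongos, so the routing is invisible to the application. As an illustration, the sketch below (a hypothetical session; it assumes the tydb.tyuser sharded collection built later in this guide) uses explain() through mongos to see how a query was routed:

#Run in a mongo shell connected to the mongos router (this guide's router listens on port 30000)
mongo 10.0.0.56:30000
use tydb
#A query on the shard key is routed to a single shard;
#explain() shows a SINGLE_SHARD winning plan naming that shard
db.tyuser.find({"id":100}).explain()
#A query without the shard key is broadcast to all shards and merged,
#shown as SHARD_MERGE in the explain() output
db.tyuser.find({"name":"ty100"}).explain()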

1. Create the required directories (run the same on all three servers)
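The directories referenced by the configuration files in the next step must exist before the services start; a minimal sketch, with paths taken from the configuration files below:

#Create the config, log, pid, and data directories used by the configuration files
mkdir -p /mongo/apps/conf /mongo/logs /mongo/run
mkdir -p /mongo/data/config /mongo/data/shard1 /mongo/data/shard2 /mongo/data/shard3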
2. Write the configuration files
(1) mongo-config configuration file
vim /mongo/apps/conf/mongo-config.yml

systemLog:
  destination: file
  #log path
  path: "/mongo/logs/mongo-config.log"
  logAppend: true
storage:
  journal:
    enabled: true
  #data storage path
  dbPath: "/mongo/data/config"
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-config.pid"
net:
  #bindIp can also be set to the host's own IP
  bindIp: 0.0.0.0
  #port
  port: 27017
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "mgconfig"
sharding:
  #act as a config server
  clusterRole: configsvr
(2) mongo-shard1 configuration file
vim /mongo/apps/conf/mongo-shard1.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard1.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard1"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard1.pid"
net:
  bindIp: 0.0.0.0
  #note the port
  port: 40001
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard1"
sharding:
  #act as a shard server
  clusterRole: shardsvr
(3) mongo-shard2 configuration file
vim /mongo/apps/conf/mongo-shard2.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard2.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard2"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard2.pid"
net:
  bindIp: 0.0.0.0
  #note the port
  port: 40002
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard2"
sharding:
  #act as a shard server
  clusterRole: shardsvr
(4) mongo-shard3 configuration file
vim /mongo/apps/conf/mongo-shard3.yml

systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard3.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard3"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard3.pid"
net:
  bindIp: 0.0.0.0
  #note the port
  port: 40003
setParameter:
  enableLocalhostAuthBypass: true
replication:
  #replica set name
  replSetName: "shard3"
sharding:
  #act as a shard server
  clusterRole: shardsvr
(5) mongo-route configuration file
vim /mongo/apps/conf/mongo-route.yml

systemLog:
  destination: file
  #note the log path
  path: "/mongo/logs/mongo-route.log"
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-route.pid"
net:
  bindIp: 0.0.0.0
  #note the port
  port: 30000
setParameter:
  enableLocalhostAuthBypass: true
replication:
  localPingThresholdMs: 15
sharding:
  #point at the config server replica set, in replSetName/host:port,... form
  configDB: mgconfig/10.0.0.56:27017,10.0.0.57:27017,10.0.0.58:27017
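The three shard configuration files above differ only in the shard number (log path, data path, pid file, port, and replica set name), so they can also be generated in one pass; a convenience sketch, assuming the same paths and ports as above:

#Generate mongo-shard1.yml through mongo-shard3.yml from the shard number
for i in 1 2 3; do
cat > /mongo/apps/conf/mongo-shard$i.yml <<EOF
systemLog:
  destination: file
  path: "/mongo/logs/mongo-shard$i.log"
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: "/mongo/data/shard$i"
processManagement:
  fork: true
  pidFilePath: "/mongo/run/mongo-shard$i.pid"
net:
  bindIp: 0.0.0.0
  port: 4000$i
setParameter:
  enableLocalhostAuthBypass: true
replication:
  replSetName: "shard$i"
sharding:
  clusterRole: shardsvr
EOF
done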
3. Start the mongo-config service (run the same on all three servers)
#Stop the MongoDB instance previously installed via yum
systemctl stop mongod
cd /mongo/apps/conf/
mongod --config mongo-config.yml

#Check that port 27017 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   4905/mongod
(unrelated sshd/cupsd/systemd listeners omitted)
4. Connect to one instance and initialize the config server replica set
#Connect to mongo
mongo 10.0.0.56:27017

#Initialize the replica set; the "mgconfig" here must match replSetName in the configuration file
config={_id:"mgconfig",members:[
  {_id:0,host:"10.0.0.56:27017"},
  {_id:1,host:"10.0.0.57:27017"},
  {_id:2,host:"10.0.0.58:27017"},
]}
rs.initiate(config)
#"ok" : 1 means initialization succeeded
{
  "ok" : 1,
  "$gleStats" : {
    "lastOpTime" : Timestamp(1634710950, 1),
    "electionId" : ObjectId("000000000000000000000000")
  },
  "lastCommittedOpTime" : Timestamp(0, 0)
}

#Check the status
rs.status()
#Output abridged to the key fields: all three members are healthy,
#10.0.0.56 is PRIMARY and the other two are SECONDARY
{
  "set" : "mgconfig",
  "myState" : 1,
  "configsvr" : true,
  ...
  "members" : [
    { "_id" : 0, "name" : "10.0.0.56:27017", "health" : 1, "stateStr" : "PRIMARY", ... },
    { "_id" : 1, "name" : "10.0.0.57:27017", "health" : 1, "stateStr" : "SECONDARY", ... },
    { "_id" : 2, "name" : "10.0.0.58:27017", "health" : 1, "stateStr" : "SECONDARY", ... }
  ],
  "ok" : 1
}
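If the full rs.status() output is too noisy, a smaller check (a sketch against the mgconfig set just initialized) is to ask the shell directly who the primary is:

#Returns the primary's address, e.g. "10.0.0.56:27017"
rs.isMaster().primary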
5. Deploy shard1: start the shard1 instance (run the same on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard1.yml

#Check that port 40001 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
(unrelated listeners omitted)
6. Connect to one instance and initialize the replica set
#Connect to mongo
mongo 10.0.0.56:40001

#Initialize the replica set
config={_id:"shard1",members:[
  {_id:0,host:"10.0.0.56:40001",priority:2},
  {_id:1,host:"10.0.0.57:40001",priority:1},
  {_id:2,host:"10.0.0.58:40001",arbiterOnly:true},
]}
rs.initiate(config)
#Check the status
rs.status()
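To confirm each node took its intended role, a sketch of filtering rs.status() on the shard1 set just initialized (the 10.0.0.58 node should report ARBITER):

#Print each member's address and role
rs.status().members.forEach(function(m){ print(m.name + " -> " + m.stateStr) })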
7. Deploy shard2: start the shard2 instance (run the same on all three servers)

cd /mongo/apps/conf
mongod --config mongo-shard2.yml

#Check that port 40002 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:40002      0.0.0.0:*          LISTEN   5982/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
(unrelated listeners omitted)
8. Connect to the second node and initialize the replica set
Because we planned for shard2's primary to be 10.0.0.57:40002, and an arbiter cannot accept writes, connect to the 10.0.0.57 host this time.
#Connect to mongo
mongo 10.0.0.57:40002

#Initialize the replica set
config={_id:"shard2",members:[
  {_id:0,host:"10.0.0.56:40002",arbiterOnly:true},
  {_id:1,host:"10.0.0.57:40002",priority:2},
  {_id:2,host:"10.0.0.58:40002",priority:1},
]}
rs.initiate(config)
#Check the status
rs.status()
9. Deploy shard3: start the shard3 instance (run the same on all three servers)

cd /mongo/apps/conf/
mongod --config mongo-shard3.yml

#Check that port 40003 is listening
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:40001      0.0.0.0:*          LISTEN   5742/mongod
tcp        0      0 0.0.0.0:40002      0.0.0.0:*          LISTEN   5982/mongod
tcp        0      0 0.0.0.0:40003      0.0.0.0:*          LISTEN   6454/mongod
tcp        0      0 0.0.0.0:27017      0.0.0.0:*          LISTEN   5443/mongod
(unrelated listeners omitted)
10. Connect to the third node (10.0.0.58:40003) and initialize the replica set
#Connect to mongo
mongo 10.0.0.58:40003

#Initialize the replica set
config={_id:"shard3",members:[
  {_id:0,host:"10.0.0.56:40003",priority:1},
  {_id:1,host:"10.0.0.57:40003",arbiterOnly:true},
  {_id:2,host:"10.0.0.58:40003",priority:2},
]}
rs.initiate(config)
#Check the status
rs.status()
11. Deploy the route node

#The route node is started with mongos, not mongod
mongos --config mongo-route.yml

#Connect and add the shards to the cluster
mongo 10.0.0.56:30000
sh.addShard("shard1/10.0.0.56:40001,10.0.0.57:40001,10.0.0.58:40001")
sh.addShard("shard2/10.0.0.56:40002,10.0.0.57:40002,10.0.0.58:40002")
sh.addShard("shard3/10.0.0.56:40003,10.0.0.57:40003,10.0.0.58:40003")
#Check the shard status
sh.status()
#List all databases
mongos> show dbs
admin   0.000GB
config  0.003GB

#Switch to the config database
use config
#The default chunk size is 64MB; db.settings.find() shows the value.
#To make the test results easier to observe, lower the chunk size to 1MB.
db.settings.save({"_id":"chunksize","value":1})
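To confirm the new chunk size took effect, a quick sketch, still in the config database:

#Should return the document written above: { "_id" : "chunksize", "value" : 1 }
db.settings.find({"_id":"chunksize"})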
Simulate writing data

#Loop-insert 60,000 documents into the tyuser collection of the tydb database
mongos> use tydb
mongos> show tables
mongos> for(i=1;i<=60000;i++){db.tyuser.insert({"id":i,"name":"ty"+i})}
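A quick sanity check that all the writes landed; a sketch, run in the same mongos session:

#Should return 60000
mongos> db.tyuser.count()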
Enable sharding on the database

mongos> sh.enableSharding("tydb")
#ok returns 1
{
  "ok" : 1,
  "operationTime" : Timestamp(1634716737, 2),
  "$clusterTime" : {
    "clusterTime" : Timestamp(1634716737, 2),
    "signature" : {
      "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId" : NumberLong(0)
    }
  }
}
Enable sharding on the collection

mongos> sh.shardCollection("tydb.tyuser",{"id":1})
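A ranged shard key on a monotonically increasing id, as used here, sends every new insert to the chunk at the top of the range, so writes can hot-spot one shard until the balancer catches up. A hashed shard key spreads inserts evenly; a sketch of that alternative (it would replace the ranged-key command above, since a collection can only be sharded once, and is not what this guide uses):

#Alternative: hash the shard key to distribute monotonically increasing ids
mongos> sh.shardCollection("tydb.tyuser", {"id": "hashed"})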
Check the sharding status

mongos> sh.status()
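sh.status() prints the chunk ranges held by each shard; for per-shard document and data-size totals on a single collection, getShardDistribution() is more direct. A sketch against the tydb.tyuser collection above:

mongos> use tydb
#Prints, per shard, the data size, document count, and estimated docs per chunk
mongos> db.tyuser.getShardDistribution()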
Enable, disable, and check the balancer
#Enable
mongos> sh.startBalancer()  #or sh.setBalancerState(true)
#Disable
mongos> sh.stopBalancer()  #or sh.setBalancerState(false)
#Check whether the balancer is enabled
mongos> sh.getBalancerState()  #returns false when the balancer is disabled
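Note that sh.getBalancerState() only reports whether balancing is enabled, not whether work is happening right now; a sketch of the separate check for an in-progress balancing round:

#Reports whether a balancing round is currently running
mongos> sh.isBalancerRunning()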
This concludes this article on building a MongoDB 4.4 sharded cluster on CentOS 8. For more on MongoDB sharded clusters, search the earlier articles or continue with the related articles below. We hope you will keep supporting us!