
MongoDB: Cluster Deployment in Practice, with Replica Sets and Sharding

Published: 2016-07-16 21:00:13

Note: I've only just started learning MongoDB, so this write-up came out more convoluted than it needs to be; the tutorials online get a cluster deployed with very little code.

    This is purely my own hands-on practice, so corrections are welcome! If you have good suggestions or materials, contact me on QQ: 1176479642.

 

Cluster architecture:

    2 mongos routers (more routing servers would be even better)
    3 config servers (ports 3001-3003)
    3 shards, each a replica set: rs1, rs2, rs3

1. mkdir: create the directories that will hold the data files and log files. The mongos routers store no data, so /data/db can be skipped for them.
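Step 1 above can be sketched in a few lines of Python. This is my own illustration, not from the original post: it recreates the data/db and data/log layout for the config servers and the nine replica-set members under a temporary root (the directory and node names follow the listing shown below).

```python
import os
import tempfile

def make_node_dirs(root, node):
    """Create the data/db and data/log directories for one mongod node."""
    base = os.path.join(root, node)
    os.makedirs(os.path.join(base, "data", "db"))   # data files (not needed for mongos)
    os.makedirs(os.path.join(base, "data", "log"))  # log files
    return base

root = tempfile.mkdtemp(prefix="mongodbcluster-")
# three config servers
for i in (1, 2, 3):
    make_node_dirs(os.path.join(root, "mongoconfigs"), "mongoconfig%d" % i)
# three shards, each a 3-member replica set
for rs in (1, 2, 3):
    for m in (1, 2, 3):
        make_node_dirs(os.path.join(root, "shards", "shard%d" % rs),
                       "rs%d_mongodb%d" % (rs, m))
print(os.path.isdir(os.path.join(root, "shards", "shard1", "rs1_mongodb1", "data", "db")))
# prints: True
```
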

// Cluster root directory: /usr/local/mongodbcluster (the exact location doesn't really matter)

[pheonix@localhost mongodbcluster]$ ls -R
.:
mongoconfigs  mongos  shards

./mongoconfigs:
mongoconfig1  mongoconfig2  mongoconfig3

./mongoconfigs/mongoconfig1:
bin  data  GNU-AGPL-3.0  README  THIRD-PARTY-NOTICES
mongodb_configsvr1_3001.conf  mongodb_configsvr1_3001.conf~  configsvr1-3001.conf~

./mongoconfigs/mongoconfig1/bin:
bsondump  mongo  mongod  mongodump  mongoexport  mongofiles
mongoimport  mongooplog  mongoperf  mongorestore  mongos  mongostat  mongotop

./mongoconfigs/mongoconfig1/data:
db  log  mongodb_configsvr1_3001.pid

./mongoconfigs/mongoconfig1/data/db:
config.0  config.ns  journal  local.0  local.1  local.2  local.ns  mongod.lock

./mongoconfigs/mongoconfig1/data/db/journal:
prealloc.0  prealloc.1  prealloc.2

./mongoconfigs/mongoconfig1/data/log:
mongodb_configsvr1_3001.log

(mongoconfig2 and mongoconfig3 have the identical layout, on ports 3002 and 3003.)

./mongos:
mongos1  mongos2

./mongos/mongos1:
bin  data  GNU-AGPL-3.0  README  THIRD-PARTY-NOTICES
mongodb_mongossvr1_2001.conf  mongdb_mongossvr1_2001.conf~

./mongos/mongos1/data:
db  log  mongodb_mongossvr1_2001.pid

./mongos/mongos1/data/log:
mongodb_mongossvr1_2001.log  mongodb_mongossvr2_2002.log

(mongos2 is identical, on port 2002. The mongos data/db directories are empty
because the routers store no data.)

./shards:
shard1  shard2  shard3

./shards/shard1:
rs1_mongodb1  rs1_mongodb2  rs1_mongodb3

./shards/shard1/rs1_mongodb1:
bin  data  GNU-AGPL-3.0  README  THIRD-PARTY-NOTICES
mongodb_rs1_mongodb1_4011.conf  mongodb_rs1_mongodb1_4011.conf~

./shards/shard1/rs1_mongodb1/data:
db  log  mongodb_rs1_mongodb1_4011.pid

./shards/shard1/rs1_mongodb1/data/db:
journal  local.0  local.1  local.ns  mongod.lock  moveChunk  test.0  test.1  test.ns

./shards/shard1/rs1_mongodb1/data/db/moveChunk/test.users:
post-cleanup.2016-07-16T10-11-51.1.bson
post-cleanup.2016-07-16T10-13-57.2.bson

./shards/shard1/rs1_mongodb1/data/log:
mongodb_rs1_mongodb1_4011.log

(rs1_mongodb2 and rs1_mongodb3 follow the same layout on ports 4012 and 4013;
rs1_mongodb3, the arbiter-like third member, holds only local.* files.
shard2 mirrors shard1 on ports 4021-4023, with its own moveChunk post-cleanup
file from 2016-07-16T10-29-21. shard3's rs3_mongodb1-3 contain only empty
data/db and data/log directories, since those nodes were not yet started.)

---------------------------------------------------------------Commands---------------------------------------------------------------------------

 

The steps here: switch to the admin database, enable sharding on the test database, then shard its users collection with uid as the shard key. If you now insert data into the refactor collection, it is automatically spread across the shards according to the value of "name".
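To make "automatically spread across the shards" concrete, here is a small simulation of my own (not from the original post): range-based sharding keeps a table of chunk boundaries on the shard key, and mongos routes each document to the shard owning the matching chunk. The split points and shard names below are made up for the example; real boundaries live on the config servers.

```python
import bisect

# chunk boundaries on the shard key "uid": (-inf, 1000), [1000, 2000), [2000, +inf)
split_points = [1000, 2000]
chunk_owner = ["rs1", "rs2", "rs3"]  # which replica-set shard owns each chunk

def route(uid):
    """Return the shard owning the chunk that contains this uid (what mongos does)."""
    return chunk_owner[bisect.bisect_right(split_points, uid)]

docs = [{"uid": u} for u in (5, 999, 1000, 1500, 2000, 9999)]
placement = {}
for d in docs:
    placement.setdefault(route(d["uid"]), []).append(d["uid"])
print(placement)
# prints: {'rs1': [5, 999], 'rs2': [1000, 1500], 'rs3': [2000, 9999]}
```

Note how a monotonically increasing key would always land in the last chunk, which is exactly why the appendix tutorial below warns against auto-increment shard keys.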

 

 ------------------------------------------------------------------Appendix--------------------------------------------------------------------------

1. MongoDB version used in this test: 2.6.3 (https://yunpan.cn/cBXLkrRNVWbSk, access code 651d)

    Other versions:

        Linux 64-bit 3.2.0 (https://yunpan.cn/cBX3jj6BHIxHr, access code 396a)

        Linux 32-bit i686 3.0.2-1 (https://yunpan.cn/cBX3HKJRAInp6, access code 31ff)

2. Config files and scripts used in this test (https://yunpan.cn/cBXLeknTK7gDw, access code 5f8e)

3. All screenshots from this test (https://yunpan.cn/cBXLGYee6E37L, access code 09fb)

    If you'd rather grab everything at once, there's a single bundle too ^_^ (https://yunpan.cn/cBXL7eHBBWWFw, access code 984b)

4. Reference tutorial by another author (http://blog.sina.com.cn/s/blog_498e79cc0101115v.html)

MongoDB Sharded Cluster Setup, Step by Step
Posted by Kelly on 2015/9/2 10:14:00 | Tags: MongoDB, sharded cluster

I. Concepts

Sharding means splitting a database and spreading the pieces across machines. By distributing the data, you can store more of it and handle a heavier load without a single powerful server. The basic idea: cut collections into small chunks, spread the chunks over several shards so that each shard is responsible for only part of the total data, and let a balancer keep the shards even by migrating chunks. Operations go through a routing process called mongos, which knows which data lives on which shard (via the config servers). Most deployments shard to solve disk-space problems; writes may actually get slower, and cross-shard queries should be avoided where possible.

When to shard:
1. The machine's disk is running out of space.
2. A single mongod can no longer keep up with the write load; sharding spreads the write pressure over the shards' own resources.
3. You want to keep a large data set in memory for performance; likewise, sharding pools the memory of all the shard servers.

II. Deployment

Prerequisite: MongoDB is installed (version 3.0 is used in this article). Before building the cluster, understand the three roles:

① Config server. An independent mongod process that stores the cluster and shard metadata, i.e. which data each shard holds. It is started first, with journaling enabled; start it like an ordinary mongod with the configsvr option. It needs little space or few resources: roughly 1 KB of config-server data corresponds to 200 MB of real data, since it stores only the data-distribution table. If the config servers become unavailable, the metadata turns read-only: no more chunk splits or migrations.

② Router. The mongos process; it routes requests and is what applications connect to. It stores no data itself; at startup it loads the cluster information from the config servers, whose addresses are given with the configdb option.

③ Shard server. An independent, ordinary mongod process that stores the actual data. It can be a replica set or a single server.

Environment: 3 machines. A: 3 config servers, router 1, shard 1; B: shard 2, router 2; C: shard 3.

Before deploying, understand the shard key: a good shard key is critical. The shard key must be an index; data is split and distributed according to it, and sh.shardCollection creates the index automatically. A monotonically increasing shard key is poor for write distribution, because inserts always land on one shard until a threshold is reached and later writes spill onto other shards; range queries on such a key, however, are very efficient. A random shard key distributes data evenly. Try to avoid queries that span shards: for a query against all shards, mongos must merge-sort the results.

All of the services below run in the background, so they are started from config files.

1) Starting the config servers (3 on machine A, ports 20000, 21000, 22000).

A config server is an ordinary mongod process, so just start new instances. You must run either 1 or 3 config servers; running 2 fails with:

    BadValue need either 1 or 3 configdbs

/etc/mongod_20000.conf:

    # data directory
    dbpath = /usr/local/config/
    # log file
    logpath = /var/log/mongodb/mongodb_config.log
    # append to the log
    logappend = true
    # port
    port = 20000
    # max connections
    maxConns = 50
    pidfilepath = /var/run/mongo_20000.pid
    # journaling (redo log)
    journal = true
    # journal commit interval
    journalCommitInterval = 200
    # daemon mode
    fork = true
    # how often data is flushed to disk
    syncdelay = 60
    #storageEngine = wiredTiger
    # oplog size, in MB
    oplogSize = 1000
    # namespace file size: default 16 MB, max 2 GB
    nssize = 16
    noauth = true
    unixSocketPrefix = /tmp
    configsvr = true

/etc/mongod_21000.conf and /etc/mongod_22000.conf are identical except for:

    dbpath = /usr/local/config1/        (config2/ for 22000)
    logpath = /var/log/mongodb/mongodb_config1.log   (config2.log for 22000)
    port = 21000                        (22000)
    pidfilepath = /var/run/mongo_21000.pid           (mongo_22000.pid)

Start the config servers:

    root@mongo1:~# mongod -f /etc/mongod_20000.conf
    about to fork child process, waiting until server is ready for connections.
    forked process: 8545
    child process started successfully, parent exiting
    root@mongo1:~# mongod -f /etc/mongod_21000.conf
    about to fork child process, waiting until server is ready for connections.
    forked process: 8595
    child process started successfully, parent exiting

Start the third config server on port 22000 the same way.

2) Starting the routers (one each on A and B, port 30000).

The router stores no data; it only needs a log. /etc/mongod_30000.conf:

    # mongos
    logpath = /var/log/mongodb/mongodb_route.log
    logappend = true
    port = 30000
    maxConns = 100
    #bind_ip = 192.168.200.*,...
    pidfilepath = /var/run/mongo_30000.pid
    # must list 1 or 3 config servers
    configdb = 192.168.200.A:20000,192.168.200.A:21000,192.168.200.A:22000
    #configdb = 127.0.0.1:20000   # fails, see below
    fork = true

The most important parameter is configdb. Do not write the config-server addresses as localhost or 127.0.0.1; use addresses that the other shards can also reach, i.e. 192.168.200.A:20000/21000/22000. Otherwise addShard later reports:

    { "ok" : 0, "errmsg" : "can't use localhost as a shard since all shards need to communicate. either use all shards and configdbs in localhost or all in actual IPs host: 172.16.5.104:20000 isLocalHost:0" }

Start mongos:

    root@mongo1:~# mongos -f /etc/mongod_30000.conf
    2015-07-10T14:42:58.741+0800 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
    about to fork child process, waiting until server is ready for connections.
    forked process: 8965
    child process started successfully, parent exiting

3) Starting the shard servers: each is just an ordinary mongod process.

    root@mongo1:~# mongod -f /etc/mongod_40000.conf
    note: noprealloc may hurt performance in many applications
    about to fork child process, waiting until server is ready for connections.
    forked process: 9020
    child process started successfully, parent exiting

All services on machine A are now up:

    root@mongo1:~# ps -ef | grep mongo
    root    9020     1  0 14:47 ?  00:00:06 mongod -f /etc/mongod_40000.conf
    root    9990     1  0 15:14 ?  00:00:02 mongod -f /etc/mongod_20000.conf
    root   10004     1  0 15:14 ?  00:00:01 mongod -f /etc/mongod_21000.conf
    root   10076     1  0 15:20 ?  00:00:00 mongod -f /etc/mongod_22000.conf
    root   10096     1  0 15:20 ?  00:00:00 mongos -f /etc/mongod_30000.conf

Repeat on B (shard service plus router, same config files) and on C (shard service only). The config servers, routers, and shard servers are now all deployed.

III. Configuring sharding

The following commands are all run in the mongodb shell.

1) Add shards: sh.addShard("IP:Port")

Log in to the mongos router:

    root@mongo1:~# mongo --port=30000
    MongoDB shell version: 3.0.4
    connecting to: 127.0.0.1:30000/test
    mongos>

    mongos> sh.status()   # show cluster information
    --- Sharding Status ---
      sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("559f72470f93270ba60b26c6") }
      shards:
      balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours: No recent migrations
      databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    mongos> sh.addShard("192.168.200.A:40000")   # add a shard
    { "shardAdded" : "shard0000", "ok" : 1 }
    mongos> sh.addShard("192.168.200.B:40000")
    { "shardAdded" : "shard0001", "ok" : 1 }
    mongos> sh.addShard("192.168.200.C:40000")
    { "shardAdded" : "shard0002", "ok" : 1 }
    mongos> sh.status()   # check again
    --- Sharding Status ---
      ...
      shards:
        { "_id" : "shard0000", "host" : "192.168.200.A:40000" }
        { "_id" : "shard0001", "host" : "192.168.200.B:40000" }
        { "_id" : "shard0002", "host" : "192.168.200.C:40000" }
      ...

2) Enable sharding: sh.enableSharding("db") and sh.shardCollection("db.collection", {"key": 1})

    mongos> sh.enableSharding("dba")   # first enable sharding on the database
    { "ok" : 1 }
    mongos> sh.status()
    --- Sharding Status ---
      ...
      databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "test", "partitioned" : false, "primary" : "shard0000" }
        { "_id" : "dba", "partitioned" : true, "primary" : "shard0000" }
    mongos> sh.shardCollection("dba.account", {"name": 1})   # then shard the collection; "name" is the shard key
    { "collectionsharded" : "dba.account", "ok" : 1 }
    mongos> sh.status()
    --- Sharding Status ---
      ...
      databases:
        ...
        { "_id" : "dba", "partitioned" : true, "primary" : "shard0000" }
          dba.account
            shard key: { "name" : 1 }
            chunks:
              shard0000  1
            { "name" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 0)

Choose the shard key so that it splits well, spreads writes, and supports your queries. The chunk information above shows that sharding is now configured. If sh.status() prints "too many chunks to print, use verbose if you want to force print", use one of:

    mongos> sh.status({"verbose": 1})
    mongos> db.printShardingStatus("vvvv")
    mongos> printShardingStatus(db.getSisterDB("config"), 1)

IV. Testing

Write random data into the dba.account collection and check whether it spreads over the 3 shards. To confirm you are talking to a mongos, run db.runCommand({isdbgrid: 1}):

    mongos> db.runCommand({isdbgrid: 1})
    { "isdbgrid" : 1, "hostname" : "mongo3c", "ok" : 1 }

A Python script writes 100,000 records through each of the two mongos routers (A and B):

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # Random-write test for a MongoDB sharded cluster
    import pymongo
    import time
    from random import Random

    def random_str(randomlength=8):
        s = ''
        chars = 'AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789'
        length = len(chars) - 1
        random = Random()
        for i in range(randomlength):
            s += chars[random.randint(0, length)]
        return s

    def inc_data(conn):
        db = conn.dba
        collection = db.account
        for i in range(100000):
            string = random_str(15)
            collection.insert({"name": string, "age": 123 + i, "address": "hangzhou" + string})

    if __name__ == '__main__':
        conn = pymongo.MongoClient(host='192.168.200.A/B', port=30000)
        StartTime = time.time()
        print "===============$inc==============="
        print "StartTime : %s" % StartTime
        inc_data(conn)
        EndTime = time.time()
        print "EndTime : %s" % EndTime
        CostTime = round(EndTime - StartTime)
        print "CostTime : %s" % CostTime

Check the distribution with db.collection.stats():

    mongos> db.account.stats()   # how the collection is distributed
    ......
      "shards" : {
        "shard0000" : { "ns" : "dba.account", "count" : 89710, "size" : 10047520, ...... },
        "shard0001" : { "ns" : "dba.account", "count" : 19273, "size" : 2158576, ...... },
        "shard0002" : { "ns" : "dba.account", "count" : 91017, "size" : 10193904, ...... }
    ......

Every shard holds part of the data (see the count fields), so sharding works. The MongoDB sharded cluster is up.
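The tutorial above notes that for a sorted query spanning all shards, mongos merge-sorts the per-shard results. A minimal sketch of that scatter-gather step (my own illustration; the shard contents and names are made up): each shard returns its batch already sorted by the sort key, and a k-way merge, here Python's heapq.merge, combines the streams without re-sorting everything.

```python
import heapq

# sorted result batches as they might come back from the three shards
shard0 = [{"name": "Alice"}, {"name": "Mallory"}]
shard1 = [{"name": "Bob"}, {"name": "Zoe"}]
shard2 = [{"name": "Carol"}, {"name": "Dave"}]

# k-way merge of already-sorted streams, keyed on the sort field
merged = list(heapq.merge(shard0, shard1, shard2, key=lambda d: d["name"]))
print([d["name"] for d in merged])
# prints: ['Alice', 'Bob', 'Carol', 'Dave', 'Mallory', 'Zoe']
```

This is why cross-shard sorted queries cost more than targeted ones: every shard does its own sort, and mongos still has to hold and merge all the cursors.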
