
[Database] MySQL MMM high-availability architecture design

Project overview: set up master-slave and master-master replication, install the Perl modules, install and configure the MMM software, then test.

Hardware environment: 4 virtual machines

Software environment: RHEL 6.5, MySQL 5.6.26, percona-xtrabackup-2.3.4, mysql-mmm-2.2.1

Implementation steps:

Set up master-slave and master-master replication:

1. Prepare four hosts with MySQL installed: m1, m2, slave, and deve (deve acts as the client for testing).

2. Edit the parameter file (my.cnf) on m1, m2, and slave:

[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
socket=/tmp/mysql.sock
pid-file=/var/run/mysqld/mysql.pid
user=mysql

server-id=3                # must be unique on each host
log_bin = mysqlserver
binlog_format=row

[mysql]
socket=/tmp/mysql.sock


On m1 and m2, also append the following to the [mysqld] section:

auto_increment_increment=2   # step by 2 so the two masters generate disjoint ID sequences
auto_increment_offset=1      # 1 on m1, 2 on m2
log_slave_updates=1
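The interplay of the two auto-increment settings is easy to get backwards: MySQL hands out values offset, offset+increment, offset+2·increment, …, so for two masters the usual choice is auto_increment_increment=2 with auto_increment_offset=1 on m1 and 2 on m2, which keeps the sequences disjoint. A small sketch (plain shell arithmetic, no MySQL involved) shows the effect:

```shell
#!/bin/sh
# MySQL generates auto-increment values as: offset, offset+increment, offset+2*increment, ...
# With increment=2, m1 (offset=1) takes the odd IDs and m2 (offset=2) the even ones,
# so rows inserted on either master never collide on the primary key.
gen_ids() {                 # gen_ids <offset> <increment> <count>
    offset=$1; increment=$2; count=$3
    i=0; out=$offset
    while [ "$i" -lt $((count - 1)) ]; do
        i=$((i + 1))
        out="$out $((offset + i * increment))"
    done
    echo "$out"
}

echo "m1 (offset=1): $(gen_ids 1 2 5)"   # → m1 (offset=1): 1 3 5 7 9
echo "m2 (offset=2): $(gen_ids 2 2 5)"   # → m2 (offset=2): 2 4 6 8 10
```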

3. Restart MySQL:

service mysqld stop
netstat -auplnt | grep 3306      # confirm nothing is listening on 3306 anymore
service mysqld start
grep -v "#" /etc/my.cnf          # view the config file with comment lines filtered out

4. Back up the master with xtrabackup

4.1 Install the tools. xtrabackup must be installed on m1, m2, and slave:

# yum install cmake gcc gcc-c++ libaio libaio-devel automake autoconf bison libtool ncurses-devel libgcrypt-devel libev-devel libcurl-devel vim-common -y

The following two packages have to be installed separately:

# rpm -ivh libev4-4.15-7.1.x86_64.rpm
# rpm -ivh libev-devel-4.15-21.1.x86_64.rpm

# tar xf percona-xtrabackup-2.3.4.tar.gz
# cd percona-xtrabackup-2.3.4
# cmake -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF && make -j4
# make install

Add the binaries to PATH by appending this line to the end of /etc/profile:

export PATH="$PATH:/usr/local/xtrabackup/bin"

# source /etc/profile
# echo $PATH
/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin:/usr/local/xtrabackup/bin

# innobackupex --help      # verify the installation succeeded

Create the backup directory:

# mkdir /backup/
# chown -R mysql:mysql /backup

Note: the three servers must have the same time (e.g. date -s '2016/06/30 16:05') and identical parameter files, except that server-id must differ on each.
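Before moving on, a quick sanity check script can confirm that everything this guide relies on is actually reachable on PATH (a sketch; the binary names are the ones used above):

```shell
#!/bin/sh
# Check that each required binary is reachable on PATH; print MISSING for any that
# are not, and return non-zero if anything is missing.
check_bins() {
    missing=0
    for bin in "$@"; do
        if command -v "$bin" >/dev/null 2>&1; then
            echo "ok:      $bin"
        else
            echo "MISSING: $bin"
            missing=1
        fi
    done
    return $missing
}

# The tools the following steps rely on (run on m1, m2 and slave after installing):
check_bins innobackupex mysql scp || echo "install the missing tools before continuing"
```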

4.2 Backup and restore

Back up the master m1:

# innobackupex --user=root --password=123123 --socket=/var/run/mysqld/mysql.sock --no-timestamp /backup/
# innobackupex --apply-log /backup/

On m1, create the replication user:

mysql> create user 'repl'@'%' identified by 'repl';
mysql> grant replication slave on *.* to 'root'@'%';

Copy the backup to the standby with scp:

# scp -r * 172.16.145.38:/backup/

On m2 and slave: shut down MySQL. If the standby already holds a database, clean it out first:

- the contents of the datadir        // delete
- the sock and pid files             // delete; with the database down these two are usually already gone
- the binlog files                   // best deleted as well

Restore on the standby:

# innobackupex --move-back /backup/

Fix ownership:

[root@mysql5 data]# chown -R mysql:mysql .

Start the standby without replication:

# mysqld_safe --defaults-file=/etc/my.cnf --skip-slave-start &
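The backup, copy, restore sequence above is easy to get out of order under pressure. A thin wrapper keeps the m1-side half of it straight (a sketch, not part of the original guide; host, password and paths are the values used here, and DRY_RUN=1, the default, only prints what would run, since the real commands need a live MySQL and the xtrabackup binaries):

```shell
#!/bin/sh
# Sketch of the m1-side backup-and-ship sequence from this guide.
# DRY_RUN=1 (default) prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
BACKUP_DIR=/backup
STANDBY=172.16.145.38

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run innobackupex --user=root --password=123123 \
    --socket=/var/run/mysqld/mysql.sock --no-timestamp "$BACKUP_DIR"
run innobackupex --apply-log "$BACKUP_DIR"            # make the backup consistent
run scp -r "$BACKUP_DIR" "root@$STANDBY:$BACKUP_DIR"  # ship it to the standby
```

Set DRY_RUN=0 only once the printed commands look right for your environment.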

5. Set up master-slave replication

Find the binlog position recorded in the backup:

# cat xtrabackup_info

On m2 and slave, run:

mysql> change master to master_host='172.16.145.37', master_port=3306, master_user='root', master_password='123123', master_log_file='mysqlserver.000004', master_log_pos=1454;

Start replication:

mysql> start slave;

Check the replica status; Last_IO_Error must be empty:

mysql> show slave status\G
       Slave_IO_Running: Yes
      Slave_SQL_Running: Yes

Verify: create a table on the master and check that it shows up on the replica. Master-slave replication plus backup and restore are now complete.
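Copying the file/position pair out of xtrabackup_info by eye invites typos. The line of interest looks roughly like binlog_pos = filename 'mysqlserver.000004', position '1454' (the exact wording varies slightly across xtrabackup versions, so check yours); a short sed sketch extracts both values:

```shell
#!/bin/sh
# Extract master_log_file and master_log_pos from an xtrabackup_info file.
# Assumes the 2.3-style line: binlog_pos = filename 'X', position 'N'
binlog_file() { sed -n "s/^binlog_pos = filename '\([^']*\)'.*/\1/p" "$1"; }
binlog_pos()  { sed -n "s/.*position '\([^']*\)'.*/\1/p" "$1"; }

# Demo against a sample file with the values from this guide:
cat > /tmp/xtrabackup_info.sample <<'EOF'
binlog_pos = filename 'mysqlserver.000004', position '1454'
EOF
echo "CHANGE MASTER TO master_log_file='$(binlog_file /tmp/xtrabackup_info.sample)', master_log_pos=$(binlog_pos /tmp/xtrabackup_info.sample);"
```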

6. Set up master-master replication

On m1:

mysql> stop slave;
mysql> CHANGE MASTER TO master_host='172.16.145.38', master_port=3306, master_user='root', master_password='123123', master_log_file='mysqlserver.000003', master_log_pos=301;
mysql> start slave;

The IP address here is m2's; take the master_log_file and master_log_pos values straight from show master status on m2. Master-master replication is now in place.
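Rather than copying the two values from show master status by eye, they can be captured in one step (a sketch, not from the original guide; it assumes the default tab-separated batch output of mysql -e, with the data in row 2, and reuses this guide's root/123123 credentials):

```shell
#!/bin/sh
# Turn the first data row of `show master status` batch output into a
# CHANGE MASTER TO statement. Real use on m1, pointing at m2:
#   mysql -uroot -p123123 -e 'show master status' | make_change_master 172.16.145.38
make_change_master() {    # stdin: `show master status` output; $1: master host IP
    awk -v host="$1" -v q="'" 'NR == 2 {
        printf "CHANGE MASTER TO master_host=%s%s%s, master_port=3306, ", q, host, q
        printf "master_user=%sroot%s, master_password=%s123123%s, ", q, q, q, q
        printf "master_log_file=%s%s%s, master_log_pos=%s;\n", q, $1, q, $2
    }'
}

# Demo with the values captured on m2 in this guide:
printf 'File\tPosition\tBinlog_Do_DB\tBinlog_Ignore_DB\nmysqlserver.000003\t301\n' |
    make_change_master 172.16.145.38
```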

7. Set up MMM

7.1 Configure the network so m1, m2, and slave can talk to each other and reach the internet

Add a second NIC to the three servers, using NAT:

[root@mysql5 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp

Then restart the network service.

7.2 Install the Perl modules

The modules can be installed online; be sure to pick a mirror inside China, 163 being the best choice. perl -MCPAN -e shell drops you into the cpan shell, where the useful commands are:

o conf init      # reconfigure from scratch
o conf commit    # save the configuration

Installation steps. First generate the default cpan configuration:

# yum install cpan -y
# perl -MCPAN -e shell      # answer yes for automatic configuration; this writes cpan's default config
cpan[1]> o conf commit      # save the configuration
cpan[2]> exit

Then reconfigure it to use 163 as the mirror:

[root@mysql5 ~]# perl -MCPAN -e shell
cpan[1]> o conf init        # answer no, then press Enter through the prompts; when the mirror list
                            # (/root/.cpan/sources/MIRRORED.BY) comes up, choose Asia > China > 163
cpan[2]> o conf commit

Install the modules in the cpan shell (just press Enter whenever a prompt appears):

install YAML
install Algorithm::Diff
install Class::Singleton
install DBI
install DBD::mysql
install File::Temp
install Log::Dispatch
install Log::Log4perl
install Mail::Send
install Net::ARP
force install Net::Ping    # force installs without running the test suite
install Proc::Daemon
install Thread::Queue
install Time::HiRes

Before installing DBD::mysql on a source-compiled MySQL, register the MySQL client library with the dynamic linker (do this in a second window):

[root@mysql5 network-scripts]# cd /etc/ld.so.conf.d/
[root@mysql5 ld.so.conf.d]# ll
total 8
-r--r--r--. 1 root root 324 Nov 11 2013 kernel-2.6.32-431.el6.x86_64.conf
-rw-r--r--. 1 root root 17 Aug 9 2013 mysql-x86_64.conf
[root@mysql5 ld.so.conf.d]# vim mysql-x86_64.conf      # any file here works; append the line:
/usr/local/mysql/lib
[root@mysql5 ld.so.conf.d]# ldconfig                   # refresh the dynamic-library cache

Note: make sure every module really got installed; running install a second time shows whether it is present. Anything that refuses to install via cpan can be downloaded as a tar archive and built by hand.
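Since the same fourteen install commands have to be run on all three servers, keeping the list in one place helps; a sketch that prints the whole batch, ready to paste (or pipe) into the cpan shell:

```shell
#!/bin/sh
# The Perl modules mysql-mmm-2.2.1 needs, as listed above. Printing them as cpan
# commands lets you paste the batch into the cpan shell, or feed it directly:
#   print_cpan_installs | perl -MCPAN -e shell
print_cpan_installs() {
    for m in YAML Algorithm::Diff Class::Singleton DBI DBD::mysql \
             File::Temp Log::Dispatch Log::Log4perl Mail::Send Net::ARP \
             Proc::Daemon Thread::Queue Time::HiRes; do
        echo "install $m"
    done
    echo "force install Net::Ping"   # force: install without running the tests
}

print_cpan_installs
```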

7.3 Install and configure MMM

Install MMM:

# tar xf mysql-mmm-2.2.1.tar.gz
# cd mysql-mmm-2.2.1
# make install

Create the users on m1 (replication carries them to the other nodes):

mysql> grant replication client on *.* to 'mmm_monitor'@'172.16.145.%' identified by 'mmm_monitor';
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'172.16.145.%' identified by 'mmm_agent';
mysql> grant replication slave on *.* to 'repl'@'172.16.145.%' identified by 'repl';
mysql> grant all on *.* to 'root'@'172.16.145.%' identified by '123123';

Test connectivity from every server to confirm the users were set up correctly:

mysql -uroot -p123123 -h172.16.145.37
mysql -uroot -p123123 -h172.16.145.38
mysql -uroot -p123123 -h172.16.145.39

mysql -urepl -prepl -h172.16.145.37
mysql -urepl -prepl -h172.16.145.38
mysql -urepl -prepl -h172.16.145.39

mysql -ummm_agent -pmmm_agent -h172.16.145.37
mysql -ummm_agent -pmmm_agent -h172.16.145.38
mysql -ummm_agent -pmmm_agent -h172.16.145.39

Configure the MMM files. The following file is configured identically on m1, m2, and slave:

/etc/mysql-mmm/mmm_common.conf:

active_master_role      writer                  # the master's role: writes

<host default>
    cluster_interface       eth1                # NIC used for cluster communication
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        repl                # replication user
    replication_password    repl
    agent_user              mmm_agent           # agent communication user
    agent_password          mmm_agent
</host>

<host m1>                                       # host name
    ip      172.16.145.37
    mode    master                              # role
    peer    m2                                  # m1's peer master
</host>

<host m2>
    ip      172.16.145.38
    mode    master
    peer    m1
</host>

<host slave_mon>
    ip      172.16.145.39
    mode    slave
</host>

<role writer>                                   # writer role
    hosts   m1, m2
    ips     172.16.145.24                       # write VIP: same subnet, otherwise unused
    mode    exclusive
</role>

<role reader>
    hosts   m2, slave_mon
    ips     172.16.145.25, 172.16.145.26        # read VIPs
    mode    balanced
</role>

In /etc/mysql-mmm/mmm_agent.conf, set "this" on each server to its own host name (m1, m2, or slave_mon). Also map the host names on every server:

[root@m1 ~]# vim /etc/hosts
[root@m1 ~]# cat /etc/hosts
172.16.145.37 m1
172.16.145.38 m2
172.16.145.39 slave_mon
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
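Since the /etc/hosts entries have to land on every server, a small idempotent helper avoids duplicate lines when it is rerun (a sketch; the demo writes to /tmp/hosts.demo, use /etc/hosts on the real servers):

```shell
#!/bin/sh
# Append "ip name" to a hosts file only if the name isn't already mapped,
# so the script can be rerun safely on every server.
add_host() {     # add_host <file> <ip> <name>
    file=$1; ip=$2; name=$3
    if grep -qw "$name" "$file" 2>/dev/null; then
        echo "skip: $name already in $file"
    else
        echo "$ip $name" >> "$file"
    fi
}

# The three MMM hosts from this guide; start the demo file fresh:
rm -f /tmp/hosts.demo
for entry in "172.16.145.37 m1" "172.16.145.38 m2" "172.16.145.39 slave_mon"; do
    add_host /tmp/hosts.demo $entry      # unquoted: splits into ip and name
done
cat /tmp/hosts.demo
```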

The monitor runs on slave; configure it there:

# vim /etc/mysql-mmm/mmm_mon.conf      # adjust the IP addresses; mind the monitor user name and password

include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        172.16.145.37, 172.16.145.38, 172.16.145.39    # the m1, m2, and slave addresses
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>

debug 0

MMM configuration is complete.

 

8. Start and test

Start MMM and check its status. First make sure the MySQL slave threads are running on every server. Then start the agent on each server:

/etc/init.d/mysql-mmm-agent start

Start the monitor on the monitor server:

[root@slave_mon ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

From deve, log in through the read and write VIPs: create a table via the write VIP and query it via a read VIP. Then simulate a crash of m1 and watch the write VIP float over to m2.

# mmm_control help
Valid commands are:
    help                              - show this message
    ping                              - ping monitor
    show                              - show status
    checks [<host>|all [<check>|all]] - show checks status
    set_online <host>                 - set host <host> online
    set_offline <host>                - set host <host> offline
    mode                              - print current mode
    set_active                        - switch into active mode
    set_manual                        - switch into manual mode
    set_passive                       - switch into passive mode
    move_role [--force] <role> <host> - move exclusive role <role> to host <host>
                                        (Only use --force if you know what you are doing!)
    set_ip <ip> <host>                - set role with ip <ip> to host <host>
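When scripting the failover test, the current writer can be pulled out of mmm_control show. MMM 2.2 prints lines roughly like m1(172.16.145.37) master/ONLINE. Roles: writer(172.16.145.24); treat the parsing below as a sketch and adjust if your version formats the output differently:

```shell
#!/bin/sh
# Print the host currently holding the writer role, given `mmm_control show`
# output on stdin. Real use: mmm_control show | current_writer
current_writer() {
    awk '/writer\(/ { sub(/\(.*/, "", $1); print $1 }'
}

# Demo with sample output for the cluster in this guide:
sample='  m1(172.16.145.37) master/ONLINE. Roles: writer(172.16.145.24)
  m2(172.16.145.38) master/ONLINE. Roles: reader(172.16.145.25)
  slave_mon(172.16.145.39) slave/ONLINE. Roles: reader(172.16.145.26)'
echo "$sample" | current_writer    # → m1
```

Run before and after taking m1 down, it shows the writer role moving to m2.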