This MGR + ProxySQL MySQL cluster architecture is intended to solve the problem that the current single-node database environment of the DMS system cannot sustain high concurrency: the database frequently blocks, which makes the system lag. The build can be scheduled during the early-morning off-peak window; the estimated build time is 3 hours.
MySQL is one of the most popular open-source relational databases, with full transaction support, standard SQL, and cross-platform deployment. ProxySQL is an open-source middleware product built for MySQL: a flexible MySQL proxy layer that provides read/write splitting, query routing, per-statement query caching, dynamic configuration loading (no ProxySQL service restart required), failover, and SQL filtering.
| No. | Software | Version |
| --- | --- | --- |
| 2 | MySQL | 5.7.28 |
| 3 | ProxySQL | 2.0.12 |
This deployment adds 2 new hosts to the production environment; the required specifications are listed below:
| No. | OS | CPU | Memory | Disk | Software | Quantity | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | CentOS 7.4 | 32c | 128G | 1T | ProxySQL | 1 | |
| 2 | CentOS 7.4 | 32c | 128G | 2T | MySQL | 1 | |
Ports 6032 and 6033 must be opened on the ProxySQL host for application access and monitoring.
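If firewalld is enabled on the CentOS 7 host, the two ports can be opened like this (a minimal sketch; the zone and any additional source restrictions are environment-specific assumptions):
firewall-cmd --permanent --add-port=6032/tcp
firewall-cmd --permanent --add-port=6033/tcp
firewall-cmd --reload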
Environment check: verify that the hardware matches the requested specifications and that the software versions are correct.
| Host | CPU | Disk | Memory | Network | Software Version |
| --- | --- | --- | --- | --- | --- |
| NODE | OK | OK | OK | OK | OK |
The MGR cluster provides database replication and high availability. ProxySQL provides the access point for applications, performs read/write splitting against the MGR cluster, checks cluster state, and handles failover.
First install MySQL on the newly added server, transfer a backup of the DMS primary database to it, and set up master/slave replication so that the data on all three database servers stays consistent, in preparation for building MGR.
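A minimal sketch of that preparation step, assuming mysqldump is used for the backup, that 11.11.11.156 is the current DMS primary, that 11.11.11.244 is the new server, and that the repl account shown later in the MGR section already exists; file paths, credentials, and the binlog coordinates are placeholders to adapt:
# on the current DMS primary: take a consistent full dump with binlog coordinates
/usr/local/mysql/bin/mysqldump -uroot -p --all-databases --single-transaction --master-data=2 --routines --triggers --events > /tmp/dms_full.sql
# copy the dump to the new server and import it there
scp /tmp/dms_full.sql 11.11.11.244:/tmp/
/usr/local/mysql/bin/mysql -uroot -p < /tmp/dms_full.sql
# on the new server: replicate from the current primary until all three servers are in sync
# (take MASTER_LOG_FILE / MASTER_LOG_POS from the CHANGE MASTER line near the top of the dump)
CHANGE MASTER TO MASTER_HOST='11.11.11.156', MASTER_USER='repl', MASTER_PASSWORD='Rest!101', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;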
my.cnf on node 156 (mgr_node1):
server-id = 156  # set according to the server IP
log-bin = /usr/local/mysql/log/mysql-bin.log  # enable binlog
###MGR setting####
master_info_repository=TABLE
relay_log_info_repository=TABLE
slave_preserve_commit_order=ON
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=0f4ecd37-afa1-11ea-80a8-00505683effffd
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="11.11.11.156:33061"
loose-group_replication_group_seeds="11.11.11.156:33061,11.11.11.157:33062,11.11.11.244:33063"
loose-group_replication_ip_whitelist="11.11.11.156,11.11.11.157,11.11.11.244"
loose-group_replication_bootstrap_group= off
loose-group_replication_single_primary_mode=true
loose-group_replication_enforce_update_everywhere_checks=false
my.cnf on node 157 (mgr_node2):
server-id = 157  # set according to the server IP
log-bin = /usr/local/mysql/log/mysql-bin.log  # enable binlog
###MGR setting####
master_info_repository=TABLE
relay_log_info_repository=TABLE
slave_preserve_commit_order=ON
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=0f4ecd37-afa1-11ea-80a8-00505683effffd
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="11.11.11.157:33062"
loose-group_replication_group_seeds="11.11.11.156:33061,11.11.11.157:33062,11.11.11.244:33063"
loose-group_replication_ip_whitelist="11.11.11.156,11.11.11.157,11.11.11.244"
loose-group_replication_bootstrap_group= off
loose-group_replication_single_primary_mode=true
loose-group_replication_enforce_update_everywhere_checks=false
my.cnf on node 244 (mgr_node3):
server-id = 244  # set according to the server IP
log-bin = /usr/local/mysql/log/mysql-bin.log  # enable binlog
###MGR setting####
master_info_repository=TABLE
relay_log_info_repository=TABLE
slave_preserve_commit_order=ON
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name=0f4ecd37-afa1-11ea-80a8-00505683effffd
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="11.11.11.244:33063"
loose-group_replication_group_seeds="11.11.11.156:33061,11.11.11.157:33062,11.11.11.244:33063"
loose-group_replication_ip_whitelist="11.11.11.156,11.11.11.157,11.11.11.244"
loose-group_replication_bootstrap_group= off
loose-group_replication_single_primary_mode=true
loose-group_replication_enforce_update_everywhere_checks=false
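The excerpts above list only the group-replication-specific options. MySQL 5.7 group replication also depends on the following base settings; they may already be present elsewhere in my.cnf and are repeated here only as a checklist:
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_format=ROW
log_slave_updates=ON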
Change the hostnames (run the matching command on each host):
hostnamectl set-hostname mgr_node1   # on 11.11.11.156
hostnamectl set-hostname mgr_node2   # on 11.11.11.157
hostnamectl set-hostname mgr_node3   # on 11.11.11.244
hostnamectl set-hostname proxysql    # on the ProxySQL host
Edit /etc/hosts on every node and add:
11.11.11.156 mgr_node1
11.11.11.157 mgr_node2
11.11.11.244 mgr_node3
Start MGR
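The CHANGE MASTER statements below reference a replication account repl with password Rest!101. If it does not exist yet, a minimal sketch of creating it on each node (written without logging to the binlog so every server gets its own local copy):
SET SQL_LOG_BIN=0;
CREATE USER 'repl'@'%' IDENTIFIED BY 'Rest!101';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;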
Primary node:
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='Rest!101' FOR CHANNEL 'group_replication_recovery';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';  -- install the group replication plugin
# Setting group_replication_bootstrap_group to ON marks this server as the baseline the group is bootstrapped from; servers that join later do not need it.
SET GLOBAL group_replication_bootstrap_group=ON;
Start group replication:
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
Secondary node 1:
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='Rest!101' FOR CHANNEL 'group_replication_recovery';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL group_replication_allow_local_disjoint_gtids_join=ON;
START GROUP_REPLICATION;
Secondary node 2:
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='Rest!101' FOR CHANNEL 'group_replication_recovery';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET GLOBAL group_replication_allow_local_disjoint_gtids_join=ON;
START GROUP_REPLICATION;
Verify that the cluster was built successfully
# query the group members
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+----------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                              | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+----------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 38ed8610-aca0-11ea-8482-00505683effffd | mgr_node1   | 3306        | ONLINE       |
| group_replication_applier | 9d5c531c-b075-11ea-9d27-005056839787   | mgr_node2   | 3306        | ONLINE       |
| group_replication_applier | dc4bd6bb-b076-11ea-96f6-005056835c02   | mgr_node3   | 3306        | ONLINE       |
+---------------------------+----------------------------------------+-------------+-------------+--------------+
# query the primary member
SELECT variable_value FROM performance_schema.global_status WHERE variable_name='group_replication_primary_member';
+----------------------------------------+
| variable_value                         |
+----------------------------------------+
| 38ed8610-aca0-11ea-8482-00505683effffd |
+----------------------------------------+
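The query above returns only the member UUID. To resolve it to a hostname, it can be combined with replication_group_members (a convenience query, not part of the original steps):
SELECT member_host, member_port
FROM performance_schema.replication_group_members
WHERE member_id = (SELECT variable_value FROM performance_schema.global_status
                   WHERE variable_name = 'group_replication_primary_member');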
Create the accounts ProxySQL needs on the MGR cluster (execute on the primary)
# ProxySQL monitoring account
CREATE USER 'monitor'@'%' IDENTIFIED BY 'Monitor@123';
GRANT ALL PRIVILEGES ON *.* TO 'monitor'@'%' WITH GRANT OPTION;
# ProxySQL application access account
CREATE USER 'proxysql'@'%' IDENTIFIED BY 'Proxysql@123';
GRANT ALL PRIVILEGES ON *.* TO 'proxysql'@'%' WITH GRANT OPTION;
Set up the yum repository and install ProxySQL
Configure the yum repository
cat <<'EOF' > /etc/yum.repos.d/proxysql.repo
[proxysql_repo]
name=ProxySQL YUM repository
baseurl=http://repo.proxysql.com/ProxySQL/proxysql-2.0.x/centos/$releasever
gpgcheck=1
gpgkey=http://repo.proxysql.com/ProxySQL/repo_pub_key
EOF
Install dependencies
yum -y install perl.x86_64
yum install -y libaio.x86_64
yum -y install net-tools.x86_64
yum install -y perl-DBD-MySQL
Install ProxySQL
yum install -y proxysql
Start ProxySQL
systemctl start proxysql
netstat -anlp | grep proxysql
Port 6032 is ProxySQL's administration port and 6033 is the port that serves application traffic.
The default ProxySQL admin username and password are both admin.
Configure ProxySQL: set up the access account and the monitoring account, and run the monitoring script on the MGR primary node.
Log in to the ProxySQL admin interface as administrator:
/usr/local/mysql/bin/mysql -uadmin -padmin -h 127.0.0.1 -P 6032
# configure the monitoring account
SET mysql-monitor_username='monitor';
SET mysql-monitor_password='Monitor@123';
# configure the default hostgroup mapping
INSERT INTO mysql_group_replication_hostgroups (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup, active, writer_is_also_reader) VALUES (10,20,30,40,1,1);
# configure the user (add the application-side account, proxysql, and place it in writer hostgroup 10)
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('proxysql','Proxysql@123',10);
The primary node is assigned to writer hostgroup 10 and the secondaries to read-only hostgroup 30:
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight, max_connections, max_replication_lag, comment) VALUES (10,'10.2.159.35',3306,1,3000,10,'mgr_node1');
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight, max_connections, max_replication_lag, comment) VALUES (30,'10.2.159.36',3306,2,3000,10,'mgr_node2');
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight, max_connections, max_replication_lag, comment) VALUES (30,'10.2.159.46',3306,2,3000,10,'mgr_node3');
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight, max_connections, max_replication_lag, comment) VALUES (30,'10.2.159.35',3306,1,3000,10,'mgr_node1');
Plan the read/write hostgroups, add the nodes, configure the read/write splitting rules, and check the node status.
# configure the read/write splitting rules
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2,1,'^SELECT',30,1);
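A quick sanity check of the rules that were just inserted, still in the ProxySQL admin session:
SELECT rule_id, active, match_digest, destination_hostgroup, apply FROM mysql_query_rules ORDER BY rule_id;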
SAVE persists the in-memory configuration to disk; LOAD pushes it to runtime so it takes effect:
SAVE MYSQL USERS TO DISK;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
SAVE MYSQL VARIABLES TO DISK;
SAVE ADMIN VARIABLES TO DISK;
LOAD MYSQL USERS TO RUNTIME;
LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
LOAD MYSQL VARIABLES TO RUNTIME;
LOAD ADMIN VARIABLES TO RUNTIME;
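After loading to runtime, the effective configuration can be confirmed from the runtime_ tables (a quick check, not part of the original procedure):
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers ORDER BY hostgroup_id;
SELECT username, default_hostgroup, active FROM runtime_mysql_users;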
Run the monitoring script on the MGR primary node:
USE sys;
DELIMITER $$
CREATE FUNCTION IFZERO(a INT, b INT)
RETURNS INT
DETERMINISTIC
RETURN IF(a = 0, b, a)$$
CREATE FUNCTION LOCATE2(needle TEXT(10000), haystack TEXT(10000), offset INT)
RETURNS INT
DETERMINISTIC
RETURN IFZERO(LOCATE(needle, haystack, offset), LENGTH(haystack) + 1)$$
CREATE FUNCTION GTID_NORMALIZE(g TEXT(10000))
RETURNS TEXT(10000)
DETERMINISTIC
RETURN GTID_SUBTRACT(g, '')$$
CREATE FUNCTION GTID_COUNT(gtid_set TEXT(10000))
RETURNS INT
DETERMINISTIC
BEGIN
  DECLARE result BIGINT DEFAULT 0;
  DECLARE colon_pos INT;
  DECLARE next_dash_pos INT;
  DECLARE next_colon_pos INT;
  DECLARE next_comma_pos INT;
  SET gtid_set = GTID_NORMALIZE(gtid_set);
  SET colon_pos = LOCATE2(':', gtid_set, 1);
  WHILE colon_pos != LENGTH(gtid_set) + 1 DO
    SET next_dash_pos = LOCATE2('-', gtid_set, colon_pos + 1);
    SET next_colon_pos = LOCATE2(':', gtid_set, colon_pos + 1);
    SET next_comma_pos = LOCATE2(',', gtid_set, colon_pos + 1);
    IF next_dash_pos < next_colon_pos AND next_dash_pos < next_comma_pos THEN
      SET result = result +
        SUBSTR(gtid_set, next_dash_pos + 1,
               LEAST(next_colon_pos, next_comma_pos) - (next_dash_pos + 1)) -
        SUBSTR(gtid_set, colon_pos + 1, next_dash_pos - (colon_pos + 1)) + 1;
    ELSE
      SET result = result + 1;
    END IF;
    SET colon_pos = next_colon_pos;
  END WHILE;
  RETURN result;
END$$
CREATE FUNCTION gr_applier_queue_length()
RETURNS INT
DETERMINISTIC
BEGIN
  RETURN (SELECT sys.gtid_count( GTID_SUBTRACT( (SELECT
    Received_transaction_set FROM performance_schema.replication_connection_status
    WHERE Channel_name = 'group_replication_applier' ), (SELECT
    @@global.GTID_EXECUTED) )));
END$$
CREATE FUNCTION gr_member_in_primary_partition()
RETURNS VARCHAR(3)
DETERMINISTIC
BEGIN
  RETURN (SELECT IF( MEMBER_STATE='ONLINE' AND ((SELECT COUNT(*) FROM
    performance_schema.replication_group_members WHERE MEMBER_STATE != 'ONLINE') >=
    ((SELECT COUNT(*) FROM performance_schema.replication_group_members)/2) = 0),
    'YES', 'NO' ) FROM performance_schema.replication_group_members JOIN
    performance_schema.replication_group_member_stats USING(member_id));
END$$
CREATE VIEW gr_member_routing_candidate_status AS SELECT
  sys.gr_member_in_primary_partition() as viable_candidate,
  IF( (SELECT (SELECT GROUP_CONCAT(variable_value) FROM
    performance_schema.global_variables WHERE variable_name IN ('read_only',
    'super_read_only')) != 'OFF,OFF'), 'YES', 'NO') as read_only,
  sys.gr_applier_queue_length() as transactions_behind,
  Count_Transactions_in_queue as transactions_to_cert
  FROM performance_schema.replication_group_member_stats$$
DELIMITER ;
Check the status of each node
# on a MySQL node:
SELECT * FROM sys.gr_member_routing_candidate_status;
# in the ProxySQL admin interface:
SELECT hostname, port, viable_candidate, read_only, transactions_behind, error FROM mysql_server_group_replication_log ORDER BY time_start_us DESC LIMIT 6;
SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 10;
SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 10;
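The same view can also be checked from the ProxySQL host with the monitor account, which is what ProxySQL's monitor module uses; the node IP here is an assumption, substitute any MGR member:
/usr/local/mysql/bin/mysql -umonitor -pMonitor@123 -h11.11.11.156 -P3306 -e "SELECT * FROM sys.gr_member_routing_candidate_status;"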
On the application side, verify that read/write splitting works and that MGR failover works.
Test read load balancing:
for i in `seq 1 10`; do /usr/local/mysql/bin/mysql -uproxysql -pProxysql@123 -h127.0.0.1 -P6033 -e "select * from performance_schema.global_variables where variable_name='server_id';"; done | grep server
for i in `seq 1 10`; do /usr/local/mysql/bin/mysql -uproxysql -pProxysql@123 -h10.2.159.47 -P6033 -e "select * from performance_schema.global_variables where variable_name='server_id';"; done | grep server
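To exercise failover, a minimal sketch, assuming the MySQL service on the current primary is managed by systemd as mysqld and that failover_test is a throwaway database name: stop the primary, confirm a new primary is elected, and confirm a write through port 6033 still succeeds.
# on the current primary node: simulate a failure
systemctl stop mysqld
# on any surviving node: the remaining members should stay ONLINE and one of them becomes the new primary
/usr/local/mysql/bin/mysql -uroot -p -e "SELECT member_host, member_state FROM performance_schema.replication_group_members;"
# through ProxySQL: a write should now be routed to the new primary
/usr/local/mysql/bin/mysql -uproxysql -pProxysql@123 -h127.0.0.1 -P6033 -e "CREATE DATABASE IF NOT EXISTS failover_test;"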
Verify that all system functions work normally.
If problems occur during the cluster build, or if system functions are unavailable after the build, the application can keep using its connection to the original database, or be switched back to the original database connection, which provides a safe rollback path if the architecture change fails.