
[Experience Share] MySQL Cluster Architecture: MHA in Practice


MHA (Master High Availability) is currently one of the more mature solutions for MySQL high availability. It was developed by youshimaton of Japan's DeNA (now at Facebook) and provides failover and master promotion for MySQL high-availability environments. During a failover MHA can complete the switchover automatically within 0-30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in the true sense.

The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can run on a dedicated machine and manage several master-slave clusters, or it can run on one of the slave nodes. MHA Node runs on every MySQL server. The Manager periodically probes the master of the cluster; when the master fails, it promotes the slave holding the most recent data to be the new master and re-points all other slaves at it. The entire failover is completely transparent to applications.

During an automatic failover MHA tries to save the binary logs from the crashed master to avoid losing data, but this is not always possible. For example, if the master's hardware has failed or it is unreachable over SSH, MHA cannot save the binary logs and performs the failover anyway, losing the most recent transactions. Using the semi-synchronous replication available since MySQL 5.5 greatly reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves and so keep every node consistent (a minimal example of enabling semi-synchronous replication follows below).

MHA mainly supports a one-master, multiple-slave topology. To build MHA, a replication cluster needs at least three database servers: one master, one standby master and one additional slave. Because of this three-server minimum, Taobao modified MHA for cost reasons; their TMHA now supports one master with a single slave.

Official project page: https://code.google.com/p/mysql-master-ha/
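As a side note on the semi-synchronous replication mentioned above: on MySQL 5.5 it is provided by bundled plugins, and a minimal sketch of enabling it looks like the following (make the settings persistent in my.cnf afterwards):

On the master:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;

On each slave:
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave io_thread; start slave io_thread;    -- reconnect so the setting takes effect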
The figure below shows how a single MHA Manager can manage multiple groups of master-slave replication, and summarizes how MHA works.
[Figure: MHA Manager managing multiple master-slave replication groups]

The environment for this walkthrough is planned as follows (CentOS 6.7):
[Figure: host plan. From the configuration used below: 172.16.80.117 = master, 172.16.80.127 = slave / candidate master, 172.16.80.128 = slave (no_master), VIP 172.16.80.200, MySQL 5.5.49, MHA 0.56]

1. Set up passwordless SSH trust among the three servers
ssh-keygen -t rsa    # just press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c7:2e:ca:e2:c2:3b:30:63:97:b4:62:81:dd:27:e3:f9 root@centos02
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|.. .             |
|....+ .  .       |
|  o.o=  S o      |
|++ +o    o       |
|o=o  .  . .      |
|  + ..E. .       |
|  .=..o          |
+-----------------+

[iyunv@ansible mysql]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.16.80.117
[iyunv@ansible mysql]#ssh-copy-id  -i /root/.ssh/id_rsa.pub root@172.16.80.128
[iyunv@ansible mysql]# ssh-copy-id -i /root/.ssh/id_rsa.pub 172.16.80.127
The authenticity of host '172.16.80.127 (172.16.80.127)' can't be established.
RSA key fingerprint is 05:89:5e:3d:2a:c1:ae:90:27:d9:a5:48:4a:ab:b9:79.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.80.127' (RSA) to the list of known hosts.
root@172.16.80.127's password:
Now try logging into the machine, with "ssh '172.16.80.127'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

Test:
[iyunv@ansible mysql]# ssh 172.16.80.117 ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:45:FE:30  
          inet addr:172.16.80.117  Bcast:172.16.80.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe45:fe30/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1220176 errors:0 dropped:0 overruns:0 frame:0
          TX packets:980887 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1198343068 (1.1 GiB)  TX bytes:1318688106 (1.2 GiB)

[iyunv@ansible mysql]# ssh 172.16.80.127 ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:FF:58:D9  
          inet addr:172.16.80.127  Bcast:172.16.80.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feff:58d9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:162129 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27546 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:225287420 (214.8 MiB)  TX bytes:1921228 (1.8 MiB)


2. Configure the EPEL yum repository on all three nodes and install the required dependencies
rpm -Uvh
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
yum -y install perl-DBD-MySQL ncftp


Grant the required privileges on all three MySQL nodes:
mysql>  grant replication slave  on *.* to 'martin'@'172.16.80.%' identified by '123456';
Query OK, 0 rows affected (0.05 sec)

mysql> grant all on *.* to 'root'@'172.16.80.%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)




Check the binary log status on the master node:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.01 sec)



3. Run the following on both slave nodes:
change master to
master_host='172.16.80.117',
master_user='martin',
master_password='123456',
master_log_file='mysql-bin.000001',
master_log_pos=107;

mysql> start slave;
Query OK, 0 rows affected (0.06 sec)

mysql> show slave status\G
Replication on both slaves should now be running normally.
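The key fields to confirm in that output are the two replication threads; on a healthy slave in this setup they should look roughly like this (excerpt, other fields omitted):

             Slave_IO_State: Waiting for master to send event
                Master_Host: 172.16.80.117
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
      Seconds_Behind_Master: 0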




4. Install the MHA software. MHA can be installed either from source or from rpm packages; the rpm installation goes as follows:
1) Install the MHA node package on all three nodes:
[iyunv@ansible tools]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
Preparing...########################################### [100%]   
1:mha4mysql-node ########################################### [100%]
2) Finally, install mha4mysql-manager on the Slave/MHA Manager node:
yum install perl-Parallel-ForkManager perl-Time-HiRes \
perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch \
perl-Config-IniFiles
[iyunv@ansible tools]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
Preparing...########################################### [100%]  
1:mha4mysql-manager ########################################### [100%]
[iyunv@ansible tools]# mkdir -p /etc/mha/scripts
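Before running the checks below, the manager needs its configuration file /etc/mha/app1.cnf (the logs also show a global /etc/masterha_default.cnf being read). The original post does not reproduce the file, so the following is only a sketch consistent with the hosts, accounts, paths and scripts used throughout this walkthrough; adjust it to your environment:

[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/application/mysql/data
master_ip_failover_script=/etc/mha/scripts/master_ip_failover
user=root
password=123456
repl_user=martin
repl_password=123456
ssh_user=root
ping_interval=1

[server1]
hostname=172.16.80.117
port=3306

[server2]
hostname=172.16.80.127
port=3306
candidate_master=1

[server3]
hostname=172.16.80.128
port=3306
no_master=1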





Next, verify the setup. 1. Use masterha_check_ssh to confirm that SSH trust between the nodes works:
[iyunv@ansible scripts]# masterha_check_ssh  --conf=/etc/mha/app1.cnf
Thu Aug 11 19:29:03 2016 - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Aug 11 19:29:03 2016 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Aug 11 19:29:03 2016 - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Aug 11 19:29:03 2016 - [info] Starting SSH connection tests..
Thu Aug 11 19:29:04 2016 - [debug]
Thu Aug 11 19:29:03 2016 - [debug]  Connecting via SSH from root@172.16.80.117(172.16.80.117:22) to root@172.16.80.127(172.16.80.127:22)..
Thu Aug 11 19:29:03 2016 - [debug]   ok.
Thu Aug 11 19:29:03 2016 - [debug]  Connecting via SSH from root@172.16.80.117(172.16.80.117:22) to root@172.16.80.128(172.16.80.128:22)..
Thu Aug 11 19:29:04 2016 - [debug]   ok.
Thu Aug 11 19:29:04 2016 - [debug]
Thu Aug 11 19:29:03 2016 - [debug]  Connecting via SSH from root@172.16.80.127(172.16.80.127:22) to root@172.16.80.117(172.16.80.117:22)..
Thu Aug 11 19:29:04 2016 - [debug]   ok.
Thu Aug 11 19:29:04 2016 - [debug]  Connecting via SSH from root@172.16.80.127(172.16.80.127:22) to root@172.16.80.128(172.16.80.128:22)..
Thu Aug 11 19:29:04 2016 - [debug]   ok.
Thu Aug 11 19:29:04 2016 - [debug]
Thu Aug 11 19:29:04 2016 - [debug]  Connecting via SSH from root@172.16.80.128(172.16.80.128:22) to root@172.16.80.117(172.16.80.117:22)..
Thu Aug 11 19:29:04 2016 - [debug]   ok.
Thu Aug 11 19:29:04 2016 - [debug]  Connecting via SSH from root@172.16.80.128(172.16.80.128:22) to root@172.16.80.127(172.16.80.127:22)..
Thu Aug 11 19:29:04 2016 - [debug]   ok.
Thu Aug 11 19:29:04 2016 - [info] All SSH connection tests passed successfully.



2. Verify the MySQL replication setup with masterha_check_repl:

[iyunv@ansible scripts]# masterha_check_repl --conf=/etc/mha/app1.cnf
Thu Aug 11 19:31:53 2016 - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Aug 11 19:31:53 2016 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Thu Aug 11 19:31:53 2016 - [info] Reading server configuration from /etc/mha/app1.cnf..
Thu Aug 11 19:31:53 2016 - [info] MHA::MasterMonitor version 0.56.
Thu Aug 11 19:31:54 2016 - [info] GTID failover mode = 0
Thu Aug 11 19:31:54 2016 - [info] Dead Servers:
Thu Aug 11 19:31:54 2016 - [info] Alive Servers:
Thu Aug 11 19:31:54 2016 - [info]   172.16.80.117(172.16.80.117:3306)
Thu Aug 11 19:31:54 2016 - [info]   172.16.80.127(172.16.80.127:3306)
Thu Aug 11 19:31:54 2016 - [info]   172.16.80.128(172.16.80.128:3306)
Thu Aug 11 19:31:54 2016 - [info] Alive Slaves:
Thu Aug 11 19:31:54 2016 - [info]   172.16.80.127(172.16.80.127:3306)  Version=5.5.49-log (oldest major version between slaves) log-bin:enabled
Thu Aug 11 19:31:54 2016 - [info]     Replicating from 172.16.80.117(172.16.80.117:3306)
Thu Aug 11 19:31:54 2016 - [info]     Primary candidate for the new Master (candidate_master is set)
Thu Aug 11 19:31:54 2016 - [info]   172.16.80.128(172.16.80.128:3306)  Version=5.5.49-log (oldest major version between slaves) log-bin:enabled
Thu Aug 11 19:31:54 2016 - [info]     Replicating from 172.16.80.117(172.16.80.117:3306)
Thu Aug 11 19:31:54 2016 - [info]     Not candidate for the new Master (no_master is set)
Thu Aug 11 19:31:54 2016 - [info] Current Alive Master: 172.16.80.117(172.16.80.117:3306)
Thu Aug 11 19:31:54 2016 - [info] Checking slave configurations..
Thu Aug 11 19:31:54 2016 - [warning]  relay_log_purge=0 is not set on slave 172.16.80.127(172.16.80.127:3306).
Thu Aug 11 19:31:54 2016 - [warning]  relay_log_purge=0 is not set on slave 172.16.80.128(172.16.80.128:3306).
Thu Aug 11 19:31:54 2016 - [info] Checking replication filtering settings..
Thu Aug 11 19:31:54 2016 - [info]  binlog_do_db= , binlog_ignore_db=
Thu Aug 11 19:31:54 2016 - [info]  Replication filtering check ok.
Thu Aug 11 19:31:54 2016 - [info] GTID (with auto-pos) is not supported
Thu Aug 11 19:31:54 2016 - [info] Starting SSH connection tests..
Thu Aug 11 19:31:56 2016 - [info] All SSH connection tests passed successfully.
Thu Aug 11 19:31:56 2016 - [info] Checking MHA Node version..
Thu Aug 11 19:31:56 2016 - [info]  Version check ok.
Thu Aug 11 19:31:56 2016 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 11 19:31:57 2016 - [info] HealthCheck: SSH to 172.16.80.117 is reachable.
Thu Aug 11 19:31:57 2016 - [info] Master MHA Node version is 0.56.
Thu Aug 11 19:31:57 2016 - [info] Checking recovery script configurations on 172.16.80.117(172.16.80.117:3306)..
Thu Aug 11 19:31:57 2016 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/application/mysql/data --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000001
Thu Aug 11 19:31:57 2016 - [info]   Connecting to root@172.16.80.117(172.16.80.117:22)..
  Creating /var/tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /application/mysql/data, up to mysql-bin.000001
Thu Aug 11 19:31:57 2016 - [info] Binlog setting check done.
Thu Aug 11 19:31:57 2016 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Aug 11 19:31:57 2016 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=172.16.80.127 --slave_ip=172.16.80.127 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.49-log --manager_version=0.56 --relay_log_info=/application/mysql/data/relay-log.info  --relay_dir=/application/mysql/data/  --slave_pass=xxx
Thu Aug 11 19:31:57 2016 - [info]   Connecting to root@172.16.80.127(172.16.80.127:22)..
  Checking slave recovery environment settings..
    Opening /application/mysql/data/relay-log.info ... ok.
    Relay log found at /application/mysql/data, up to mysql-relay-bin.000002
    Temporary relay log file is /application/mysql/data/mysql-relay-bin.000002
    Testing mysql connection and privileges.. done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Thu Aug 11 19:31:58 2016 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=172.16.80.128 --slave_ip=172.16.80.128 --slave_port=3306 --workdir=/var/tmp --target_version=5.5.49-log --manager_version=0.56 --relay_log_info=/application/mysql/data/relay-log.info  --relay_dir=/application/mysql/data/  --slave_pass=xxx
Thu Aug 11 19:31:58 2016 - [info]   Connecting to root@172.16.80.128(172.16.80.128:22)..
  Checking slave recovery environment settings..
    Opening /application/mysql/data/relay-log.info ... ok.
    Relay log found at /application/mysql/data, up to mysql-relay-bin.000002
    Temporary relay log file is /application/mysql/data/mysql-relay-bin.000002
    Testing mysql connection and privileges.. done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Thu Aug 11 19:31:58 2016 - [info] Slaves settings check done.
Thu Aug 11 19:31:58 2016 - [info]
172.16.80.117(172.16.80.117:3306) (current master)
+--172.16.80.127(172.16.80.127:3306)
+--172.16.80.128(172.16.80.128:3306)

Thu Aug 11 19:31:58 2016 - [info] Checking replication health on 172.16.80.127..
Thu Aug 11 19:31:58 2016 - [info]  ok.
Thu Aug 11 19:31:58 2016 - [info] Checking replication health on 172.16.80.128..
Thu Aug 11 19:31:58 2016 - [info]  ok.
Thu Aug 11 19:31:58 2016 - [info] Checking master_ip_failover_script status:
Thu Aug 11 19:31:58 2016 - [info]   /etc/mha/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=172.16.80.117 --orig_master_ip=172.16.80.117 --orig_master_port=3306


IN SCRIPT TEST====/sbin/ifconfig eth0:1 down==/sbin/ifconfig eth0:1 172.16.80.200/24===

Checking the Status of the script.. OK
Thu Aug 11 19:31:58 2016 - [info]  OK.
Thu Aug 11 19:31:58 2016 - [warning] shutdown_script is not defined.
Thu Aug 11 19:31:58 2016 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.


Prepare the failover script used for the VIP switch:

[iyunv@ansible ~]# cat /etc/mha/scripts/master_ip_failover
#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '172.16.80.200/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
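# The four variables above define the VIP and the ifconfig commands that attach it to /
# detach it from the eth0:1 alias on the new/old master; adjust $vip, $key and the
# interface name to match your own network.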

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
     return 0  unless  ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}



Start MHA
First, run the following once on the current master to bind the VIP to it (afterwards the failover script moves the VIP automatically):
/sbin/ifconfig eth0:1 172.16.80.200/24
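A quick sanity check (assuming the interface is eth0, as elsewhere in this post) is to list the alias interface on the master:
/sbin/ifconfig eth0:1    # should report inet addr:172.16.80.200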

Then start the MHA monitor with masterha_manager. The --remove_dead_master_conf option removes the dead master's entry from app1.cnf after a failover, and --ignore_last_failover lets the manager start even if a recent failover marker (app1.failover.complete) exists:
[iyunv@ansible scripts]# mkdir -p /var/log/masterha/app1
[iyunv@ansible scripts]# touch /var/log/masterha/app1/manager.log

[iyunv@ansible scripts]# nohup masterha_manager --conf=/etc/mha/app1.cnf \
--remove_dead_master_conf --ignore_last_failover < /dev/null > \
/var/log/masterha/app1/manager.log 2>&1 &


Then check the MHA status with masterha_check_status:
[iyunv@ansible scripts]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:58184) is running(0:PING_OK), master:172.16.80.117

Simulate a failure of the master 172.16.80.117 by stopping MySQL on it:
[iyunv@centos02 .ssh]# /etc/init.d/mysqld stop
Shutting down MySQL................               [  OK  ]



[Screenshot: show slave status on the surviving slave, now replicating from 172.16.80.127]

The surviving slave has automatically been re-pointed to the new master, 172.16.80.127.
After the switchover, note the following changes:
1. The VIP moves from the old master to the new master, and the monitoring process on the manager node exits.
2. An app1.failover.complete file appears in the log directory (/var/log/masterha/app1).
3. The old master's entry is removed from the /etc/mha/app1.cnf configuration file.
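To bring the failed 172.16.80.117 back into the cluster afterwards, the usual approach (a sketch, not shown in the original post) is to restart MySQL on it, re-point it at the new master, restore its section in app1.cnf, and start the manager again:

[iyunv@centos02 ~]# /etc/init.d/mysqld start
mysql> change master to
master_host='172.16.80.127',
master_user='martin',
master_password='123456',
master_log_file='...',    -- take file/position from the new master or from the MHA manager log
master_log_pos=...;
mysql> start slave;

Then restore the old master's [server] section in /etc/mha/app1.cnf, remove /var/log/masterha/app1/app1.failover.complete (or keep --ignore_last_failover), and start masterha_manager again as above.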




