Tuesday, September 22, 2020

MySQL 8 Router - High Availability - with Pacemaker on CentOS 8

 
https://github.com/auntaru/vagrantfiles
https://www.youtube.com/watch?v=EVb_YhO3c7U
https://raw.githubusercontent.com/auntaru/vagrantfiles/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/provision.sh
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/innodb-cluster-status.js
https://github.com/auntaru/vagrantfiles/blob/master/MySQL-iNNoDB-CLuSTeR-pacemaker-RouTeR/Vagrantfile



GoaL of this article :
   MySQL Router shouldn't be a single point of failure between the application and the MySQL CLuSTeR.

On all nodes, install Pacemaker and configure some settings.
If the [HighAvailability] repository is not available on your CentOS 8 installation, run [# dnf upgrade centos-repos] first.

The first step is to install pacemaker on all the machines we will use for our “MySQL Router Cluster”:

# # enable [HighAvailability] repo and install (it's disabled by default)
[root@hyperv-primary-cos8 ~]# dnf --enablerepo=HighAvailability -y install pacemaker pcs
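
Alternatively, the repository can be enabled permanently so that later package operations don't need the --enablerepo flag. A minimal sketch, assuming the repository id is HighAvailability (as used above) and that dnf-plugins-core, which provides "dnf config-manager", is installed:

# enable the [HighAvailability] repo permanently
[root@hyperv-primary-cos8 ~]# dnf config-manager --set-enabled HighAvailability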

# Now we need to start the pcsd service and enable it at boot (on all machines):
[root@hyperv-primary-cos8 ~]# systemctl enable --now pcsd
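
If firewalld is active on the nodes, the cluster communication ports must also be opened. A minimal sketch, using firewalld's predefined high-availability service (run on all machines):

# open corosync / pacemaker / pcsd ports
[root@hyperv-primary-cos8 ~]# firewall-cmd --permanent --add-service=high-availability
[root@hyperv-primary-cos8 ~]# firewall-cmd --reload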

Set up authentication; this operation is again executed on all machines that are part of the MySQL Router Cluster:
# # set cluster admin password
[root@hyperv-primary-cos8 ~]# echo MyStrongPassw0rd | passwd --stdin hacluster

And on one of the nodes, we can now execute the following command:

On CentOS7 : 

[root@mysql1 ~]# pcs cluster auth mysql1 mysql2 mysql3 -u hacluster -p MyStrongPassw0rd --force
mysql1: Authorized
mysql2: Authorized
mysql3: Authorized

On CentOS8 :
On one node, configure the basic cluster settings.
# authorize among nodes
[root@hyperv-primary-cos8 ~]# pcs host auth mysql01 mysql02 mysql03 -u hacluster -p MyStrongPassw0rd
mysql02: Authorized
mysql01: Authorized
mysql03: Authorized
[root@hyperv-primary-cos8 ~]#


# configure / create pacemaker-corosync cluster :
[root@hyperv-primary-cos8 ~]# pcs cluster setup routercluster mysql01 mysql02 mysql03
No addresses specified for host 'mysql01', using 'mysql01'
No addresses specified for host 'mysql02', using 'mysql02'
No addresses specified for host 'mysql03', using 'mysql03'
Destroying cluster on hosts: 'mysql01', 'mysql02', 'mysql03'...
mysql02: Successfully destroyed cluster
mysql03: Successfully destroyed cluster
mysql01: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'mysql01', 'mysql02', 'mysql03'
mysql01: successful removal of the file 'pcsd settings'
mysql02: successful removal of the file 'pcsd settings'
mysql03: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync authkey'
mysql01: successful distribution of the file 'pacemaker authkey'
mysql02: successful distribution of the file 'corosync authkey'
mysql02: successful distribution of the file 'pacemaker authkey'
mysql03: successful distribution of the file 'corosync authkey'
mysql03: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'mysql01', 'mysql02', 'mysql03'
mysql01: successful distribution of the file 'corosync.conf'
mysql02: successful distribution of the file 'corosync.conf'
mysql03: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@hyperv-primary-cos8 ~]#


We can now start the cluster (running the following command on one node):
[root@hyperv-primary-cos8 ~]# pcs cluster start --all
mysql02: Starting Cluster...
mysql01: Starting Cluster...
mysql03: Starting Cluster...
[root@hyperv-primary-cos8 ~]#
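
Optionally, the cluster can also be enabled to start at boot on all nodes:

[root@hyperv-primary-cos8 ~]# pcs cluster enable --all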

[root@hyperv-primary-cos8 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
   * Last updated: Wed Sep 16 18:34:28 2020
   * Last change:  Wed Sep 16 18:19:10 2020 by hacluster via crmd on mysql01
   * 3 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ mysql01 mysql02 mysql03 ]

PCSD Status:
  mysql01: Online
  mysql02: Online
  mysql03: Online
[root@hyperv-primary-cos8 ~]#


Now we can set some properties on our cluster:

# disable fencing (acceptable for this lab setup; production clusters should configure STONITH)
[root@hyperv-primary-cos8 ~]# pcs property set stonith-enabled=false
# keep resources running even when quorum is lost
[root@hyperv-primary-cos8 ~]# pcs property set no-quorum-policy=ignore
# move a resource to another node after a single failure
[root@hyperv-primary-cos8 ~]# pcs resource defaults migration-threshold=1
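
To double-check what was just configured, the current property values and resource defaults can be listed:

[root@hyperv-primary-cos8 ~]# pcs property list
[root@hyperv-primary-cos8 ~]# pcs resource defaults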


Adding resources

VIP

The first resource we will use is the VIP (Virtual IP) that the application will use to connect to MySQL Router:

[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#




[root@hyperv-primary-cos8 ~]# pcs resource create Router_VIP ocf:heartbeat:IPaddr2 \
>     ip=192.168.0.11 cidr_netmask=24 op monitor interval=5s

[root@hyperv-primary-cos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:38:01:07 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.11/24 brd 192.168.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fc00::1:122/64 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::77b3:112:420b:3363/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@hyperv-primary-cos8 ~]#


Install MySQL Router on all nodes:

yum install -y mysql-router



Before adding MySQL Router as a resource, we need to be sure that the router is installed and bootstrapped on each node. As a reminder, this is how to proceed:


MySQL-RouteR : BooTsTraP

password="Cluster^123"
mysql -e "create user 'mysqlrouter' identified by '$password'"

# bootstrap against the cluster; --user is the local system account the router will run as
[root@hyperv-proxy-cos8 ~]# echo $password | mysqlrouter --bootstrap mycluster@mysql01 --user=mysqlrouter

# sanity check: a direct connection to one of the MySQL nodes
mysql --host=192.168.0.22 --port=3306 --user=mycluster -p"Cluster^123"
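
After the bootstrap, the generated configuration (by default /etc/mysqlrouter/mysqlrouter.conf) normally exposes classic protocol ports 6446 (read-write) and 6447 (read-only). A quick sketch to verify routing works before handing the service over to Pacemaker, assuming those default ports:

# start the router manually and test a read-write connection through it
[root@hyperv-proxy-cos8 ~]# systemctl start mysqlrouter
[root@hyperv-proxy-cos8 ~]# mysql --host=127.0.0.1 --port=6446 --user=mycluster -p"Cluster^123" -e "SELECT @@hostname"
# stop it again: Pacemaker will manage the service from now on
[root@hyperv-proxy-cos8 ~]# systemctl stop mysqlrouter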


Now we can add the resource that will manage MySQL Router in our pacemaker cluster. We will use systemd.

You can verify if systemd service for mysqlrouter is available using the following command:

[root@mysql1 ~]# pcs resource list | grep router
service:mysqlrouter - systemd unit file for mysqlrouter
systemd:mysqlrouter - systemd unit file for mysqlrouter


OK, so let's add the resource and let it run on all nodes (as a clone); this way there is no need to start mysqlrouter on another node only when the running one has an issue:

[root@hyperv-primary-cos8 ~]# pcs resource create mysqlrouter systemd:mysqlrouter clone


Let’s verify if the resource is now added:

[root@hyperv-primary-cos8 ~]# crm_mon -1
Cluster Summary:
  * Stack: corosync
  * Current DC: mysql01 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
  * Last updated: Wed Sep 16 19:00:31 2020
  * Last change:  Wed Sep 16 19:00:27 2020 by root via cibadmin on mysql01
  * 3 nodes configured
  * 4 resource instances configured

Node List:
  * Online: [ mysql01 mysql02 mysql03 ]

Active Resources:
  * Router_VIP  (ocf::heartbeat:IPaddr2):       Started mysql01
  * Clone Set: mysqlrouter-clone [mysqlrouter]:
    * Started: [ mysql01 mysql02 mysql03 ]
[root@hyperv-primary-cos8 ~]#



Now we need to tell the cluster that the VIP must run on a node where a router instance is also running:


[root@hyperv-primary-cos8 ~]# pcs constraint colocation add Router_VIP \
>                  with mysqlrouter-clone score=INFINITY
[root@hyperv-primary-cos8 ~]#
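
The constraint can be verified with:

[root@hyperv-primary-cos8 ~]# pcs constraint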


Now we will kill the mysqlrouter process on mysql01 and see what the cluster does:
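
A minimal sketch of this test:

# on mysql01: kill the router process behind the back of systemd/pacemaker
[root@mysql01 ~]# pkill -9 mysqlrouter

# the monitor operation should detect the failure; with migration-threshold=1
# the VIP is expected to move to another node running a healthy router
[root@mysql01 ~]# crm_mon -1
[root@mysql01 ~]# ip a | grep 192.168.0.11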


https://lefred.be/content/mysql-router-ha-with-pacemaker/
https://www.server-world.info/en/note?os=CentOS_8&p=pacemaker&f=1
https://raw.githubusercontent.com/behemius/gcp01/master/node1_install.sh
https://raw.githubusercontent.com/behemius/gcp01/master/router_install.sh


[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "select * from performance_schema.replication_group_members\G" --user=mycluster -p"Cluster^123"
WARNING: Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 3d17fb2b-f143-11ea-9c6b-00155d380107
   MEMBER_HOST: hyperv-primary-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.17
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 6a766bf2-f184-11ea-a7ad-00155d380109
   MEMBER_HOST: hyperv-proxy-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.17
*************************** 3. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: ef479c0a-f183-11ea-b35a-00155d380108
   MEMBER_HOST: hyperv-standby-cos8
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.17
[root@hyperv-primary-cos8 ~]#

[root@hyperv-primary-cos8 ~]# mysqlsh --sql -e "SELECT @@global.read_only, @@global.super_read_only, @@global.group_replication_group_seeds\G" --user=mycluster -p"Cluster^123"
WARNING: Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
                    @@global.read_only: 0
              @@global.super_read_only: 0
@@global.group_replication_group_seeds: 192.168.0.22:33061,192.168.0.24:33061,192.168.0.28:33061,hyperv-proxy-cos8:33061
[root@hyperv-primary-cos8 ~]#
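
Since the VIP follows a healthy router instance, the same membership check can also go through the router. A minimal sketch, assuming the bootstrap kept the default read-write port 6446:

[root@hyperv-primary-cos8 ~]# mysql --host=192.168.0.11 --port=6446 --user=mycluster -p"Cluster^123" \
    -e "SELECT MEMBER_HOST, MEMBER_ROLE FROM performance_schema.replication_group_members;"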




[root@hyperv-primary-cos8 ~]# cat ./innodb-cluster-status.js
try {
   shell.connect('mycluster@hyperv-primary-cos8', "Cluster^123");
   print(dba.getCluster().status());
   print(session.runSql("SHOW DATABASES;"));
} catch(e) {
     print('\nCould not retrieve the InnoDB cluster status.\n\nError: ' +
     e.message + '\n');
}
[root@hyperv-primary-cos8 ~]#
[root@hyperv-primary-cos8 ~]# mysqlsh -f innodb-cluster-status.js
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "hyperv-primary-cos8:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "hyperv-primary-cos8:3306": {
                "address": "hyperv-primary-cos8:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-proxy-cos8:3306": {
                "address": "hyperv-proxy-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            },
            "hyperv-standby-cos8:3306": {
                "address": "hyperv-standby-cos8:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.17"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "hyperv-primary-cos8:3306"
}<SqlResult>[root@hyperv-primary-cos8 ~]#

