Common HowTo




Install and Configure a MySQL Cluster (Pacemaker, Corosync, DRBD, Stonith)

After explaining how to install and configure DRBD and VMware stonith, it is time to start a new project: building a MySQL cluster.

The Concept

The concept of an active/passive failover cluster is the following: two servers (nodes) communicate through a cluster software layer (Heartbeat, Corosync, OpenAIS) and run on top of a DRBD failover storage system. MySQL runs only on the MASTER (active) node; the other one is the PASSIVE node. Clients reach MySQL through a Virtual IP (VIP). When a problem occurs, the cluster fails the resources, including the VIP, over to the passive node. This failover is transparent to the application (only a brief service interruption). This is the infrastructure.

Network and Server Settings

Before starting the Pacemaker and Corosync installation, some prerequisites are necessary. To simplify the configuration, use short names and add both nodes to the /etc/hosts file. To reduce the risk of a network outage, I will configure bonding. This is the configuration for node1; node2 follows the same pattern. Configure the virtual bond interface, then add the two slave interfaces to bond0. Configure the second virtual bond interface and add its two slave interfaces to bond1. Add the bonding aliases. To apply the configuration, reboot the system and check that everything is correct.

DRBD Installation and Configuration

Download the software: to get the latest version, go to the DRBD web page or fetch it with wget. Install DRBD, and do the same on the other node (see the official documentation). Copy the distributed configuration file. To secure and authenticate the communication we need a SHA1 key, so generate one. This is the DRBD configuration file; here is mine. As you can see, I specify these parameters:

RESOURCE: the name of the resource.
PROTOCOL: C, meaning synchronous replication.
NET: the SHA1 shared secret, which must be identical on both nodes.
after-sb-0pri:
When a split brain occurs and no data has changed, the two nodes reconnect normally.
after-sb-1pri: if data has changed, discard the secondary's data and synchronize from the primary.
after-sb-2pri: if the previous option is impossible, disconnect the two nodes; in this case a manual split-brain recovery is required.
rr-conflict: if the previous policies do not apply and DRBD ends up with a role conflict, the system disconnects automatically.
DEVICE: the virtual device, i.e. the path exposed for the physical device.
DISK: the physical device.
META-DISK: the metadata is stored on the same disk (sdc1).
ON <node>: the nodes that form the cluster.

Creating the Resource

Run these commands on both nodes.

Create the partition: create the partition without formatting it.

Create the resource.

Activate the resource: make sure the drbd module is loaded (lsmod); if it is not, load it. Now activate the DISK1 resource.

Only on the master node: we declare node1 as the primary. We will see the disk synchronization in progress, with the state UpToDate/Inconsistent. When it finishes, the state changes to UpToDate/UpToDate.

Format the resource: only on the master node. Mount the resource on node1. Now unmount it and mark node1 as secondary. Then mark node2 as primary and mount the resource there.

MySQL Installation

After installing and mounting the DRBD system, install the MySQL software on both nodes.

Create the MySQL user and group.

Download the latest MySQL server version; in my case, 5.5.24. Now install the database on the PRIMARY node, which has the drbd0 disk mounted on /usr/local/etc2/mysql/data.

Post-Installation

Copy the distributed configuration to /etc. Check that everything is OK and start the MySQL server. Copy the MySQL startup script to /etc/init.d and change the socket path in it. Because the cluster software is responsible for starting the MySQL service, disable it at boot. Configure the MySQL root password.
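The manual switchover test just described can be sketched as follows. This is a sketch assuming the resource is named DISK1, the DRBD device is /dev/drbd0 and the mount point is /usr/local/etc2/mysql/data, as in the text; the commands only make sense on a live DRBD pair:

```shell
# On node1 (current primary): release the device and demote the node.
umount /usr/local/etc2/mysql/data
drbdadm secondary DISK1

# On node2: promote it and mount the replicated filesystem.
drbdadm primary DISK1
mount /dev/drbd0 /usr/local/etc2/mysql/data

# Check roles and disk states; once synchronization has finished you
# should see Primary/Secondary and UpToDate/UpToDate.
cat /proc/drbd
```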
I prefer to keep the configuration file in a local directory, because if the DRBD system fails or I need to tear the cluster down (to update the system, or for maintenance tasks), the configuration file must live on a local filesystem. Create the log files and directory, and activate logrotate (/etc/logrotate.d/mysql).

MySQL Client

The LRM (Local Resource Manager) needs the mysql client binary to monitor whether the MySQL server is running. Install it on both nodes.

Installation of Corosync and Pacemaker

Configure YUM and install the software.

Configure Corosync

Corosync key: on one node, create the Corosync security communication key. Copy it to the other node, keeping the permissions at 400. Now configure Corosync.

Activate the service: start it and check the state of the cluster.

VMware Stonith

As in the previous post, I will explain how to install and configure stonith for virtual machines running under VMware.

Update Perl: it is advisable to update Perl to the latest version; download and compile it, then replace the binaries. We also need cluster-glue to integrate stonith with the cluster. Before running the installation, check that the configuration is correct.

VMware vSphere Perl SDK: this package provides tools to interact with the Virtual Center and its virtual machines; download it from VMware. Extract and install it. If your server needs a proxy to reach the Internet, export it. Then run the installer script.

vCenter credentials: the stonith plugin needs the vCenter credentials to connect to the vCenter and interact with the virtual machines (before this, you will need to create a user with Operate privileges, allowed to reset or shut down both nodes). Copy the resulting file to /etc.

Certificate: normally the Virtual Center does not present a trusted certificate, so the plugin fails to connect. To solve this, edit the plugin and add a condition so it connects without requiring a trusted certificate.

Resource Configuration

Do the CRM configuration on one node only; the other node picks it up automatically. Enter the CRM shell.
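Before defining any resources, it is worth confirming cluster membership and then opening the configuration shell. A minimal sketch, assuming the crmsh tool that the text's crm commands imply:

```shell
# One-shot cluster status: both nodes should be listed as online.
crm_mon -1

# Open the configuration shell on one node only; Pacemaker replicates
# the resulting configuration (CIB) to the peer automatically.
crm configure
```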
Configure the VIP.

Now configure the stonith resources. The HOSTLIST entries have the form HOSTLIST="<CRM node name>=<vCenter VM name>", which means the local node name and the virtual machine's name in vCenter do not have to match; it depends on your infrastructure.

The locations: logically, the stonith resource that kills node1 must run on node2, and vice versa.

Now the filesystem: add DISK1 to the cluster and define the mount point. Define a single master node. Then add the MySQL server resource.

Groups and Colocations

With this group we ensure that DRBD, MySQL and the VIP stay on the same (master) node, and that the start and stop order is correct:

start: fs_mysql mysqld vip1
stop: vip1 mysqld fs_mysql

The group group_mysql always runs on the MASTER node, and MySQL always starts after the DRBD MASTER is promoted.

Now set some properties: change the general timeout, enable stonith and its action, and so on. If everything is correct, commit and check.

OK, now comes the full battery of tests for the MySQL cluster, but that I leave to you. Regards.
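Putting the resource definitions above together, the crm configure session might look like the following sketch. The resource names vip1, fs_mysql, mysqld and group_mysql follow the text; the IP address, device paths, vCenter parameters, agent choices and timeouts are placeholder assumptions you must adapt to your environment:

```shell
# Virtual IP the clients connect to (address is a placeholder).
primitive vip1 ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

# DRBD resource and its master/slave wrapper (one Master at a time).
primitive drbd_mysql ocf:linbit:drbd \
    params drbd_resource=DISK1 op monitor interval=15s
ms ms_drbd_mysql drbd_mysql \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# Filesystem on top of the DRBD device, then the MySQL service itself.
primitive fs_mysql ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/usr/local/etc2/mysql/data fstype=ext4
primitive mysqld lsb:mysql op monitor interval=30s

# Stonith via the external/vcenter plugin; each resource may only kill
# the *other* node, hence the -inf location constraints.
primitive st_node1 stonith:external/vcenter \
    params VI_SERVER=vcenter.example.com VI_CREDSTORE=/etc/vicredentials.xml \
           HOSTLIST="node1=vm_node1" RESETPOWERON=0
primitive st_node2 stonith:external/vcenter \
    params VI_SERVER=vcenter.example.com VI_CREDSTORE=/etc/vicredentials.xml \
           HOSTLIST="node2=vm_node2" RESETPOWERON=0
location loc_st_node1 st_node1 -inf: node1
location loc_st_node2 st_node2 -inf: node2

# Keep filesystem, MySQL and VIP together, in the right start/stop
# order, and only on the node where DRBD is Master.
group group_mysql fs_mysql mysqld vip1
colocation mysql_on_drbd inf: group_mysql ms_drbd_mysql:Master
order mysql_after_drbd inf: ms_drbd_mysql:promote group_mysql:start

# Two-node cluster properties: stonith enabled, quorum ignored.
property stonith-enabled=true stonith-action=reboot no-quorum-policy=ignore

verify
commit
```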

