Introduction
This document describes how to change the IP address of the failover network and public interface of a Prime Cable Provisioning 6.1.5 secondary server in high availability (HA) mode. Perform this procedure only during a maintenance window.
Prerequisites
Requirements
Cisco recommends that you have knowledge of these topics:
- Red Hat Linux networking.
- Linux DRBD file storage replication and the Corosync/Pacemaker cluster concept.
Components Used
The information in this document is based on these software and hardware versions:
Platform: Red Hat Linux 7.4
Software: Prime Cable Provisioning 6.1.5 image.
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, ensure that you understand the potential impact of any command.
Prime Cable Provisioning 6.1.5 Failover Node Network IP Address Modification
1. Steps to change the failover IP
2. Steps to change the public IP
- This procedure changes the failover IP address and the public IP address simultaneously on the secondary node.
- In this example, the failover IP address changes from 10.106.36.225 to 10.106.36.235 and the public IP address changes from 10.106.41.64 to 10.106.41.68 on the secondary node.
- Perform the public IP address change from the server console. If you are connected over SSH to the public IP address, the change drops the network connection and your SSH session.
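Before you begin, it can help to confirm which interface carries each address so that the correct ifcfg file is edited later. The interface names ens192 (public) and ens224 (failover) are the ones used in this example and can differ in your deployment.
# ip addr show ens192 (public interface in this example)
# ip addr show ens224 (failover interface in this example)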
- Stop the cluster.
# pcs cluster stop --all (execute on the secondary server)
(or)
Stop the cluster service on each node individually, in this order:
# pcs cluster stop 10.106.41.64 (stops the cluster on the secondary server)
# pcs cluster stop 10.106.40.64 --force (stops the cluster service on the primary server)
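As an optional check that is not part of the original procedure, confirm that the cluster stack is stopped on each node before any address is changed:
# pcs cluster status (run on each node; it reports that the cluster is not running once the stop completes)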
1. Steps to Change the Failover IP
- Update the DRBD resource configuration.
Note: DRBD block-level synchronization runs over the failover network, so the public IP address does not appear in the DRBD resource files. Since only the secondary failover IP is changing, change only this IP in the DRBD resource files.
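As an optional check with the example addresses of this document, grep for the old secondary failover address on both nodes in order to locate every DRBD resource file that must be edited:
# grep -n "10.106.36.225" /etc/drbd.d/*.res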
- Check the DRBD current status.
# cat /proc/drbd
- In the secondary, disconnect the resources.
# drbdadm disconnect all
or
# drbdadm disconnect r0
# drbdadm disconnect r1
# drbdadm disconnect r2
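Optionally, verify that each resource is disconnected before the interface address is changed; after the disconnect, the connection state is expected to show StandAlone.
# drbdadm cstate r0
# drbdadm cstate r1
# drbdadm cstate r2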
- In the secondary, change the failover interface IP address and restart the network service.
# vi /etc/sysconfig/network-scripts/ifcfg-ens224
# systemctl restart network
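A minimal sketch of the ifcfg-ens224 edit, which assumes a typical static configuration; only the IPADDR line changes from 10.106.36.225 to 10.106.36.235, and the remaining values (PREFIX/NETMASK, and so on) stay as they are in your file.
DEVICE=ens224
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.106.36.235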
- From the primary, ensure that the new failover IP address responds to ping.
# ping 10.106.36.235
- Update the /etc/drbd.d/r0.res, r1.res, and r2.res files with the new secondary failover IP address on both the primary and secondary RDU.
# vi /etc/drbd.d/r0.res
resource r0 {
protocol A;
syncer {
rate 1024M;
}
on pcprduprimary {
device /dev/drbd0;
disk /dev/rdugroup/LVBPRHOME;
address 10.106.36.216:7788;
meta-disk internal;
}
on pcprdusecondary {
device /dev/drbd0;
disk /dev/rdugroup/LVBPRHOME;
address 10.106.36.235:7788;
meta-disk internal;
}
}
- Replace the existing secondary address with the new failover IP address (10.106.36.235, as shown in the sample above) in r1.res and r2.res as well.
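To confirm that all three resource files now carry the new address on both nodes (an optional check with the example address), grep for it:
# grep -n "10.106.36.235" /etc/drbd.d/r0.res /etc/drbd.d/r1.res /etc/drbd.d/r2.res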
- Connect the DRBD resources on the secondary node and check the status.
# drbdadm adjust all
# cat /proc/drbd
version: 8.4.8-1 (api:1/proto:86-101)
GIT-hash: 22b4c802192646e433d3f7399d578ec7fecc6272 build by root@pcp-lnx-82, 2018-01-09 03:29:23
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate A r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
1: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate A r-----
ns:0 nr:0 dw:40 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate A r-----
ns:0 nr:997 dw:3054 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
2. Steps to Change the Secondary Public IP
Update your network settings on the secondary node in order to reflect the desired IP address.
Update the /etc/hosts file in order to include the updated IP address of the secondary node.
Ensure that the nodes can reach and resolve each other: use the ping command from each node to ping all other nodes, both by IP address and by hostname.
- In the secondary, change the public interface IP address and restart the network service.
# vi /etc/sysconfig/network-scripts/ifcfg-ens192
# systemctl restart network
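A minimal sketch of the ifcfg-ens192 edit, which assumes a typical static configuration; only the IPADDR line changes from 10.106.41.64 to 10.106.41.68, and the PREFIX and GATEWAY values shown here are placeholders for your existing values.
DEVICE=ens192
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.106.41.68
PREFIX=<existing-prefix>
GATEWAY=<existing-gateway>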
- From the primary, ensure that the new public IP address and hostname respond to ping.
# ping 10.106.41.68
# ping <hostname>
- On the primary and secondary nodes, update the /etc/hosts file with the new public IP address of the secondary node.
# vi /etc/hosts
10.106.40.64    pcprduprimary.cisco.com      pcprduprimary
10.106.41.68    pcprdusecondary.cisco.com    pcprdusecondary
- Edit the secondary public IP address in /etc/corosync/corosync.conf on both nodes.
- Update the secondary node's ring0_addr (public IP) and ring1_addr (failover IP) to the new addresses in corosync.conf on both nodes. Take a backup of the existing corosync.conf before you edit it and compare the edited file with the backup to ensure that only the intended change has gone in (see the backup-and-compare sketch after the sample configuration below).
# vi /etc/corosync/corosync.conf
# pcs cluster corosync
totem {
    version: 2
    secauth: off
    cluster_name: pcpcluster
    transport: udpu
    rrp_mode: passive
}

nodelist {
    node {
        ring0_addr: 10.106.40.64
        ring1_addr: 10.106.36.216
        nodeid: 1
    }

    node {
        ring0_addr: 10.106.41.68
        ring1_addr: 10.106.36.235
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
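A minimal backup-and-compare sketch for the corosync.conf edit described above (the backup file name is only an example):
# cp /etc/corosync/corosync.conf /etc/corosync/corosync.conf.bak
# vi /etc/corosync/corosync.conf (update ring0_addr and ring1_addr of the secondary node)
# diff /etc/corosync/corosync.conf.bak /etc/corosync/corosync.conf (only the secondary node addresses are expected to differ)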
- Bring the cluster services back up by running these commands on the primary node. Execute the authentication step only if the pcs cluster was set up with node IP addresses instead of node names.
# pcs cluster auth <primarynode-publicip> <secondarynode-publicip> -u hacluster -p <secret_key>
# pcs cluster auth 10.106.40.64 10.106.41.68 -u hacluster -p <secret_key>
10.106.40.64: Authorized
10.106.41.68: Authorized
# pcs cluster start --all
- Check the current ring status of corosync.
# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
id = 10.106.41.68
status = ring 0 active with no faults
RING ID 1
id = 10.106.36.235
status = ring 1 active with no faults
- Check the cluster resource state.
# pcs status
Cluster name: pcpcluster
WARNING: corosync and pacemaker node names do not match (IPs used in setup?)
Stack: corosync
Current DC: pcprdusecondary (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Thu Jan 21 10:41:36 2021
Last change: Thu Jan 21 10:39:07 2021 by root via cibadmin on pcprduprimary
2 nodes configured
11 resources configured
Online: [ pcprduprimary pcprdusecondary ]
Full list of resources:
res_VIPArip (ocf::heartbeat:VIPArip): Started pcprduprimary
Master/Slave Set: ms_drbd_1 [res_drbd_1]
Masters: [ pcprduprimary ]
Slaves: [ pcprdusecondary ]
res_Filesystem_1 (ocf::heartbeat:Filesystem): Started pcprduprimary
Master/Slave Set: ms_drbd_2 [res_drbd_2]
Masters: [ pcprduprimary ]
Slaves: [ pcprdusecondary ]
res_Filesystem_2 (ocf::heartbeat:Filesystem): Started pcprduprimary
Master/Slave Set: ms_drbd_3 [res_drbd_3]
Masters: [ pcprduprimary ]
Slaves: [ pcprdusecondary ]
res_Filesystem_3 (ocf::heartbeat:Filesystem): Started pcprduprimary
res_bprAgent_1 (systemd:bpragent): Started pcprduprimary
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled