The current state of all connected nodes and services can be checked with the "crm_mon" (Cluster Resource Manager Monitor) command. Running it displays the following self-updating screen (press CTRL-C to exit):

Stack: cman
Current DC: NONE
Last updated: Mon Feb 18 11:42:18 2019
Last change: Mon Feb 18 04:13:51 2019 by root via cibadmin on Strawberry_HA1

2 nodes configured
11 resources configured

Online: [ Strawberry_HA1 Strawberry_HA2 ]

Active resources:

Resource Group: complete_sb
     failover_ip         (ocf::heartbeat:IPaddr2):        Started Strawberry_HA1
     drbd_mount          (ocf::heartbeat:Filesystem):     Started Strawberry_HA1
     postgresql          (ocf::heartbeat:pgsql):          Started Strawberry_HA1
     memcached           (lsb:memcached):                 Started Strawberry_HA1
     nginxd              (lsb:nginx):                     Started Strawberry_HA1
     redis-server        (lsb:redis-server):              Started Strawberry_HA1
     strawberry-cable    (lsb:strawberry-cable):          Started Strawberry_HA1
     strawberry-puma     (lsb:strawberry-puma):           Started Strawberry_HA1
     strawberry-sidekiq  (lsb:strawberry-sidekiq):        Started Strawberry_HA1
 Master/Slave Set: masterdrbd [drbd]
     Masters: [ Strawberry_HA1 ]
     Slaves: [ Strawberry_HA2 ]
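
If you only need a one-off snapshot instead of the self-updating view (for example in a script or when copying the output into a support ticket), crm_mon can also be run non-interactively. The options below are standard crm_mon options on most Pacemaker versions, but their exact spelling may differ slightly on your installation:

crm_mon -1       # print the cluster status once and exit
crm_mon -1 -r    # additionally list inactive (stopped) resources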

Top Section (General Information)

The top row shows the cluster stack in use. By default (depending on your particular installation) this should be "cman" and should not change.

The second row displays the current Designated Controller (DC). In a two-node cluster, quorum is not meaningful, so no DC is elected.
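
How the cluster behaves without quorum is controlled by the no-quorum-policy cluster property, which on two-node setups is typically set to "ignore". Assuming the crm shell (crmsh) is installed on your system, the current setting can be checked with:

crm configure show | grep -i no-quorum-policy    # typically "no-quorum-policy=ignore" on two-node clusters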

The third row displays the time and date of the last status update, which should normally be close to the current time and date.

The fourth row displays the date and time of the last change to the cluster's configuration, for example a planned migration or the adding or disabling of services.
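
This timestamp changes whenever the cluster configuration (the CIB) is modified, e.g. via cibadmin as shown in the example output. Assuming the crm shell (crmsh) is installed, the configuration that this timestamp refers to can be inspected read-only with:

crm configure show    # print the current cluster configuration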

The fifth row displays the version info.

The sixth row displays the number of configured nodes (2 nodes by default).

The seventh row displays the number of configured resources (services).
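
If you only want these summary figures, for example for a monitoring check, crm_mon can usually print a condensed one-line status; whether the -s (simple status) option is available depends on the installed Pacemaker version:

crm_mon -s    # condensed one-line summary (nodes online, resources configured)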

Middle Section (Node Status)

The second section shows the availability of the nodes. The possible states are "online" (node is on and available), "standby" (node is on but not available for running resources) and "offline" (node is off or has crashed).
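
A node is usually put into or taken out of "standby" manually, for example before maintenance. Assuming the crm shell (crmsh) is installed, this can be done as follows (replace the node name with your own):

crm node standby Strawberry_HA2    # stop running resources on this node; its state becomes "standby"
crm node online Strawberry_HA2     # make the node available for resources again; its state becomes "online"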

Main Section (Service Status)

The following sections show the state of the individual services. The first service described here is DRBD, a shared storage where the database and configuration files for Strawberry are stored. The current DC should be the master and the other node the slave. Should the standby node be online but not be listed as a slave, a manual re-sync of the DRBD filesystem might be needed. Please refer to the troubleshooting section below.
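
The DRBD replication state can also be checked outside of Pacemaker. On DRBD 8.x installations the kernel exposes it under /proc/drbd; the drbdadm commands below assume the DRBD resource is named "r0", which may be different on your system:

cat /proc/drbd       # overall connection and sync state (e.g. cs:Connected ro:Primary/Secondary)
drbdadm role r0      # show the local/peer role of the DRBD resource
drbdadm cstate r0    # show the connection state (Connected, StandAlone, ...)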

The next resource is the main Strawberry resource group "complete_sb", consisting of:

• failover_ip (floating IP address assigned to the current master machine)
• drbd_mount (the actual mount point for the DRBD block device)
• postgresql (PostgreSQL database service)
• memcached (Distributed memory object caching system)
• nginxd (Web server service)
• redis-server (Redis key-value store service)
• strawberry-cable (Application server service)
• strawberry-puma (Scheduler and tasker service)
• strawberry-sidekiq (Sphinx search service)

All services are started and stopped in sequential order and all run on one node at a time. Should any one of the services fail to start within a defined amount of time, the whole group will be migrated to another running node.
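
A planned migration of the whole group can also be triggered manually, for example before maintenance on the active node. Assuming the crm shell (crmsh) is installed (newer versions use "move"/"clear" instead of "migrate"/"unmigrate"), a minimal sketch looks like this:

crm resource migrate complete_sb Strawberry_HA2    # move the resource group to the other node
crm resource unmigrate complete_sb                 # remove the migration constraint afterwards
crm_mon -1 -f                                      # show resource fail counts, e.g. to see why a group was migrated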

The left side shows you the name of the resource (e.g. strawberry-puma), the middle part specifies which resource agent is defined for this resource (e.g. lsb:strawberry-puma) and the right side shows you on which node it is currently running (e.g. Started Strawberry_HA1).
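
The same information can be queried for a single resource on the command line with the standard Pacemaker tool crm_resource; the resource name below is taken from the example output above:

crm_resource --locate --resource nginxd    # prints the node on which "nginxd" is currently running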

