Scenario:
A 3-node cluster has access to one shared disk with a GFS/GFS2 filesystem.
- node1 – offline
- node2 – offline
- node3 – online
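You can confirm the membership state with the standard RHEL Cluster Suite tools (assuming cman is at least partially running; these commands only report, they change nothing):

```shell
# List cluster nodes and their membership status (requires cman)
cman_tool nodes
# Or get an overall cluster status summary
clustat
```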
Node3 waits for the other nodes to connect. Unfortunately, node1 and node2 are physically damaged.
Problem:
The cluster cannot start and mount the GFS/GFS2 filesystem.
Console shows errors:
can't connect to gfs_controld: Connection refused
can't connect to gfs_controld: Connection refused
gfs_controld not running
error mounting lockproto lock_dlm
gfs_controld join connect error: Connection refused
error mounting lockproto lock_dlm
Resolution:
Convert the GFS/GFS2 cluster to work as a standalone server.
To do this we need to change the locking protocol in the superblock of the GFS/GFS2 filesystem.
Check the current locking protocol:
gfs_tool sb <device> proto
gfs2_tool sb <device> proto
ex.:
gfs_tool sb /dev/sdb proto
gfs2_tool sb /dev/sdb proto
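The check prints the protocol currently stored in the superblock; on a clustered filesystem the output looks roughly like this (exact wording may differ between versions):

```shell
gfs2_tool sb /dev/sdb proto
# illustrative output:
#   current lock protocol name = "lock_dlm"
```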
Change the superblock locking protocol to lock_nolock:
gfs_tool sb <device> proto lock_nolock
gfs2_tool sb <device> proto lock_nolock
ex.:
gfs_tool sb /dev/sdb proto lock_nolock
gfs2_tool sb /dev/sdb proto lock_nolock
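Note that the superblock can only be changed while the filesystem is unmounted on every node, and gfs2_tool asks for confirmation before writing. A minimal session (mountpoint is an example):

```shell
# Make sure the filesystem is not mounted anywhere
umount /mnt/storage
# Change the locking protocol; answer "y" at the confirmation prompt
gfs2_tool sb /dev/sdb proto lock_nolock
```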
Restart the cluster service or the whole server. The disk should now mount successfully.
Now you can run your backup :)
To revert the superblock to the previous locking protocol, lock_dlm, set:
gfs_tool sb <device> proto lock_dlm
gfs2_tool sb <device> proto lock_dlm
ex.:
gfs_tool sb /dev/sdb proto lock_dlm
gfs2_tool sb /dev/sdb proto lock_dlm
Helpful commands:
mount -t gfs2 /dev/sdb /mnt/storage
service cman start
service gfs2 start
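If you prefer not to rewrite the superblock at all, GFS2 also accepts a mount-time override of the locking protocol, which gives you the same one-off standalone mount without any permanent change (device and mountpoint are examples):

```shell
# Override the superblock lock protocol for this mount only
mount -t gfs2 -o lockproto=lock_nolock /dev/sdb /mnt/storage
```

This is handy for a single rescue/backup session, since the superblock still says lock_dlm and the cluster behaves normally once repaired.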