Archive for the ‘virtualization’ Category

Problem:

Recently I noticed that one of the Hyper-V hosts was running out of space. There was only ~10% free space. A couple of VMs were placed outside the clustered storage, on the local RAID.

But what was strange: this VM had ~100 AVHDX files and one main VHDX file, yet there were no checkpoints!

Automerge should have started automatically, but it didn't. So let's fix it on my own.

(screenshot: there are about 100 AVHDX files in the VM folder)

(screenshot: but there are no checkpoints in the Hyper-V MMC)
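
You can confirm the mismatch from PowerShell as well (a minimal sketch; the VM name and folder path below are only example values, replace them with your own):

# no checkpoints reported by Hyper-V for this VM
Get-VMSnapshot -VMName "MYVM"

# ...yet the disk folder is full of differencing disks
Get-ChildItem "D:\Hyper-V\MYVM\Virtual Hard Disks" -Filter *.avhdx | Measure-Object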

 

Resolution:

  1. Automerge: it should start automatically. Delete the checkpoints, then power off the VM.
    In my situation there were no checkpoints, so automerge didn't start.
  2. Merge files manually through Hyper-V: Hyper-V MMC -> Edit Disk -> locate the latest AVHDX file (by modification date) and merge it into its parent. Repeat the procedure until you end up with a single VHDX file.
    In my situation (~100 AVHDX files) it would take too long :) 
  3. Merge files manually – without Hyper-V (a PowerShell alternative is sketched after this list):
    a) download VHDUtils from here
    b) extract the makevhd.exe file to the folder containing the VM's AVHDX files
    c) in cmd or PS run the command:
    makevhd -d   merged-disk.vhd   0   latest-modified-avhdx-file.avhd
    d) wait, there will be no progress bar
    e) when merging has completed, attach the VHDX to the VM.
    You may get a “Cannot change disk since a disk merging is pending” warning – just remove the HDD from the VM, apply the VM settings, and then re-add the newly merged VHDX file to the VM.
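
If you prefer PowerShell over the MMC or VHDUtils, the Hyper-V module's Merge-VHD cmdlet can collapse the whole chain in one call. This is only a sketch with example paths (point -Path at the most recently modified AVHDX and -DestinationPath at the base VHDX); the VM must be powered off first:

# merge the latest AVHDX (and every intermediate one) down into the base VHDX
Merge-VHD -Path "D:\Hyper-V\MYVM\Virtual Hard Disks\MYVM_latest.avhdx" -DestinationPath "D:\Hyper-V\MYVM\Virtual Hard Disks\MYVM.vhdx"

# re-attach the merged disk; removing the old entry first avoids the
# "disk merging is pending" warning mentioned above (controller values are examples)
Remove-VMHardDiskDrive -VMName "MYVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0
Add-VMHardDiskDrive -VMName "MYVM" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -Path "D:\Hyper-V\MYVM\Virtual Hard Disks\MYVM.vhdx"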

That’s it.


Scenario:
A 3-node cluster has access to one disk with a GFS/GFS2 filesystem.

  • node1 – offline
  • node2 – offline
  • node3 – online

Node3 waits for the other nodes to connect. Unfortunately, node1 and node2 are physically damaged.


Problem:
Cluster cannot start and mount GFS/GFS2 filesystem.
Console shows errors:

can't connect to gfs_controld: Connection refused  
can't connect to gfs_controld: Connection refused  
gfs_controld not running  
error mounting lockproto lock_dlm

or

gfs_controld join connect error: Connection refused
error mounting lockproto lock_dlm


Resolution:
Convert the GFS/GFS2 cluster node to work as a standalone server.
To do this, we need to change the superblock locking protocol of the GFS/GFS2 filesystem.


Check the current locking protocol:

 

gfs_tool sb <device> proto
gfs2_tool sb <device> proto

ex.:
gfs_tool sb /dev/sdb proto
gfs2_tool sb /dev/sdb proto


Change superblock locking protocol to lock_nolock:

gfs_tool sb <device> proto lock_nolock
gfs2_tool sb <device> proto lock_nolock

ex.:
gfs_tool sb /dev/sdb proto lock_nolock
gfs2_tool sb /dev/sdb proto lock_nolock

Restart the cluster service or the whole server. The disk should mount successfully.
Now you may start running your backup :)
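
Put together, recovery on the single surviving node can look like this (the device, mount point, and backup target are only example values):

gfs2_tool sb /dev/sdb proto lock_nolock      # use gfs_tool for GFS
mount -t gfs2 /dev/sdb /mnt/storage
rsync -a /mnt/storage/ /backup/storage/      # /backup/storage is a placeholder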


To revert the superblock to the previous setting, lock_dlm, set:

gfs_tool sb <device> proto lock_dlm
gfs2_tool sb <device> proto lock_dlm

ex.:
gfs_tool sb /dev/sdb proto lock_dlm
gfs2_tool sb /dev/sdb proto lock_dlm

 

Helpful commands:

mount -t gfs2 /dev/sdb /mnt/storage
service cman start
service gfs2 start
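
And once the other nodes are repaired, a possible sequence for returning the disk to clustered operation (same example device and mount point as above):

gfs2_tool sb /dev/sdb proto lock_dlm
service cman start
service gfs2 start
mount -t gfs2 /dev/sdb /mnt/storage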