Attempted re-size, reboot caused total loss of work


We just had our first major problem with CLIMB. We attempted to resize an instance, which failed and left the instance in an error state, so we rebooted it to get it working again. When we logged back on to the instance, it was a ‘clean’ instance, as if we had just launched it.

This was also the case for the volume attached to the instance, which was surprising and alarming.

The instance is catanscombe and the volume was Cat.

Is there anything that can be done to recover this? We would really appreciate it.



If anyone needs me, I’ll be backing up

Hi Phil, I’ve replied to your PM - more info as soon as I can get it.

Hi Phil,

A couple of questions. First: did GVL remount the volume automatically onto /mnt? If you have a single volume attached, GVL will automatically remount it to /mnt.

Second, how did you create the filesystem? I ask because there are two ways of doing it (you can run mkfs directly on the volume, or you can run fdisk on the volume and then mkfs on the resulting partition), and I wonder if the issue might be related to that.
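For reference, the two routes can be sketched as below. The device name /dev/vdb is an assumption (confirm yours with `lsblk`), and the destructive commands are left commented out.

```shell
# Assumed device name of the attached volume -- confirm with `lsblk` first.
DEV=/dev/vdb

# Route 1: filesystem directly on the whole volume (no partition table).
#   mkfs.ext4 "$DEV"          # later mounted as /dev/vdb

# Route 2: partition first, then filesystem on the partition.
#   fdisk "$DEV"              # create one primary partition -> /dev/vdb1
#   mkfs.ext4 "${DEV}1"       # later mounted as /dev/vdb1
```

The two routes leave different things on disk (a bare filesystem vs. a partition table plus a filesystem), so tools that look for one layout can miss the other.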

I know the others are looking into this, just wanted to check the above!



Hi Tom,

Thanks for the reply.

The Cat volume is not mounted onto /mnt as far as we can see, although I’m not an expert at such things. There is /mnt/galaxy and /mnt/gvl and a few others, but not our volume.

We followed the instructions on the ‘creating and attaching volumes’ post, so we did fdisk and then mkfs.
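(For anyone in the same situation: the volume's contents can be inspected without risk of writing to it. A sketch, assuming the fdisk+mkfs route produced a partition named /dev/vdb1; confirm the real name with `lsblk`. The inspection commands are commented out.)

```shell
# Assumed partition name from the fdisk + mkfs route -- confirm with `lsblk`.
PART=/dev/vdb1

# Read-only checks that modify nothing on the volume:
#   lsblk -f                          # does the partition and filesystem still show up?
#   file -s "$PART"                   # prints the filesystem signature, if any
#   mount -o ro "$PART" /mnt/check    # mount read-only, then `ls /mnt/check`
```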

Thanks again!


The case is closed successfully; a short summary follows:

  • The instance resize process for large VMs fails from time to time if there are not enough resources on the hypervisor.
  • For that reason, VM resizing by users has been disabled.
  • OpenStack's internal volume-attachment record was erroneously duplicated because the VM was rebooted after the failed resize.
  • No data on the external volume was lost; the operating system was simply unable to find the proper device name of the volume to mount.
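The device-name fragility described in the last point can be sidestepped by mounting by filesystem UUID, which stays stable across reboots and attachment-record changes. A minimal sketch, assuming a hypothetical UUID and mount point (read the real UUID with `blkid` on the instance):

```shell
# Hypothetical UUID and mount point -- read the real UUID with `blkid /dev/vdb1`.
UUID="0a1b2c3d-0000-0000-0000-000000000000"
MOUNTPOINT=/mnt/cat

# One-off mount by UUID (device name no longer matters):
#   mount UUID="$UUID" "$MOUNTPOINT"

# An /etc/fstab entry makes it persistent; `nofail` lets the VM boot
# even if the volume happens to be detached:
printf 'UUID=%s %s ext4 defaults,nofail 0 2\n' "$UUID" "$MOUNTPOINT"
```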

Thanks for all your efforts Maciek, you really saved the day!