Volume won't detach


#1

Hello,

My volume has been detaching from my instance for over 24 hours now. I think it might be stuck. Under ‘status’ it says ‘Detaching’ with a progress bar and under ‘action’ it says ‘Update Metadata’.

I've tried rebooting the instance and shelving and unshelving it today; neither seems to have worked.

Any advice is much appreciated.


#2

Hi Karl - I’m going to need a bit more information about this instance/volume before I can help you. Could you let me know where the instance is hosted (Birmingham, Cardiff, Warwick), and what its name is?


#3

Hi,

I may have the same problem. I've detached a 2 TB volume and it's taking a while. The instance is hosted in Swansea and its name is chmi.

Thanks in advance!

Vicky


#4

Hi Matt,

Thank you for your reply. The instance is hosted in Birmingham. The instance is (imaginatively named) “Karl” and the volume is named “KarlVolume”.

Kind regards,
Karl


#5

Hi Karl - This should now be fixed and the volume detached.


#6

This should be fixed for you Vicky, apologies for the inconvenience.


#7

Matt, you are amazing. Thank you so much. Do you mind saying what happened (just so I don't do it again)?


#8

Hi Matt,

Thanks a lot! I've just attached a new volume of about 9880 GB as /dev/sdb, but it doesn't show up when I log in and use fdisk. Will this just take time?

Thanks again!
Vicky


#9

No worries!

The problem was down to the VM to which the volume was attached existing in a “Shelved/Offloaded” state. This means that it isn’t running or allocated to a hypervisor, but its volumes and network information still exist in our database.

Because attaching or detaching a volume requires communication with the VM, and the VM wasn’t allocated to a hypervisor, this volume failed to detach because the VM couldn’t be contacted to confirm that the volume really wasn’t present any longer.

This VM was shelved/offloaded by us in our security audit before Christmas - you would have been contacted about this. We chose to shelve VMs that failed the audit rather than terminate them completely specifically to help in situations like this - they are no longer susceptible to password attacks because they are shelved, but we can still get to your data in an emergency.


TL;DR - this wasn't your fault, or caused by anything you did. You took the correct approach by asking us to look at it, and we were able to do some things on the backend to get the volume to detach.


#10

Hi Vicky - if the volume is showing as attached in the dashboard, it should be visible from inside your instance.

You can check for your volume using lsblk - it should be visible as a device of the right size, from which you can read the correct device name.
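For example (output will vary per instance; the exact device name is what you're looking for):

```shell
# List block devices with their size, type and mountpoint.
# A freshly attached, unformatted volume appears as a 'disk'
# of the expected size with an empty MOUNTPOINT column.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```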

We don’t often get people using their whole group disk quota in a single volume, so I had a look at the log from your instance. It looks like this instance might need a reboot (sudo systemctl reboot) before you can format and mount this volume.

For a volume this big, I'd recommend forgoing partitioning with fdisk and just creating a filesystem directly on the device. You could also use XFS instead of ext4 to allow for easier filesystem resizing later, using the mkfs.xfs /dev/sdX command instead of mkfs.ext4 /dev/sdX.
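As a sketch, assuming the volume really is /dev/sdb (confirm with lsblk first - mkfs destroys any data already on the device, and /mnt/data is just an example mountpoint):

```shell
# Create an XFS filesystem directly on the whole device (no partition table).
sudo mkfs.xfs /dev/sdb

# Mount it somewhere sensible and check the reported size.
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data
df -h /mnt/data
```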


#11

Hi Matt,
Thank you so much for your help so far.

I just attached my volume to a new instance named “JG”. I think I'm also having the same problem as Vicky.
When I log in and go to /mnt/ I can't see my volume in the instance. I tried a reboot (sudo systemctl reboot) but the volume is still not there.


#12

You can see a brief explanation of the states in which a volume can exist here. Yours is currently attached and not mounted.

Your volume won’t show at /mnt/ unless you’ve partitioned/created a filesystem and mounted it there. You can follow the instructions here to partition, format and mount your new volume.
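If you also want the volume to come back automatically after a reboot, you can add a line to /etc/fstab; the device path, mountpoint and filesystem type below are assumptions (using a UUID= identifier from blkid is more robust than a /dev/sdX name, which can change between boots):

```
# Example /etc/fstab entry - /dev/sdb, /mnt/data and xfs are assumptions.
# 'nofail' lets the instance boot even if the volume is detached.
/dev/sdb  /mnt/data  xfs  defaults,nofail  0  2
```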