Creating and attaching volumes using Horizon


Case summary: For anyone having problems attaching a volume to a running VM - please shut down your VM first and attach the volume to the stopped instance. Then start it again and follow the tutorial above (create a partition and so on).

This is a necessary workaround until a fix is released by libvirt software maintainers.

Apologies for my ignorance. I've been trying to use a volume attached to the PoreCamp2017 instance for the last few days and keep getting this error: sudo: unable to resolve host pore2017. My instance is pore2017.

The problem is that I need the space on the volume for nanopore processing and can't get programs to run within it; outside the mount I can run programs and everything is fine. I googled this and there were suggestions about modifying different files etc., but I'm sure this should work as is. Is this a known problem? Thanks, Garry.

Hi Garry - the “sudo: unable to resolve host” message is a known problem. It doesn’t affect the command that you’re running and is just informational.

You can make it go away by adding the hostname of your instance to the /etc/hosts file. Find the hostname of your instance by running the command

hostname

then paste this into /etc/hosts at the end of the line that starts with “127.0.0.1 localhost”.

In your case, the line will end up looking like this:

127.0.0.1 localhost pore2017

You can edit /etc/hosts using sudo nano /etc/hosts.
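If you'd rather not open an editor, a one-liner works too. This is just a sketch: it builds the hosts entry for your machine, and appending it as a new line (rather than editing the existing one) works just as well for name resolution:

```shell
# Build the hosts entry for this machine. On the instance itself you would
# append it to the hosts file with:  echo "$ENTRY" | sudo tee -a /etc/hosts
ENTRY="127.0.0.1 localhost $(hostname)"
echo "$ENTRY"
```

(tee -a is used because plain shell redirection with > or >> doesn't run under sudo.)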


Your volume problem isn’t one that I’ve come across before, so I’ll need a bit more information to help you troubleshoot it. Could you paste the output of:

  1. lsblk
  2. ls -ld /path/to/volume/mountpoint
  3. echo $PATH

Hi,

I have been trying to add a new volume to my instance but I am having problems following these instructions (I am not a bioinformatician or programmer!).

I have got as far as ‘mkdir example’, but I have no idea what ‘example’ refers to, and when I tried entering something I got the message: cannot create directory ‘Evans2’: No space left on device

Please help!

Ben

Hi Ben

mkdir example

is an example of a Linux command. You can see it has two parts: mkdir and example.

The first part

mkdir

is the command for creating a directory - mk = make, dir = directory.

The second part

example

is an option that we are providing to the mkdir command, specifying the name of the directory we want to create. Generally, we specify directory names without spaces or special characters so that they are interpreted correctly by mkdir. Here, “example” means exactly that: a made-up directory name that you’re free to replace with whatever you want. Remember, though, that you’ll need to use your chosen name in downstream commands as well, or you’ll be referring to a directory that doesn’t exist!
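For example, using the directory name from your error message:

```shell
mkdir Evans2     # create a directory named Evans2
ls -d Evans2     # list it to confirm it now exists (prints: Evans2)
```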

The error message you’re seeing implies that you have run out of space on your instance’s root disk. Linux hates running out of disk space, so you will need to delete a file or two before you can add the new volume. Once you free up a bit of space, the error message should go away, and you’ll be free to continue moving data onto your new volume.
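A couple of commands are handy for checking this (a sketch; run them on your instance):

```shell
df -h /          # how full is the root disk?
du -sh "$HOME"   # total space used under your home directory
```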


You definitely don’t need to be a bioinformatician or programmer to use the Linux command line, but you might benefit from having a go at this great Software Carpentry tutorial to get accustomed to how the command line works.

Hi Matt,

Thanks for this. I managed to get through the rest of the instructions. However, I think I have now deleted most of my data by mistake! I thought that I had created a second volume on which all my data had been duplicated, so I started deleting it off there, only to find it had disappeared from my original volume as well. Is there any way I can undo this?

Best wishes,

Ben

Hi Ben - that’s annoying! Could you please try following the information here, to establish whether you’ve unmounted a volume somewhere?

If you could post the output from df and lsblk, I might be able to give you some more information. But if you’ve formatted a volume with data on it, it’s gone forever, I’m afraid!

Hi Matt,

Output is as follows:

ubuntu@evans:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 8.0K 16G 1% /dev
tmpfs 3.2G 6.8M 3.2G 1% /run
/dev/vda1 119G 34G 80G 30% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 16G 72K 16G 1% /run/shm
none 100M 12K 100M 1% /run/user
cm_processes 16G 0 16G 0% /run/cloudera-scm-agent/process
ubuntu@evans:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 120G 0 disk
└─vda1 253:1 0 120G 0 part /
loop0 7:0 0 100G 0 loop
└─docker-253:1-655369-pool (dm-0) 252:0 0 100G 0 dm
loop1 7:1 0 2G 0 loop
└─docker-253:1-655369-pool (dm-0) 252:0 0 100G 0 dm

After I had run through the initial process, a new directory appeared within ‘Evans’ called ‘Evans2’ (which is what I had named the second directory), and within it seemed to be everything that was in ‘Evans’. I thought it had duplicated everything across. I therefore manually deleted everything from within ‘Evans2’, but when I went back up a level to ‘Evans’, all the directories had been deleted from there as well.

Right - you don’t actually have a new volume attached to your instance.

I can see that you’ve made a new volume, but it doesn’t look like it’s attached to your instance (it would appear as /dev/vdb in the output from lsblk). I’ve quoted the relevant section of the OP tutorial below - if you follow those stages, you can attach your volume to your instance.
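From inside the VM, a quick way to check whether a volume is attached (on our images the new volume usually shows up as vdb, but the name can vary):

```shell
# List whole disks only (no partitions); an attached volume shows up as an
# extra disk, e.g. vdb, alongside the root disk vda.
lsblk -d -o NAME,SIZE,TYPE
```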

Hi Matt,

I’ve tried this several times, and while I don’t get any error messages, and it says it is attaching the volume, it never seems to attach it, and the ‘attached to’ box for the new volume on openstack remains empty.

Given this though, I guess that means I have permanently deleted the data from my other volume?

Best wishes,

Ben

Hi Ben,

Thanks for the additional info - I’ve rebooted your instance and attached your new volume. It should appear in the output of lsblk now, ready for you to partition, make a filesystem on, and mount.

I’m afraid that if you deleted the data in the original directory and the volume wasn’t mounted where you thought it was, you might’ve lost data here. Apologies, but we don’t have any way of recovering user-deleted data.

If something isn’t working in the way you think it should be, please get in touch before you rm -rf anything! We’re here to help with any problems, whether the underlying system isn’t working as it should, or you need some help with the tutorials.

Hi Mattbull,
Please is there any other way to create a filesystem on a volume that does not erase the data on the volume?
Thx
Ebenn

Creating a filesystem on a volume is a one-time operation - if there’s data on the volume, a filesystem has already been created and you shouldn’t need to do it again unless something is seriously wrong.

Please could you explain what you’re trying to achieve? I can point you in the right direction if I have some more information.


Many thanks for your response.
I have attached a new 1TB volume to an instance whose boot volume is completely full. After mounting and working in the new volume, I am still getting the response that there is no space left on the disk, and tab completion was not even working for me. I have tried to delete some files from the boot volume, but the problem persists even when I am working in my new volume. I was hoping to launch a new VM, detach the new volume from the old instance and attach it to the new VM, but I did not want to lose the data in the new volume (I had already generated some data in it). I tried this approach suggested by camillaip

sudo mkfs -t ext4 /dev/sdb
sudo mkdir /efn
sudo mount /dev/sdb /efn

but it still created a filesystem, and I have lost the data on my new volume in trying to attach it to my new VM. I am sure this is my fault for doing something wrong, but I am keen to know how to detach a volume from one instance and attach it to another without losing the data (if it is not a completely new volume).

I hope this explanation suffices please? Grateful for your kind assistance.

Hi Ebenn,

Yes - great explanation, thank you!

Consider your volume as a USB hard-drive, and making a filesystem as formatting that hard-drive.

You can plug your USB hard-drive into, and remove it from, different computers - this is analogous to attaching and detaching volumes from VMs. When you remove your USB hard-drive (or detach your volume!), the data stays on that volume until you plug it in (attach it) to a new computer (or VM). You don’t want to format (make a new filesystem on) the hard-drive when you plug it into a new computer, or you’ll lose your data!

If you have data in a volume that you would like to attach to a new VM, don’t fdisk or mkfs!

The process to change the VM that the volume is attached to is:

  1. Detach the volume from the old VM
  2. Attach the volume to the new VM
  3. In the new VM: sudo mount /dev/sdX /home/ubuntu/mountpoint
  4. Analyse your data in your new VM
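Step 3 might look like this in practice. This is only a sketch: /dev/vdb and the mountpoint path are assumptions, so check lsblk for the real device name on your VM first:

```shell
DEV=/dev/vdb                    # the attached volume - check lsblk for the real name
MNT=/home/ubuntu/mountpoint     # where you want the data to appear
if [ -b "$DEV" ]; then
    sudo mkdir -p "$MNT"
    sudo mount "$DEV" "$MNT"    # no fdisk, no mkfs - the data survive the move
    ls "$MNT"                   # your files should be listed here
else
    echo "$DEV not found - check lsblk and the volume's 'Attached To' field"
fi
```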

Many thanks for the prompt response.

To prevent the volume from detaching each time I reboot my VM, I am using the following after doing mount /dev/sdX mountpoint:

‘grep /dev/sd /proc/mounts’

to see where the volume is mounted. The output I get is

/dev/sda1 / ext4 ro,relatime,data=ordered 0 0
/dev/sda1 /home/linuxbrew ext4 ro,relatime,data=ordered 0 0
/dev/sdb /home/ubuntu/efn xfs rw,relatime,attr2,inode64,noquota 0 0

Then I did ‘sudo nano /etc/fstab’ into which I added the path to the mountpoint. This fstab file however comes with this information in it already:

/home/ubuntu/efn LABEL=cloudimg-rootfs / ext4 defaults 0 0
/mnt/gvl/apps/linuxbrew /home/linuxbrew none bind 0 0

So that when I add the full information to my new volume, the information in my fstab file is now

/home/ubuntu/efn LABEL=cloudimg-rootfs / ext4 defaults 0 0
/mnt/gvl/apps/linuxbrew /home/linuxbrew none bind 0 0
/dev/sdb /home/ubuntu/efn xfs rw,relatime,attr2,inode64,noquota 0 0

However, when I use Ctrl+X to close the file and choose Yes to save, I get an error message: [ Error writing /etc/fstab: Read-only file system ]

Can you please advise what I am missing in mounting my volume permanently so that it does not unmount each time I reboot?

Many thx,
Ebenn

A read-only filesystem suggests a deeper problem than making a filesystem and editing fstab.

Please could you give me the name of the instance that you’re working on and I’ll look into it in more detail.

Thanks!

Here it is, please:
137.44.59.27, name is Ebenn, running at Swansea.

Thx,
Ebenn

Having looked at your post again, it appears that you’ve broken /etc/fstab. It should always have six tab-separated columns, and the first line of yours has seven, which is why your root fs has mounted read-only:

1 2 3 4 5 6 7
/home/ubuntu/efn LABEL=cloudimg-rootfs / ext4 defaults 0 0

This should read:

1 2 3 4 5 6
LABEL=cloudimg-rootfs / ext4 defaults 0 0

If you want to add another volume to /etc/fstab, then it needs to be added like this (column explanations are in italics, don’t add these to /etc/fstab):

1 2 3 4 5 6
Device Mountpoint Filesystem Options Dump Check
LABEL=cloudimg-rootfs / ext4 defaults 0 0
/mnt/gvl/apps/linuxbrew /home/linuxbrew none bind 0 0
/dev/sdb /home/ubuntu/efn xfs rw,relatime,attr2,inode64,noquota 0 0
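As a sanity check before rebooting, you can count the fields on each line: every non-comment line of /etc/fstab should have exactly six. A sketch, run here against a sample string rather than the live file:

```shell
# A sample fstab; point this check at the real file with:
#   awk 'NF && $1 !~ /^#/ && NF != 6' /etc/fstab
FSTAB='LABEL=cloudimg-rootfs / ext4 defaults 0 0
/mnt/gvl/apps/linuxbrew /home/linuxbrew none bind 0 0
/dev/sdb /home/ubuntu/efn xfs rw,relatime,attr2,inode64,noquota 0 0'

# Print any non-comment, non-blank line without exactly six fields.
echo "$FSTAB" | awk 'NF && $1 !~ /^#/ && NF != 6 {print "bad line: " $0}'
```

If the check prints nothing, the file is well-formed.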

Please, please be careful editing /etc/fstab, because if you get it wrong it’s a pain to fix!


The simplest solution to this is to start a new instance and move the volume across to it, as described above.

If you have vitally important data or software on the root disk of this instance, please let me know and I’ll try my best to retrieve it for you.