Unable to remount my volumes after instance stops

Hi,

Yesterday I created some volumes in Horizon for new projects (following https://discourse.climb.ac.uk/t/creating-and-attaching-volumes-using-horizon/72), as I've done before.

Once they were created, I proceeded to format and mount them. It worked perfectly for the first one, as you can see:

~$ sudo mkfs -t ext4 /dev/sdc
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: done
Creating filesystem with 262144000 4k blocks and 65536000 inodes
Filesystem UUID: 095f0716-c302-4671-a78a-8b407ce1c139
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

~$ sudo mkdir P_aeruginosa_database

~$ sudo mount /dev/sdc P_aeruginosa_database/

~$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  120G  0 disk
└─sda1   8:1    0  120G  0 part /
sdb      8:16   0  500G  0 disk /mnt/galaxy/home/smrtanalysis/pacbio_sequences
sdc      8:32   0 1000G  0 disk /home/ubuntu/P_aeruginosa_database
sdd      8:48   0 1000G  0 disk

Then my instance suddenly stopped while I was creating the filesystem on the second volume:

~$ sudo mkfs -t ext4 /dev/sdd
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: done
Creating filesystem with 262144000 4k blocks and 65536000 inodes
Filesystem UUID: bd4d94d1-afd1-49cc-94ff-96a1227be943
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: Write failed: Broken pipe

ssh: connect to host 137.205.69.48 port 22: Operation timed out

After rebooting my instance, SSH access was restored, but now I'm unable to mount the newly created volumes or even remount the previous one at its original mount point.

~$ mount /dev/sdb /mnt/galaxy/home/smrtanalysis/pacbio_sequences/
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
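
If it helps with diagnosis, I can run checks along these lines and paste the output (assuming /dev/sdb is still the right device letter; I gather the names can shuffle between boots):

~$ dmesg | tail
~$ sudo blkid /dev/sdb
~$ sudo file -s /dev/sdb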

Help please,

P.S.
Now I have several tmpfs mounts that weren't there before. Is that normal?

~$ df -h
Filesystem    Size  Used Avail Use% Mounted on
udev           32G     0   32G   0% /dev
tmpfs         6.3G   11M  6.3G   1% /run
/dev/sda1     117G   38G   80G  32% /
tmpfs          32G  104K   32G   1% /dev/shm
tmpfs         5.0M     0  5.0M   0% /run/lock
tmpfs          32G     0   32G   0% /sys/fs/cgroup
tmpfs         6.3G   16K  6.3G   1% /run/user/119
cm_processes   32G     0   32G   0% /run/cloudera-scm-agent/process
tmpfs         6.3G     0  6.3G   0% /run/user/121
tmpfs         6.3G     0  6.3G   0% /run/user/1001
tmpfs         6.3G     0  6.3G   0% /run/user/1000
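
(I can hide those from the listing with df's exclude-type flag, e.g.:

~$ df -h -x tmpfs -x devtmpfs

but I'd still like to know whether they're expected.)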

** PLEASE DON’T GO ANY FURTHER **

If you are creating filesystems on volumes that contain data, you are overwriting that data!

We wrote a warning on the post you linked above, specifically for this situation:

If you have run mkfs on a volume that contains data, I’m sorry but the data on that volume is now irretrievable.
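
A quick way to check whether a device already carries a filesystem before touching it (sdX here is a placeholder for the device in question):

~$ sudo blkid /dev/sdX
~$ lsblk -f

If either command reports a filesystem type or UUID for the device, it has data on it - mount it, don't format it.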


If you have rebooted your instance, all you need to do to access data in your volumes is to MOUNT them:

sudo mount /dev/sdX [mountpoint]
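
For example, using the names from your output above (adjust these if the device letters have shifted after the reboot - lsblk -f shows which device carries which filesystem UUID):

~$ lsblk -f
~$ sudo mount /dev/sdc /home/ubuntu/P_aeruginosa_database
~$ df -h /home/ubuntu/P_aeruginosa_database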

PLEASE DO NOT RUN fdisk OR mkfs ON DISKS THAT CONTAIN DATA.
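
If you also want a volume to come back automatically after a reboot, one option (a sketch - substitute the UUID that blkid reports for your device, and your own mountpoint; the example below reuses the UUID mkfs printed for your first volume) is an /etc/fstab entry such as:

UUID=095f0716-c302-4671-a78a-8b407ce1c139  /home/ubuntu/P_aeruginosa_database  ext4  defaults,nofail  0  2

The nofail option stops the instance from hanging at boot if the volume happens to be detached.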


EDIT:

Disregard the above - I've just realised that you're mounting your old volumes and creating filesystems only on your NEW volumes, and that your instance is crashing during the filesystem creation.

Huge apologies for the scare - this looks like our problem. I’ll investigate further and reply when I have a solution for you.

Hi,

Please log in again and try to remount your old volumes.

Let me know if there are any errors.

Maciej

OMG, I'm so relieved to read those last sentences. I read the tutorial very carefully, trying not to make any mistakes like formatting over my data.

Hi,

Logged in again, but I'm still getting the same error when trying to remount my old volume.

I’m so sorry, I was too quick for my own good there!

As you can see, Maciej (our Warwick sysadmin) is on the case, so no doubt this will be fixed in short order.

The thread has been moved to PM.

Issue solved, many thanks @mattbull and @maciek for your help!

Very best,