Trim_galore, python, pip segmentation faults

Hi all,

My CLIMB instance is running on the servers at Warwick. I had been able to run trim_galore over the past few days, but today it exits with code 139. I tried to reinstall trim_galore and to upgrade cutadapt using pip; both segfaulted, and trim_galore now gives me an illegal division by zero error at line 1229. I also can't check which version of Python I'm using, as that gives a segfault too. I've tried moving files off my root filesystem onto my additional volume (/dev/sdb1 in the output below), but I'm still having trouble. If more information is needed, do let me know. I am, however, in the US and six hours behind. Cheers!

ubuntu@alexc:~$ cutadapt --version
Segmentation fault (core dumped)

ubuntu@alexc:~$ python --version
Segmentation fault (core dumped)

ubuntu@alexc:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G   12K   32G   1% /dev
tmpfs           6.3G  3.3M  6.3G   1% /run
/dev/sda1       119G  102G   12G  90% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none             32G     0   32G   0% /run/shm
none            100M  4.0K  100M   1% /run/user
cm_processes     32G     0   32G   0% /run/cloudera-scm-agent/process
/dev/sdb1       2.0T  439G  1.4T  24% /home/ubuntu/directory
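For reference, exit code 139 is 128 + 11, i.e. the process was killed by signal 11 (SIGSEGV), which matches the "Segmentation fault (core dumped)" messages above. Checking the exit status right after the failing command confirms it (the session below is illustrative):

ubuntu@alexc:~$ python --version
Segmentation fault (core dumped)
ubuntu@alexc:~$ echo $?
139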

Hi Alex,

Are you still seeing these Python errors?

Regards

Maciej

I can’t even log into it at the moment.

Alexandras-MBP:~ alexandra$ ssh ubuntu@137.205.68.168
ssh_exchange_identification: Connection closed by remote host

I have rebooted your VM and I'm now able to connect to it via ssh. Please try logging in again and check whether you see any improvement.

Nevertheless, I'd recommend creating a new VM with a newer GVL (and other software) and reconnecting your extra disk (/dev/sdb1) to that new VM.

Maciej

I still can’t get in. I’m getting the same thing again.

Alexandras-MBP:~ alexandra$ ssh ubuntu@137.205.68.168
ssh_exchange_identification: read: Connection reset by peer
Alexandras-MBP:~ alexandra$ ssh ubuntu@137.205.68.168
ssh_exchange_identification: Connection closed by remote host

Just to check I'm understanding correctly: I need to detach my volume, create a brand-new GVL, and delete the old one?

Yes, that's right. After creating a new GVL, you should attach this extra volume to that newly created VM.

And do not run fdisk or mkfs on any disk with existing data!
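For example, once the volume is attached to the new VM, mounting the existing filesystem without touching its data would look roughly like this (a sketch only: the prompt is a placeholder, the device may not appear as /dev/sdb1 on the new VM, so confirm with lsblk first; the mount point matches your current /home/ubuntu/directory):

ubuntu@newgvl:~$ lsblk                                   # identify the attached data disk
ubuntu@newgvl:~$ sudo blkid /dev/sdb1                    # confirm it already carries a filesystem
ubuntu@newgvl:~$ sudo mkdir -p /home/ubuntu/directory    # recreate the mount point
ubuntu@newgvl:~$ sudo mount /dev/sdb1 /home/ubuntu/directory

Adding a matching entry to /etc/fstab will make the mount persist across reboots.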

Maciej

Thank you. I'll get right on that, though new instance launches are temporarily disabled. :confused: Do I need to choose another region? I'm currently at Warwick.

As the rest of the conversation has moved to PM, I'm closing the issue here.
A new GVL instance was started yesterday with the extra volume attached properly.