Remote Host Identification Has Changed

I am getting an error message when I attempt to log onto my VM using SSH from my MacBook laptop.

It has worked before when logging onto the VM from home, but as of yesterday I get this:


@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.


When I arrive at work, my Windows desktop connects to the VM over SSH without any problem.

I have tried using my laptop on the work network, but my laptop is still refused access!

Can anyone assist?

Hi Daniel - re our offline discussion, it looks like the SSH-related key files got updated on the afternoon of April 3rd. I think the first time you connected from your desktop machine was after that, so the error didn’t occur there, whereas you had previously connected from your laptop while the old keys were still current - which explains why the warning appears on one local machine and not the other.
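For reference, the cached host key lives in each client machine’s own ~/.ssh/known_hosts file, which is why the laptop and the desktop behave differently. You can see what a given machine has cached with something like the following (the bracketed address is a placeholder for the VM’s IP or hostname):

# Show the known_hosts entry this machine has cached for the VM
ssh-keygen -F [ip address]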

It appears that quite a few system-related files got updated at the same time. I’m not sure why software upgrades, if that’s what happened, would end up replacing the host key files, though - does anyone have a view on this?

As for the solution, it should be fine to proceed by accepting the new host key (deleting the old cached one first if you’re using the command line on your laptop, for example with 'ssh-keygen -R …').

P.S. Just to avoid any ambiguity - when I said ‘SSH-related key files got updated on the afternoon of April 3rd’, I mean the ones on the remote VM, not Daniel’s local machine.

Thanks both for reporting and assisting with this…

As you identified, @johnw, the simplest way to stop the warning from appearing and access your instance is indeed to:

ssh-keygen -R [ip address]
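If you want extra reassurance before trusting the new key, here is a rough sketch of an out-of-band check - assuming the instance is reachable on the default SSH port and that you can also run a command on the VM itself (e.g. via the web console):

# Fingerprint of the key the server presents over the network
ssh-keyscan -t ed25519 [ip address] 2>/dev/null | ssh-keygen -lf -

# Fingerprint of the key actually installed on the VM (run this on the VM)
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

# The two fingerprints should match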


It does seem a little strange that the host keys have changed, however - they should be generated once at instance creation and then remain identical for the life of the instance.

You mention that there were lots of “system-related” file changes around the time that the problem appeared. We configure GVLs to use unattended-upgrade because it’s quite easy for users to forget to run:

sudo apt-get update && sudo apt-get upgrade
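If you want to check whether unattended-upgrades actually installed anything (openssh included) around that date, the apt and unattended-upgrades logs on the VM are worth a look - a sketch, assuming the standard Ubuntu log locations:

# Package operations recorded by apt, with a little context
grep -iA3 "openssh" /var/log/apt/history.log

# What unattended-upgrades itself did, and when
less /var/log/unattended-upgrades/unattended-upgrades.log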

However, even if openssh was upgraded, I wouldn’t expect those files to change.


If you would like to PM me the name of the instance and where it’s hosted, I can look in more detail, but you can check the last-modified time of the host keys yourself with:

ls -l /etc/ssh/ssh_host_rsa_key
ls -l /etc/ssh/ssh_host_dsa_key
ls -l /etc/ssh/ssh_host_ecdsa_key
ls -l /etc/ssh/ssh_host_ed25519_key
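If you want the full modification timestamps rather than the abbreviated ones ls prints, something like this should also work on the Ubuntu-based image (a sketch):

# Print full modification time and filename for each host key
stat -c '%y %n' /etc/ssh/ssh_host_*_key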

You can also check the cloud-init.log to see if your host keys were regenerated by cloud-init:

grep "ssh_host" /var/log/cloud-init.log

and see if the time matches up.
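It can also be worth lining those timestamps up against the instance’s reboot history - a sketch, with the caveat that the wtmp records last reads may have been rotated:

# Recent reboot/shutdown records
last -x reboot shutdown | head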

Ideally, you’ll see that they were regenerated by the host for some reason and you aren’t being MITM’d between work and CLIMB.

Thanks Matt, I’ll PM you the info in a moment. I had checked the /etc/ssh/* files but not cloud-init.log - thanks for the pointer, and the timing does indeed match up, i.e.:

2018-04-03 14:16:27,146 - util.py[DEBUG]: Attempting to remove /etc/ssh/ssh_host_ed25519_key


…etc., followed immediately by new key-generation commands.

There are some other earlier mentions of ssh_host in the log. I’ll send you some details of these.
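For anyone following along, the surrounding context for each mention can be pulled out with something along these lines:

# Each ssh_host mention, with a line of context before and a few after
grep -n -B1 -A3 "ssh_host" /var/log/cloud-init.log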

Thanks @johnw and @danielvipond

Final diagnosis from me - not a security problem, just something not quite right during a reboot of this instance.

Looks like you’ve got the cached key sorted, so carry on as you were! Apologies for the inconvenience, and thanks for giving me such a useful problem description and log info.
