How to: Recover an EC2 Instance That Fails to Start

Does your EC2 instance fail to start? Read on to fix it if the cause is a mount issue.

First, check the instance's system log (console output) for errors:
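If you've saved the console output to a file (for example via aws ec2 get-console-output), you can grep it for the usual mount failures. A minimal sketch; the log file and its contents below are fabricated for illustration:

```shell
# Sketch: scan a saved system log for common mount failures.
# This sample log is made up; in real use it would come from the console output.
cat > /tmp/system.log <<'EOF'
[  OK  ] Started Login Service.
[FAILED] Failed to mount /data.
Dependency failed for Local File Systems.
mount: unknown filesystem type '(null)'
EOF

# Look for the usual suspects: failed mounts, fstab dependency errors, fsck.
grep -Ei 'failed to mount|dependency failed|unknown filesystem|fsck' /tmp/system.log
```

If any of these lines show up, the fstab-repair procedure below is probably what you need.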

Seeing volume-mounting errors? That means you need to fix /etc/fstab from another machine. Follow these steps:

  1. Launch a new machine in the same instance family, and in the same Availability Zone, as the one you want to recover (EBS volumes can only be attached to instances in the same AZ).
  2. Before starting the new machine, detach the root volume from the machine you want to recover and attach it to the new machine. If the volume's Name tag (in the AWS Console) contains /dev/sda1, rename it, to make sure AWS won't attach it as a root volume to the new machine as well.
  3. Start the new machine.
  4. If it doesn't start (probably due to the same root-volume conflict), snapshot the old root volume, create a new volume from the snapshot, and attach that one instead. Sometimes volumes get messed up, and the snapshot trick fixes it.
  5. Run lsblk to see the device name of the attached volume (e.g. /dev/nvme1n1).
  6. mkdir /mnt/recovered
  7. mount -o nouuid /dev/nvme1n1p1 /mnt/recovered
    The -o nouuid option (an XFS mount option) works around the error (mount: unknown filesystem type '(null)') caused by the cloned root volume having the same filesystem UUID as the recovery machine's own root volume.
  8. cd /mnt/recovered/etc
  9. vi fstab
  10. Add ,nofail after defaults on the problematic entry. This lets the old instance boot even if the mount fails, giving you the chance to do the right thing: replace the /dev/nvme2n1 device path with its UUID, since device names can change after a restart (to get the UUID, use blkid).
  11. Shut down the new machine. Detach the recovered volume and attach it back to the original machine (make sure to set the device name to /dev/sda1 when prompted so it can boot, otherwise you'll get the "no root volume found" error). Start it. Done. You've saved the day :)
  12. Remember to terminate the recovery machine, or it will keep costing you $$
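Steps 9 and 10 can also be scripted. A minimal sketch against a sample fstab; the device name, mount point, and UUID below are made-up examples, and in a real recovery the UUID must come from blkid and the file would be /mnt/recovered/etc/fstab:

```shell
# Work on a sample fstab; in real recovery this would be /mnt/recovered/etc/fstab.
cat > /tmp/fstab <<'EOF'
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee /     xfs  defaults  0 0
/dev/nvme2n1  /data  xfs  defaults  0 0
EOF

# Step 10 (first half): append ,nofail to the data volume's options so a
# failed mount no longer blocks boot.
sed -i 's|^\(/dev/nvme2n1 .*\)defaults|\1defaults,nofail|' /tmp/fstab

# Step 10 (second half): replace the unstable device path with its filesystem
# UUID. In real use, get it with: blkid /dev/nvme2n1
uuid="11111111-2222-3333-4444-555555555555"   # placeholder value
sed -i "s|^/dev/nvme2n1 |UUID=$uuid |" /tmp/fstab

cat /tmp/fstab
```

Double-check the result with cat before detaching the volume; a typo in fstab is exactly how you got here.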


Regev Golan