Breaking Into the CentOS Cloud Image, and Modern Linux Password Recovery

In this age of cloud computing, installing an image from scratch is rarely necessary, if ever, and is probably only needed when provisioning a hardware machine. Major Linux vendors such as Ubuntu and CentOS offer cloud versions of their images. Using these images with a compatible system means you can get started on a fresh Linux install very quickly, be it on a public cloud like EC2, an OpenStack infrastructure, or even just a basic KVM host.

However, if you want to use one of these images in a non-cloud setup, such as the last scenario above, there are a few extra steps to take. I will be using the CentOS images as an example.

Step 1 – Resetting the Root Password

After the image has been downloaded and imported into KVM, the root password needs to be reset.

This is actually a refresh of an age-old trick for getting into Linux systems. It used to be as easy as adding init=/bin/bash to the end of the kernel parameters in GRUB, but times have changed a bit. That method still works, but needs a couple of additions to get it going. Read on below.

A note – SELinux

SELinux is enabled on the CentOS cloud images. The steps below include disabling it when the root password is reset. Make sure this is done, or you will have a bad time. Note that method #1 also includes enforcing=0 as a boot parameter, so if this step is missed, you have an opportunity to do it in the current boot session before the system is rebooted.

Method #1 – “graceful” through rd.break

This is the Red Hat supported method as per the manual.

rd.break stops the boot process just before the initramfs hands control off to the actual system. There are some situations where this can cause issues, but they are rare, and a cloud image is far from one of them.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, add the rd.break enforcing=0 options at the end. Press Ctrl-X to boot.
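
As a rough sketch, the finished linux16 line might look something like this (the kernel version and filesystem UUID here are placeholders; keep whatever values your system already shows):

```
linux16 /boot/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=0a1b2c3d-1111-2222-3333-444455556666 ro rd.break enforcing=0
```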

This will boot the system into an emergency shell that does not require a password, right before the point where the initramfs would normally have handed control over to the installed system.

When the system is in this rescue state, the root filesystem is mounted read-only on /sysroot. As such, a few extra steps are required to get the system mounted so that the password can be reset properly. Run:

mount -o remount,rw /sysroot
chroot /sysroot
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
exit
mount -o remount,ro /sysroot

This remounts /sysroot read-write and loads it into a chroot shell. You will be prompted for the new password at the passwd root line. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, exit the chroot, re-mount the filesystem read-only to flush any writes, and exit the emergency shell. The system will then either reboot or resume booting.
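
If you prefer to script the SELinux change rather than edit the file in vi, the same edit can be done with sed. This is just a sketch demonstrated on a temporary copy; on the real system, the target is /etc/sysconfig/selinux, run from inside the chroot:

```shell
# Sketch: the SELINUX=disabled edit done non-interactively with sed.
# Demonstrated on a temporary copy; on the real system, target
# /etc/sysconfig/selinux from inside the chroot.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
rm -f "$cfg"
```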

Method #2 – old-school init=/bin/bash

init=/bin/bash still works, funny enough, but there are some options that need to be removed on the CentOS system, as mentioned in method #1.

Reboot the system, and abort the GRUB menu timeout by mashing arrow keys as soon as the boot prompt comes up. Then, select the default option (usually the top, the latest non-rescue kernel option) and press “e” to edit the parameters.

Make sure the only parameters on the linux or linux16 line are:

  • The path to the kernel (should be the first option, probably referencing the /boot directory)
  • The path or ID for the root filesystem (the root= option)
  • The ro option

Then, supply the init=/bin/bash option at the end. Press Ctrl-X to boot.

After the kernel boots, the system drops into a root shell. Unlike method #1, this shell is already inside the system, with / as the root filesystem, so no chroot is necessary. Simply run the following:

mount -o remount,rw /
passwd root
[change password here]
vi /etc/sysconfig/selinux
[set SELINUX=disabled]
mount -o remount,ro /

You will be prompted for the new password at the passwd root command. Also, make sure to edit /etc/sysconfig/selinux and set SELINUX=disabled. After both of these are done, the filesystem should be remounted read-only to ensure that all writes are flushed. From here, simply reboot through a hard reset or ctrl-alt-del.

Last Few Steps

Now that the system can be rebooted and logged into, there are a few final steps:

Remove cloud-init

cloud-init is probably spamming the console right about now. Go ahead and stop and remove it.

systemctl stop cloud-init.service
yum -y remove cloud-init

Enable password authentication

Edit /etc/ssh/sshd_config and change PasswordAuthentication to yes. Make sure you change the line that is not commented out. Then restart SSH:

systemctl restart sshd.service
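
The PasswordAuthentication edit can also be scripted with sed. This sketch runs against a temporary copy for illustration; on the real host, target /etc/ssh/sshd_config and then restart sshd as above:

```shell
# Sketch: flip the active (non-commented) PasswordAuthentication line
# to yes. Demonstrated on a temporary copy; on the real host, target
# /etc/ssh/sshd_config.
cfg=$(mktemp)
printf '#PasswordAuthentication yes\nPasswordAuthentication no\n' > "$cfg"
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' "$cfg"
grep '^PasswordAuthentication' "$cfg"   # prints: PasswordAuthentication yes
rm -f "$cfg"
```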

The cloud image should now be ready for general use.

Honorable Mention – Packer

All of this is not to say that tools like Packer don't have great merit in image creation. In fact, if you want to build a generic image for KVM rather than just grabbing one as described above, Packer has a qemu builder that can do just that. Building your own image also ensures that it lacks cloud-init and other tools that you may not need in your application.


packstack – Cinder and Nagios Options, and Advanced Operation

After getting a grasp on all of the concepts that I have discussed regarding OpenStack, I have decided to give my dev server a re-deploy, namely to set up LVM for proper use with Cinder.

I noticed a few things that should probably be noted when using packstack to configure an OpenStack host.

First off, packstack is not necessarily just for setting up lab environments. Running packstack --help outputs a plethora of options that control how packstack runs. These can be used to limit deployment to just a few components, so that, for example, it only deploys a compute node, or a storage node, and so on. It also supports answer files. With some fine-tuning of the options, packstack can be used to configure entire clouds, or at the very least more sophisticated multi-server labs.
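
As a sketch of the answer-file workflow (the filename here is arbitrary):

```
packstack --gen-answer-file=answers.txt
# edit answers.txt as needed, then:
packstack --answer-file=answers.txt
```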

Another thing to note is that several options set by default may not be desired. For example, my second run at an OpenStack all-in-one ultimately looks like this:

packstack --allinone --provision-all-in-one-ovs-bridge=n --cinder-volumes-create=n --nagios-install=n

This excludes the following:

  • The Open vSwitch bridge setup. This is because I want to re-configure the Open vSwitch bridges again as per this article.
  • Creation of the test Cinder volume group. I already have a volume group created this time around that I want to use with Cinder. It is named cinder-volumes, which is both the default volume group that Cinder looks for and the name of the volume group that packstack would create, backed by a raw file on the filesystem, which is not suitable for production use. If you already have this volume group set up and do not pass this flag, packstack will ultimately fail.
  • Installation of a full Nagios monitoring suite on the host, as I plan to set up monitoring later – and not with Nagios, mind you!
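
For reference, a cinder-volumes volume group like mine can be created ahead of time with standard LVM commands; /dev/sdb here is just a placeholder for whatever block device you have set aside:

```
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```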

Remember that you can check out a brief rundown on how to install and use packstack at the quickstart page on RDO.

CentOS ifcfg Scripts: DEVICE vs NAME

In the previous article, in which I discussed setting up Open vSwitch, I encountered an odd issue when configuring my bridges to come up at startup (especially the br-ex bridge, which also carried a management address). Restarting the network after the initial changes worked, but upon startup, the network did not come up properly. After logging in and restarting the network again, things came up. This process was reproducible 100% of the time.

Upon startup, only the OvS bridges were up. Physical interfaces were not added. Upon initial inspection there did not seem to be anything wrong, so I delved into the process further and spent a morning googling and poring over the network init scripts, trying to figure out exactly what I did wrong, or whether there was a bug in the way OvS auto-configuration is handled on Red Hat-family systems.

The answer? Yes and no.

Ultimately, there was an error in my configuration. In my physical interface config file (i.e., ifcfg-eth0, or its equivalent under the consistent device naming that CentOS and RHEL 7 follow), I had:

NAME="eth0" <-- BAD

The correct file has:

DEVICE="eth0" <-- GOOD

Basically, the issue was the presence of the NAME directive in place of the correct DEVICE directive.

The Cause

I’m pretty sure NAME was there from installation. I performed minimal modification of the physical interface configuration file, and my bridge configuration file had the correct syntax (which I took from a tutorial, save the actual addressing information which I transplanted from the physical interface).

After poring over logs, I found the giveaway:

Feb 9 10:39:52 devhost network: Bringing up interface eth0: Error: either "dev" is duplicate, or "br-ex" is a garbage.
Feb 9 10:39:52 devhost network: cat: /sys/class/net/eth0: Is a directory
Feb 9 10:39:52 devhost network: cat: br-ex/ifindex: No such file or directory
Feb 9 10:39:52 devhost network: /etc/sysconfig/network-scripts/ifup-eth: line 273: 1000 + : syntax error: operand expected (error token is "+ ")
Feb 9 10:39:52 devhost network: ERROR : [/etc/sysconfig/network-scripts/ifup-aliases] Missing config file br-ex.
Feb 9 10:39:52 devhost /etc/sysconfig/network-scripts/ifup-aliases: Missing config file br-ex.
Feb 9 10:39:52 devhost ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-port br-ex "eth0
Feb 9 10:39:52 devhost ovs-vsctl: br-ex"
Feb 9 10:39:53 devhost network: [ OK ]

There was a line break in the ovs-vsctl command logged in syslog.

I ultimately traced this down to the get_device_by_hwaddr() function, in /etc/sysconfig/network-scripts/network-functions:

get_device_by_hwaddr ()
{
    LANG=C ip -o link | awk -F ': ' -vIGNORECASE=1 '!/link\/ieee802\.11/ && /'"$1"'/ { print $2 }'
}

This will return multiple interfaces, separated by line breaks, when more than one interface on the system shares the same MAC address. Using DEVICE instead of NAME avoids this code path entirely, as the init scripts only call this function when DEVICE is not defined.
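
The multi-interface behavior is easy to demonstrate by feeding that same awk filter some canned ip -o link output (the MAC address and abbreviated link flags here are made up for illustration):

```shell
# Simulate `ip -o link` output where eth0 and br-ex share a MAC address,
# then apply the same awk filter that get_device_by_hwaddr() uses.
sample='1: lo: <LOOPBACK,UP> mtu 65536 link/loopback 00:00:00:00:00:00
2: eth0: <BROADCAST,UP> mtu 1500 link/ether 52:54:00:aa:bb:cc
3: br-ex: <BROADCAST,UP> mtu 1500 link/ether 52:54:00:aa:bb:cc'
mac='52:54:00:aa:bb:cc'
echo "$sample" | awk -F ': ' -vIGNORECASE=1 '!/link\/ieee802\.11/ && /'"$mac"'/ { print $2 }'
# prints two lines: eth0 and br-ex
```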

The MAC address was most likely duplicated by a previous OvS setup, and was retained when the bridge was brought back up at startup.

Moral of the story: if you are setting up bridging or any other kind of virtual interface infrastructure after installation, don’t overlook this, to save yourself some pain!

I posted a bug to CentOS about this as well.

An Intro to Open vSwitch

I’ve spent the last few days getting my bearings around Open vSwitch. It’s pretty amazing, and IMO if you are virtualizing under Linux these days, it’s pretty much a must.

So what is Open vSwitch? It's basically the open-source answer to proprietary technologies such as VMware's distributed vSwitch (which you have probably used if you have ever run multiple servers within vSphere). It allows you to build a distributed layer-2 network entirely in software. It also performs very well in a single-server setup, allowing you to build sophisticated switch fabrics behind a single physical interface. It also solves some frustrations you may have encountered with network bridging under KVM, such as having a bridge share the same physical interface as your host's management address.

Check out the Open vSwitch website for some documentation and tutorials.

I will be explaining some basic concepts regarding Open vSwitch here, namely how to set up a bridge, attach an interface to it, and also how to automate the process using ifcfg-* files under CentOS (and by proxy, probably RHEL and Fedora as well). Also, we will discuss how to set up a VM on the bridge using libvirt.

Some Terms

A bridge is a network fabric under Open vSwitch. For the purposes of this tutorial, a bridge represents a broadcast domain within the fabric at large, i.e., a VLAN. Note that this does not have to be the case all the time, as it is possible to have a bridge with ports on different VLANs, just like a physical switch.

A port is a virtual switch port within a bridge. Ports are attached to interfaces, such as physical ones, virtual machine interfaces, or other bridges.

Open vSwitch Basic Bridge

Above is a very simple diagram that depicts the bridge br-ex, with ports connected to a VM’s eth0, and the host machine’s eth0.

Creating a Bridge

Run the following command to create a bridge:

ovs-vsctl add-br br-ex

This command would create a new empty bridge, br-ex. This bridge can then be addressed just like a regular interface on the system, but of course, would not do much at this point in time since we do not have any ports attached to it.

Adding a Port

ovs-vsctl add-port br-ex eth0

This would add eth0 to the bridge br-ex.
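
To verify the results of these commands, ovs-vsctl can show the current layout. This is just a sketch; the output will vary by system:

```
ovs-vsctl show
ovs-vsctl list-ports br-ex
```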

CentOS/RedHat/Fedora Interface Configuration Files

You can also have the OS set up bridges for you upon system startup – this is especially useful if you are binding IP addresses to a specific bridge. Note that any bridges that you create like this will get destroyed/re-created upon restart of the network (i.e., service network restart or systemctl restart network.service).

Change your ifcfg-eth0 file to look something like this:
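
Something like the following, using the standard Open vSwitch ifcfg directives (the interface name and MAC address here are placeholders; sub in your own):

```
DEVICE="eth0"
HWADDR="52:54:00:aa:bb:cc"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"
ONBOOT="yes"
```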


And create a ifcfg-br-ex interface configuration file:
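
Something like this, again using the standard OVSBridge directives (the addresses here are placeholders):

```
DEVICE="br-ex"
TYPE="OVSBridge"
DEVICETYPE="ovs"
BOOTPROTO="static"
IPADDR="192.168.1.10"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
ONBOOT="yes"
```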


Sub in your values for MAC addresses, physical interface names, and IP addresses, obviously.

Another extremely important note: make sure that you use the DEVICE directive instead of the NAME directive. The latter may be left over in your physical interface configuration file from installation, so make a point of changing it. I address the exact reason why in a different article.

Setting Up a Libvirt VM to use a Bridge

Now that you have set up the above, you can add a VM to the bridge with Libvirt. Edit your domain’s (VM’s) XML file and add a block like this for every NIC you want to create:

 <interface type='bridge'>
   <mac address='52:54:00:71:b1:b6'/>
   <source bridge='br-ex'/>
   <virtualport type='openvswitch'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
 </interface>

This was taken directly from the Open vSwitch Libvirt HOWTO.

Make sure of course that you assign a correct PCI ID. You may wish to create the domain first via other means, add the devices you will need, and just elect to not use network at first. Unfortunately, it does not seem that a lot of Libvirt admin tools have specific Open vSwitch support just yet (at least it does not seem that the version of virt-manager that comes with most distributions does, anyway).
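
The domain XML can be edited in place with virsh; mydomain here is a placeholder for your actual domain name:

```
virsh edit mydomain
virsh dumpxml mydomain
```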

Edit Feb 12 2015 – Slight correction to physical interface config file – duplicate device type.