After getting a grasp on all of the concepts that I have discussed regarding OpenStack, I have decided to give my dev server a re-deploy, namely to set up LVM for proper use with Cinder.
I noticed a few things that should probably be noted when using packstack to configure an OpenStack host.
First off, packstack is not necessarily just for setting up lab environments. Running packstack --help outputs a plethora of options for controlling how packstack runs. These can be used to limit a deployment to specific roles, so that, for example, it only deploys a compute node, or a storage node, and so on. It also supports answer files. With some fine-tuning of the options, packstack can be used to configure entire clouds, or at the very least more sophisticated multi-server labs.
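As a quick illustration of the answer-file workflow, generating one and then deploying from it looks something like this (the file name here is just mine):

packstack --gen-answer-file=/root/answers.txt
# edit the CONFIG_* options in /root/answers.txt to taste, then:
packstack --answer-file=/root/answers.txt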
Another thing to note is that several options are set by default that you may not want. For example, my second run at an OpenStack all-in-one ultimately looks like this (I sketch the resulting command line after the list):
Skipping the Open vSwitch bridge setup, since I want to re-configure the Open vSwitch bridges myself as per this article.
Skipping creation of the test Cinder volume group. I already have a volume group created this time around that I want Cinder to use. It is named cinder-volumes, matching the default volume group name that Cinder looks for, and is also the name of the volume group that packstack would create, backed by a raw file on the filesystem, which is not suitable for production use. If you already have this volume group set up and do not disable this option, packstack will ultimately fail.
Skipping the installation of a full Nagios monitoring suite on the host, as I plan to set up monitoring later (and not with Nagios, mind you!).
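Put together, that run ends up looking roughly like the following. The flag names here are a sketch based on the Juno-era packstack options, so verify each one against packstack --help for your release; in particular, the option that skips packstack's Open vSwitch bridge provisioning varies between versions, so I have left it out.

# sketch only; confirm every flag with "packstack --help" before running
packstack --allinone \
  --cinder-volumes-create=n \
  --nagios-install=n
# add whichever option your packstack release exposes for skipping its
# Open vSwitch bridge provisioning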
Remember that you can check out a brief rundown on how to install and use packstack at the quickstart page on RDO.
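For reference, the quickstart boils down to a few commands along these lines. The rdo-release RPM URL changes over time, so I have left it as a placeholder; grab the current one from the quickstart page itself.

# placeholder URL; use the one listed on the RDO quickstart page
sudo yum install -y <rdo-release-rpm-url>
sudo yum install -y openstack-packstack
packstack --allinone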
Note: Some inaccurate information was corrected in this article – see here for the details.
My past articles regarding Open vSwitch have been something of a precursor to this one: to understand how OpenStack networking works, I first needed to understand some of the underlying components.
When I started looking into this last week, I really had no idea where to start. As I dug deeper, I found that this guide was probably the best at explaining the basics of how Neutron works: Neutron in the RHEL/CentOS/Fedora deployment guide.
Neutron Network Example (Courtesy OpenStack Foundation – Apache 2.0 License)
The diagram above was probably one of the tools that helped me out the most. You can see how Neutron works on both the compute and network nodes, and the role that Open vSwitch plays in the deployment at large.
Note that both GRE and VXLAN are supported for tunnels, and in fact packstack will configure your setup with VXLAN. Some features are still being developed for VXLAN, and because I haven't delved into it too much I'm not sure exactly what is still missing (although one seems to be VLAN pruning). I really don't have the experience to say which one is currently the better choice as of Juno.
For now, I am focusing on the basics – what I needed to do to get my dev server set up. This entailed a few things:
Re-configuring my external bridge so that I could run my management interface and the “external” network on the same physical interface – see this previous article
Setting up Neutron to explicitly map the external network to the external bridge
Setting up my external and internal networks
Network Types
There are currently five network types that you can set up in OpenStack.
Local: I equated this to “non-routed”, but it can be used on single server setups for tenant networking. However, it cannot scale past one host.
Flat: An untagged, direct network-to-physical mapping. This was ultimately the best choice for my external network since my requirements are not that complicated at this point in time.
VLAN: This is like Flat with VLAN tagging. This would, of course, allow you to run multiple segregated external networks over a single interface.
GRE/VXLAN: Your tunneling options. Generally used on the integration bridge to pass traffic between nodes. Best used for tenant networks.
For my setup, as I mentioned, I ultimately settled on using a flat network for my external bridge, and I haven't touched the internal network setup (it really is not necessary at this point, seeing as I only have one host).
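Just to show where this ends up: once the plugin configuration described below is in place and the services have been restarted, creating that external flat network boils down to something like this. The network name, subnet name, and addresses are mine, and the physical_network value has to match the flat_networks entry defined further down.

neutron net-create external --router:external True \
  --provider:network_type flat --provider:physical_network external
neutron subnet-create external 192.168.1.0/24 --name external-subnet \
  --disable-dhcp --gateway 192.168.1.1 \
  --allocation-pool start=192.168.1.200,end=192.168.1.250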
Neutron Configuration
Keep in mind that I don't cover the Open vSwitch setup here. If you need that info, see this previous article: An Intro to Open vSwitch.
With that in mind, if you are using a separate interface you can simply add it to the Open vSwitch database without much in the way of extra configuration – just run the following:
ovs-vsctl add-port br-ex eth1
Assuming that eth1 is your extra interface.
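You can confirm the port was added by dumping the bridge layout:

ovs-vsctl show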
On to the Neutron configuration. Generally, this is stored in /etc/neutron/plugin.ini. Note that we are using the ML2 (Modular Layer 2) plugin here, so plugin.ini has to be symlinked to the ML2 configuration file.
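On an RDO/packstack install, that symlink typically ends up looking like this (treat the exact path as an assumption, since it can differ between releases):

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini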
Make sure you define the network types you will allow (these options live under the [ml2] section of the config):
type_drivers = flat,vxlan
Pick a network type for your tenant networks; generally one is fine:
tenant_network_types = vxlan
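If vxlan is your tenant network type, there is also a matching [ml2_type_vxlan] section that defines the VNI range handed out to tenant networks. Packstack populates this for you; the range below is just an example value.

[ml2_type_vxlan]
vni_ranges = 10:100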
Mechanism drivers – using Open vSwitch for now, of course. This will be set up for you by default if you are using packstack.
mechanism_drivers = openvswitch
From here I am going to skip to the changes I needed to make to my packstack-generated setup to get the external bridge working. I left most of the config at the defaults, so if you are using packstack as well, not much needs to change.
The only thing left is to define your external network as a flat network:
[ml2_type_flat]
flat_networks = external
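The "external" label defined here is a physical network name; to tie it to the actual bridge, the Open vSwitch agent configuration needs a bridge mapping along these lines (which file holds it varies between releases, so treat the location as an assumption):

[ovs]
bridge_mappings = external:br-ex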
Restarting Services
Once this is all done, you can save your changes and restart the Nova and Neutron services. Restart the services below based on the node that is being updated.