During the 16.04 development cycle, the nova-lxd developers were focused on getting
nova-lxd ready for production. This involved shaking out the bugs found during the cycle, focusing on quality, and making sure that newly introduced features worked well. However, in the Ubuntu 16.10 development cycle, we decided to switch things up a bit and add features that were available in LXD but still considered “experimental”. We decided to start with live migration.
There are several use cases for live migration in a cloud. Before starting a maintenance window, an operator can live migrate instances over to a new host with little instance downtime for their users. An operator can also use live migration to evenly balance the load of instances across multiple hosts.
Also it makes a cool demo:
The above video demonstrates a single running instance migrating from host A to host B. nova-lxd does this by taking advantage of LXD 2.0.3 running with CRIU on the latest version of Ubuntu 16.04.
To take advantage of this feature in nova-lxd, an operator has to set up their compute hosts as described in the LXD documentation:
lxc config set core.https_address "[::]"
lxc config set core.trust_password some-password
lxc remote add host-a
Live migration will be included in the final release of nova-lxd when Ubuntu 16.10 is released. Users on the LTS release will also be able to take full advantage of nova-lxd via the Ubuntu Cloud Archive once Ubuntu 16.10 is out.
In the previous blog post, I teased the new features that we have in nclxd. The features that we have added are the following:
- Stop/start/reboot/terminate container
- Attach/detach network interface
- Create container snapshot
- Rescue/unrescue instance container
- Pause/unpause/suspend/resume container
- OVS/bridge networking
- Instance migration
- Firewall support
In Ubuntu 16.04 we will be adding more features such as block device support, live-migration support, and others. However, since 16.04 is an LTS release, we will be focusing on scale and making it production ready.
For the past couple of months, we have been working hard on adding new features to nova-compute-lxd (nclxd). The new features that have been added go beyond starting and stopping containers, to make it more useful for day to day use. An example is container migration:
The above video shows a container migrating between two nova-compute nodes. This feature is still very alpha, and it was deployed via the Juju charm that we have. The environment is the following:
- nova-compute-lxd 0.18
- Ubuntu Wily Werewolf (15.10)
- OpenStack Liberty
- LXD 0.20
In the coming days I will be posting more content on how you can get yourself running nclxd with OpenStack.
As part of my work on nova-compute-lxd, we use a combination of httplib, UNIX domain sockets, and JSON to talk to the LXD daemon via its REST API. After talking to various people involved in the LXD project, I have decided to split this part of nova-compute-lxd into its own project called pylxd. Pylxd is a general Python library that you can use to interact with LXD and perform container operations.
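The httplib-plus-UNIX-socket approach can be sketched roughly as follows (a minimal sketch using Python 3's http.client rather than the Python 2 httplib of the era; the socket path shown is LXD's conventional default and may differ on your system):

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a UNIX domain socket instead of TCP."""

    def __init__(self, socket_path):
        # The host name is unused for UNIX sockets but required by HTTPConnection.
        super().__init__("localhost")
        self.path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.path)
        self.sock = sock


def lxd_get(api_path, socket_path="/var/lib/lxd/unix.socket"):
    """GET a JSON document from the LXD daemon over its UNIX socket."""
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", api_path)
    return json.loads(conn.getresponse().read().decode())
```

On a host running LXD, something like `lxd_get("/1.0")` would return the daemon's metadata as a dict; pylxd wraps this plumbing behind a friendlier API.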
Right now, it is about 80% complete; the missing bits are more unit tests and more container operations (snapshots, running commands in the container, etc.). That functionality will be coming in later releases of pylxd. I intend to use pylxd in the next version of nova-compute-lxd for the Liberty release of OpenStack as well, and I am sure there are other use cases that developers and users can find for pylxd.
To use pylxd, you just have to clone the git tree and build it. As an example, suppose you need a way to display the '/etc/hosts' file from your running 'test1' container. With pylxd this is pretty simple to do:
from pylxd import api
c = api.API()
print(c.get_container_file('test1', '/etc/hosts'))
The above snippet displays the '/etc/hosts' file from the 'test1' container. Pretty simple, eh?
The code for pylxd is available on GitHub here. Please give it a twirl and report bugs; I would love to get feedback.
LXD is a lightweight container hypervisor for full system containers, unlike Docker and Rocket, which are for application containers. This means that the container will look and feel like a regular VM but will act like a container. LXD uses the same container technology found in the Linux kernel (cgroups, namespaces, LSM, etc.). The LXD project has a two-week development cycle, with many more features coming in the future. Stéphane Graber’s blog post is a very good introduction to LXD.
I have been focusing on LXD integration with OpenStack. To facilitate this, we have created a plugin called nova-compute-lxd (nclxd). The plugin uses the LXD REST API to provide common container operations to OpenStack. A high-level example of how nclxd creates a container is the following:
- Download the tarball image from glance.
- Untar the tarball image into its own directory.
- Create a rootfs from the tarball that has just been untarred.
- Craft the appropriate JSON needed by the LXD REST API.
- Tell the LXD daemon to create the container by sending a “put” request.
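The JSON-crafting step above might look roughly like this (a sketch; the field names are assumptions based on the early LXD API, and the exact schema depends on the LXD version):

```python
import json


def build_container_request(name, image_alias):
    """Build the JSON body for a container-creation request to the LXD
    REST API. Field names here are illustrative assumptions and may
    differ between LXD versions."""
    return json.dumps({
        "name": name,
        "source": {
            "type": "image",
            "alias": image_alias,
        },
    })
```

The resulting body would then be sent to the daemon's containers endpoint, and nclxd would poll the returned operation until the container exists.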
There are a couple of ways one can install nclxd and use it with OpenStack:
Installing the plugin from source
The source for nclxd is available via GitHub. To install it from source, run:
git clone https://github.com/lxc/nova-compute-lxd
After you have installed the source and configured LXD, you will need to set the compute driver in your /etc/nova/nova.conf, configure OpenStack, and restart the nova-compute service.
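For illustration, a minimal nova.conf stanza might look like the following; the driver path shown is an assumption (it varied between nclxd releases), so check the project README for your version:

```ini
[DEFAULT]
# Hypothetical driver path -- verify against your nclxd release
compute_driver = nclxd.nova.virt.lxd.LXDDriver
```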
Installing from the Ubuntu Archive
Since nclxd is still a technology preview, it is not available via the Ubuntu Cloud Archive. However, it is available for Ubuntu 15.04. To install it, simply run the following command:
sudo apt-get install nova-compute-lxd
Once you have finished installing, you will need to configure LXD, configure your nova user so it can see the LXD daemon, and restart your nova-compute service.
Installing via Juju Charms
Installing nclxd via the nova-compute charm is the easiest way to deploy it. To use nclxd with the charm, simply set the appropriate option in the nova-compute charm. Once you have deployed the nova-compute charm with LXD enabled, the users will be created with the correct permissions, and nova-compute will simply be ready for you to use.
Using LXD with OpenStack
Once you have finished downloading and configuring nclxd, you will need to upload a cloud image that LXD can use to the glance server. To do that, you simply have to do the following:
wget -O vivid-server-cloudimg-amd64-root.tar.gz \
glance image-create --name='lxc' --container-format=bare --disk-format=raw \
After you upload the image to glance, you will be ready to go. If you have any questions, please don’t hesitate to ask on the LXC mailing list or the #lxcontainers IRC channel on freenode, or contact me directly. We always love getting bug reports and feedback!
What is nova-compute-flex?
For the past couple of months I have been working on an OpenStack PoC called nova-compute-flex. Nova-compute-flex allows you to run native LXC containers using the python-lxc bindings to liblxc. It creates small, fast, and reliable LXC containers on OpenStack. The main features of nova-compute-flex are the following:
- Secure by default (unprivileged containers, apparmor, etc)
- LXC 1.0.x
- python-lxc (python2 version)
- Uses btrfs for instance creation.
Nova-compute-flex (n-c-flex) is a new way of running native LXC containers on OpenStack. It is currently designed with Juno in mind, since Juno is the latest release of OpenStack. This tutorial for getting nova-compute-flex up and running assumes that you are using the Ubuntu 14.04 release and running devstack on it.
How does n-c-flex work?
N-c-flex works the same way as the other virt drivers in OpenStack: it will stop and start containers, use neutron for networking, and so on. However, it does not use qcow2 or raw images; it uses an image format that we call “root-tar”.
“Root-tar” images are simply a tarball of the container, similar to the ubuntu-cloud templates in LXC. They are relatively small and contain just enough to get a LXC container running. These images are published by Ubuntu as well, and they can be found here. If you wish to use other distros, you can simply tar up the directories found in a given qcow2 image, or use the templates found in LXC. It’s just that simple.
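As a rough sketch of tarring up a rootfs yourself (the function name and gzip compression are my own choices; a production image would also need numeric ownership and extended attributes preserved so the container boots correctly):

```python
import tarfile


def make_root_tar(rootfs_dir, out_path):
    """Pack an unpacked rootfs directory into a gzipped 'root-tar' image.
    Illustrative sketch only: real images should preserve ownership,
    permissions, and xattrs across the whole tree."""
    with tarfile.open(out_path, "w:gz") as tar:
        # arcname="." makes paths in the archive relative to the rootfs root.
        tar.add(rootfs_dir, arcname=".")
```

The resulting tarball can then be uploaded to glance just like the published Ubuntu root-tar images.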
The way that nova-compute-flex works is the following:
- Download the tar ball from the glance server.
- Create a btrfs snapshot.
- Use lxc-usernsexec to un-tar the tar ball into the snapshot.
- When the instance starts create a copy of the snapshot.
- Create the LXC configuration files.
- Create the network for the container.
- Start the container.
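The key storage step above can be sketched as follows (paths, layout, and the helper name are illustrative assumptions, not the driver's actual directory structure):

```python
def plan_boot_steps(image_id, instance_id, base="/var/lib/nova/instances"):
    """Return the btrfs command n-c-flex would roughly run when an
    instance starts: a copy-on-write clone of the image subvolume
    becomes the instance rootfs. Paths here are hypothetical."""
    image_vol = "{}/_base/{}".format(base, image_id)
    instance_vol = "{}/{}".format(base, instance_id)
    return [
        # CoW snapshot: near-instant regardless of image size.
        ["btrfs", "subvolume", "snapshot", image_vol, instance_vol],
    ]
```

Because the snapshot is copy-on-write, the clone costs almost nothing no matter how large the image is, which is what makes instance creation so fast.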
Creating a new instance takes just seconds, since it is only a copy of the btrfs snapshot that was made when the image was downloaded from the glance server.
When the instance is created, the container is an unprivileged LXC container. This means that nova-compute-flex uses user namespaces, with AppArmor built in (if you are using Ubuntu). The instance behaves like a container, but it looks and feels like a normal OpenStack instance.
Getting Started with n-c-flex
Assuming that you already have btrfs-tools installed but don’t have a free partition, you will need to create the instances directory where your n-c-flex instances are going to live. To do that you simply have to do the following:
dd if=/dev/zero of=<name of your large file> bs=1024k count=2000
sudo mkfs.btrfs <name of your large file>
sudo mount -t btrfs -o loop,user_subvol_rm_allowed <name of your large file> <mount point>
To make the changes permanent, modify your /etc/fstab accordingly.
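To illustrate, the /etc/fstab entry might look like this (using the same placeholders as the commands above):

```
<name of your large file>  <mount point>  btrfs  loop,user_subvol_rm_allowed  0  0
```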
Installing devstack and n-c-flex
In your “/opt” directory, run the following commands:
This will prepare your devstack to install software, like LXC, that has been backported to the Ubuntu Cloud Archive. The reason for the backport is that some of the features needed by nova-compute-flex are not found in the trusty version of LXC.
After running the above commands you will have the following in your localrc:
To make your devstack more useful you should have the following in your localrc as well:
This will allow you to use the stable Juno branches with neutron support. After modifying your localrc, you can proceed with the installation by running the “./stack.sh” script.
Running your first instance
As mentioned before, nova-compute-flex uses a different kind of image compared to regular nova. To upload the image to the glance server, you have to do the following:
glance image-create --name='lxc' --container-format=root-tar --disk-format=root-tar < utopic-server-cloudimg-amd64-root.tar.gz
After uploading the image, you can create instances the regular way, using either the python-novaclient tools or euca2ools.
At the OpenStack Developers Summit last week, Mark Shuttleworth announced LXD (lex-dee). LXD is a container “hypervisor” that is built on top of the LXC project. LXD is meant to be used for system containers, rather than application containers like Docker.
I will be using the knowledge that we have gained from working on nova-compute-flex and applying it to nova-compute-lxd. LXD will have a REST API for interacting with LXD containers, so nova-compute-lxd will use the LXD API to stop/start containers and provide the other functions one expects to find in Nova. More discussion will be going on at the lxc-devel mailing list over the next couple of months.
However, if you want to use nova-compute-flex now, go for it! If you wish to submit patches, the GitHub project can be found at https://github.com/zulcss/nova-compute-flex; the work will be fed back into the nova-compute-lxd project. It also has an issue tracker where you can submit bugs.
If you run into road blocks please let me know, and I will be happy to help.
It has been a while since I posted anything on this blog, but rest assured that I, like the rest of the Ubuntu Server Team, have been very busy trying to get Quantal out the door. So, with respect to OpenStack, what have we been doing?
Folsom Release on Ubuntu 12.10
A couple of days ago, Thierry Carrez, the OpenStack Release Manager, announced that the Folsom release of the OpenStack project was available. This includes projects such as Nova, Swift, Quantum, Glance, Keystone, Horizon, and Cinder. After Ubuntu Beta 2 was released, the Folsom release of OpenStack was packaged for Ubuntu 12.10. The supported projects included in the Quantal release will be the same as in Precise, plus two new upstream projects: Cinder (block storage) and Quantum (SDN). These two projects were created with the intention of simplifying Nova and allowing those with the most expertise to focus on specific areas. Please see the release notes for any gotchas you might hit, and please test and report bugs to the usual places.
Folsom release on Ubuntu 12.04
What I am more excited about is that you can now run Folsom on Ubuntu 12.04. In the past, you would have had to request a backport of a specific piece of software through the Ubuntu Backporters Team, or some volunteer would backport an entire release through a PPA; neither of these is usually formally supported by Canonical. With the Ubuntu Cloud Archive, announced a couple of weeks ago, we can release the latest version of OpenStack for users who want the stability of an LTS release but need a newer version of OpenStack for their cloud. In the future, newer releases and updates of OpenStack will be backported to the Ubuntu Cloud Archive as well.
To maintain stability, the Ubuntu Cloud Archive follows something very much like the Stable Release Process; there are two types of “pockets”: a proposed pocket and an updates pocket. The “proposed” pocket holds backports that are ready for the community to test and report bugs against. The “updates” pocket holds the finished product of that community testing.
If you want to help out with the testing and reporting of bugs on your 12.04 servers, you will want the following in your /etc/apt/sources.list:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/folsom main
If you just want to install the latest version of Openstack on your 12.04 servers, you will want the following in your /etc/apt/sources.list:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main
Please report any bugs that you discover on Launchpad. In the coming months, we will be making it easier to use OpenStack on newer releases of Ubuntu and backporting those changes for 12.04 users as well.
A couple of minutes ago, the OpenStack “Essex” release was announced by Thierry Carrez. This is a great step forward because it offers more stability and features for OpenStack users. As a result of the release, we now have “Essex” final packages for 12.04. This list includes:
Congratulations and thanks to the OpenStack community for releasing such an important piece of cloud infrastructure, and thanks to all the users who provide bug reports, testing, comments, and feedback to the OpenStack community. However, it is not finished for Ubuntu, since we have an important LTS release coming up pretty quickly. So back to work for me!
In some families it is traditional to send a Christmas card with an update on the family and what happened during the year. Since it has been a while since I wrote anything on this blog with regards to OpenStack and Ubuntu, I thought it would be a good idea to give users an update on what is happening in precise. So what has been happening?
The Ubuntu packaging for OpenStack has been in a good state for a while now. We are working through the keystone growing pains, but it should be ready soon. The Ubuntu packages are constantly being tested in our test lab with the help of our QA rig. The continuous integration tests run on every commit to the OpenStack GitHub repos and use the correct Ubuntu packages, so if something changes in OpenStack we can catch it right away and make the appropriate changes to the packages. More information can be found at the Ubuntu Server Blog.
When precise is released, I expect to see the major components (nova/swift/glance/horizon/keystone) in main and supported in Ubuntu. Other projects such as quantum and melange will be in universe; they are being maintained on a “best effort” basis for precise, since they are not a part of the core project. However, I expect to see this change in precise+1, as quantum has been given the go-ahead to come out of incubation.
The Juju charms for OpenStack are looking really good as well, again because we have continuous integration testing going on. We use the Juju charms to deploy a multi-node installation and run QA tests against it. The keystone charms are in the process of being changed to work with the “redux” merge; they should be ready soon. When precise is released, I expect to see the charms matching the major components as well, since that is what we support.
Since precise is an LTS release for the Ubuntu project, the Ubuntu Server Team and other developers within Ubuntu have mostly been focusing on fixing the bugs that we find and contributing the fixes back upstream to the appropriate projects. I expect to see more bug fixes happening between now and when Essex is released.
Besides bug fixes, we have been contributing to the Stable Release Team. The stable release team in the OpenStack project, which includes Mark McLoughlin (from Red Hat), Dave Walker, and myself, is responsible for backporting fixes to the stable releases of projects such as Nova. The rules for backporting fixes to the Diablo stable branch are similar to our SRU rules in Ubuntu, meaning that no major features get in, just bug fixes. This is important to us going forward: since Precise is an LTS release, backporting fixes from Folsom to Essex will be necessary for users who decide to run OpenStack on Precise.
The next OpenStack Design Summit will be happening soon. ODS is the semi-annual gathering of developers, where they discuss ideas and blueprints with their peers. I expect to be there in SFO in April and to see old and new friends. I expect to see more cutting-edge features when Folsom is released that will make OpenStack stand out from other cloud projects, but I also expect to see fixes backported to Essex so that users are not left out in the cold. I totally expect to see Ubuntu leading the way as it has for the past couple of releases.
It has been a while since I posted anything OpenStack related on my blog, so I just want to let people know what is happening with OpenStack on Ubuntu. A sort of development update if you will.
At UDS in October we had a session about the things we wanted to do for OpenStack in Precise; the list of work items that we want to get through can be found in the blueprint. Right now, the archive again has weekly snapshots of Nova and Swift, as in Oneiric, but we have also added weekly snapshots of keystone, horizon, and quantum, available as apt-installable packages. Soon we will be adding Melange to that rotation, since it is now packaged in Ubuntu as well. Please try them out and report any bugs in Launchpad.
On the Juju side, Adam has updated the charms for precise and added a charm for keystone; I also believe he is close to finishing a charm for horizon. This will make installing OpenStack really easy, since all the configuration is done for you in the charms.
Since Precise is an LTS release, we care about the stability of the OpenStack packages, so we are in the middle of figuring out a way of running on-commit tests from the OpenStack git repository. The way this will probably work is that packages will be built from each commit, rebuilt with a new snapshot, and deployed with the charms into the QA Lab. This will give us confidence that the packages and the charms are well tested before they are uploaded to the Ubuntu archive.
Looking back at Oneiric and the Diablo release, there were a lot of bugs that affected the stability and quality of production deployments. With that in mind, we had a session at the OpenStack Developers Summit about creating a stable release team. The team is responsible for accepting backported patches from Essex into Diablo, kind of like what the stable kernel tree maintainers do for the Linux kernel. The team is made up of people from both Fedora and Ubuntu, so it is nice to see some collaboration taking place. I do not have statistics on how many patches were backported from Essex to Diablo, but I did review *a lot* of patches. What this means for Ubuntu is that we have a stable Diablo release for Oneiric available for testing in the proposed repository. To learn how to enable the proposed repository, you can read the instructions on the Ubuntu Wiki.
With that I will be going on Christmas holidays near the end of the week, so I want to wish you a Merry Christmas and a Happy Holidays. I hope to have another development update soon.