LXD Containers and Networking with Ubuntu 16.04 Server

There are cases when you want the functionality of a virtual machine but don't necessarily want the overhead of virtualizing all of the hardware.

What's LXD Anyway?

LXD is actually an addition to LXC. LXC containers, also known as ‘Linux containers’, are basically an easy way to manage chroot-style environments on modern Linux systems. This lets LXD / LXC containers provide a virtual server that only has to virtualize user space, while all of the containers share the underlying kernel of the host system.
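A quick way to see this shared-kernel behaviour for yourself, once you have a container running later in this post, is to compare the kernel version on the host with the one reported inside the container; they will match because there is only one kernel:

uname -r
lxc exec webserver -- uname -r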

End Goal

The end goal of this tutorial is to have a functioning web server in an LXC container which is accessible via a public IP address and can also access the local network across an IPsec site-to-site VPN connection. This setup is fairly typical of a main office location with a connection to offsite servers at a colocation facility.

IP Address Ranges:

  • Main Office: 10.0.0.0/22
  • Colocation Facility: 10.0.16.0/23
    • Physical Servers: 10.0.16.0/24
    • LXD / LXC Containers: 10.0.17.0/24

NOTE I will not cover how to set up the IPsec tunnel in this blog post. I will assume that you already have that configured.

Installation

Installing LXD is simple on Ubuntu 16.04 as it's already installed by default and just needs to be configured.

First things first: to make this simple and avoid having to keep typing sudo, let's just become the root user. Please be careful if you're on a production server!

sudo su

In order to allow your normal user account to manage LXC containers without having to become the root user, we are going to add it to the lxd group. Obviously make sure you substitute your username in place of ‘buffalodatasystems’ in the command below.

usermod --append --groups lxd buffalodatasystems
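Keep in mind that group changes only take effect on a new login session, so log out and back in (or start a new shell) before relying on it. To double check that the membership is in place, again substituting your own username for ‘buffalodatasystems’:

id buffalodatasystems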

LXD works best with ZFS-based storage, so we'll need to install the zfsutils-linux package before we can configure LXD. Feel free to skip this step if you already have ZFS storage set up on your server.

apt-get update
apt-get install zfsutils-linux
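If you want to sanity check that the ZFS tooling is working before handing it to LXD, listing the pools is a quick test; on a fresh install it will simply report that no pools are available:

zpool list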

All that's left to do is initialize the LXD daemon and provide it with some basic configuration:

lxd init

Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

From here, follow the prompts and set up your IPv4 information to fit in with your network. I won't be covering IPv6 in this guide, so feel free to skip it.
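Once the init is finished you can confirm that the bridge came up with the IPv4 subnet you chose. lxdbr0 is the default bridge name, so substitute your own if you changed it during setup:

ip addr show lxdbr0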

OK, now that we have all of that set up, you could launch a container at this point and it would be able to access the internet, and you'd be able to ping it from your LAN. What you won't be able to do, however, is ping anything on your LAN from the container, so let's fix that before we launch any containers. When LXD sets up the network for you it adds a whole bunch of iptables rules, and some of those control how forwarding and NAT work; this isn't obvious because the NAT-related rules live in the nat table and aren't displayed when you issue a plain iptables -L command.
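You can see what LXD added by dumping the nat table explicitly, or by grepping the saved ruleset for the comment LXD attaches to its rules:

iptables -t nat -L POSTROUTING -v -n
iptables-save | grep lxd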

The rule LXD adds that prevents a container from pinging anything on the other side of the IPsec tunnel is this one: -A POSTROUTING -s 10.0.17.0/24 ! -d 10.0.17.0/24 -m comment --comment "managed by lxd-bridge" -c 160 9696 -j MASQUERADE. If you remember, in my example I assigned 10.0.17.0/24 to the LXD network, and this rule sends anything that originates on the 10.0.17.x network and is NOT destined for the 10.0.17.x network out of the public IP address rather than to your LAN like it should.

There is a super easy fix for this: insert a POSTROUTING rule, ahead of the one created by LXD, that accepts all traffic bound for your local networks (in the nat table, ACCEPT simply means the packet leaves the chain without being NATed). In my case the rule to add was iptables -t nat -I POSTROUTING -s 10.0.17.0/24 -d 10.0.0.0/22 -j ACCEPT. After adding that rule we can move on to creating our NGINX container to act as a web server.
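One thing to remember is that rules added from the command line don't survive a reboot on their own. A simple way to persist them on Ubuntu 16.04 is the iptables-persistent package, roughly like this:

apt-get install iptables-persistent
netfilter-persistent save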

Creating a Container

Now for the fun part, creating a container that actually does something! We are going to base this NGINX container on Ubuntu 16.04. First we'll need to get an image for Ubuntu 16.04. We're going to use ubuntu:x here as x is just short for ‘Xenial’, which is the codename for version 16.04. If you'd rather use a different image you can get a listing of everything available to you by running lxc image list images:

lxc launch ubuntu:x webserver

That's it. You now have an Ubuntu instance running in an LXC container. That was easy, wasn't it? The next thing we have to do is get into our instance and install some software. Since there are currently no keys stored in the SSH authorized_keys file, we have to find another way in. Luckily for us, LXC provides a way to get a shell straight from the command line.

lxc exec webserver -- sudo --login --user ubuntu

You'll notice that you're at a standard Ubuntu prompt, so let's install NGINX.

sudo apt-get update
sudo apt-get install nginx
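While you're still inside the container, it's worth a quick check that NGINX is actually up and running before we wire up any networking:

systemctl status nginx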

We're now free to exit out of this container's command line, so just type exit and we can move on to setting up some iptables rules. I'm going to assume that you're running this on a server in a datacenter and that you have multiple public IP addresses at your disposal. If you don't, just change the commands below to do normal port-forwarding NAT instead of 1:1 NAT.

To find out what private IP address our webserver has, run lxc list and note the container's IPv4 address. From here it's just normal 1:1 NATing; I'll be using 1.2.3.4 for the public IP and 10.0.17.100 for the webserver in the example below.
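For reference, the command is just this; the IPV4 column in the output is the address handed out by the LXD bridge:

lxc list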

iptables -t nat -A POSTROUTING -o eth0 -s 10.0.17.100 -j SNAT --to-source 1.2.3.4
iptables -t nat -A PREROUTING -i eth0 -d 1.2.3.4 -j DNAT --to-destination 10.0.17.100
iptables -A FORWARD -s 10.0.17.100 -j ACCEPT
iptables -A FORWARD -d 10.0.17.100 -j ACCEPT

Now if you go to 1.2.3.4 in your web browser you'll see the NGINX welcome page come up, served out of your container.
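If you'd rather test from a terminal, a quick curl against the public address (again, 1.2.3.4 is just the placeholder from the NAT rules above) should return that same welcome page:

curl http://1.2.3.4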