Network changes in Ubuntu 17.10+
This guide has been updated for netplan, introduced in 17.10. Please test the configuration and let me know if you have any issues with it (easiest via tweet, @jasonbayton).
LXD works perfectly fine with a directory-based storage backend, but both speed and reliability are greatly improved when ZFS is used instead. 16.04 LTS saw the first officially supported release of ZFS for Ubuntu and having just set up a fresh LXD host on Elastichosts utilising both ZFS and bridged networking, I figured it’d be a good time to document it.
LXD
LXD is a next generation system container and virtual machine manager. It offers a unified user experience around full Linux systems running inside containers or virtual machines. It's image based, with pre-made images available for a wide number of Linux distributions.
In this article I’ll walk through the installation of LXD, ZFS and Bridge-Utils on Ubuntu 16.04 and configure LXD to use either a physical ZFS partition or loopback device combined with a bridged networking setup allowing for containers to pick up IP addresses via DHCP on the (v)LAN rather than a private subnet.
Before we begin
This walkthrough assumes you already have a Ubuntu 16.04 server host set up and ready to work with. If you do not, please download and install it now.
You’ll also need a spare disk, partition or adequate space on-disk to support a loopback file for your ZFS filesystem.
Finally, this guide relies on the command line, so some familiarity with the CLI would be advantageous, though the objective is to make this as much of a copy & paste article as possible.
Part 1: Installation
To get started, let’s install our packages. They can all be installed with one command as follows:
sudo apt-get install lxd zfsutils-linux bridge-utils
However for this I will output the commands and the result for each package individually:
sudo apt-get install lxd
sudo apt-get install zfsutils-linux
sudo apt-get install bridge-utils
You'll notice I've installed LXD, ZFS and bridge-utils. LXD should already be installed by default on any 16.04 host, however should there be any updates this will bring them down before we begin.
ZFS and bridge-utils are not installed by default; ZFS is needed to run our storage backend and bridge-utils is required in order for our bridged interface to work.
Part 2: Configuration
With the relevant packages installed, we can now move on to configuration. We'll start by configuring the bridge, as until this is complete we won't be able to obtain DHCP addresses for containers within LXD.
Setting up the bridge
Legacy ifupdown
We'll begin by opening /etc/network/interfaces in a text editor. I like vim:
sudo vim /etc/network/interfaces
This is the default interfaces file. What we'll do here is add a new bridge named br0. The simplest edit to make to this file is as follows:
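A minimal sketch, assuming the physical NIC is eth0 and you want br0 to pick up an address via DHCP (adjust the interface name to match your system):
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0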
This will set the eth0 interface to manual and create a new bridge that piggybacks directly off it.
If you wish to create a static interface while you’re editing this file, the following may help you:
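A sketch of a static configuration, with example addresses you'll need to change for your own network:
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0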
Following any edits, it’s a good idea to restart the interfaces to force the changes to take place. Obviously if you’re connected via SSH this will disconnect your session. You’ll need to have physical access to the machine/VM.
sudo ifdown eth0 && sudo ifup eth0 && sudo ifup br0
Modern netplan
We'll begin by opening /etc/netplan/01-netcfg.yaml in a text editor. I like vim:
sudo vim /etc/netplan/01-netcfg.yaml
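A sketch of a static bridge configuration for netplan; the interface name eth0 and the addresses are examples and must be adjusted for your environment:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: no
      addresses: [192.168.1.100/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]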
The bridge-related lines above are what's added or modified for a static IP bridge. Edit them to suit your environment and then run the following to apply the changes:
sudo netplan apply
Running ifconfig on the CLI will now confirm the changes have been applied:
Configuring LXD & ZFS
With the bridge up and running we can now begin to configure LXD. Before we start setting up containers, LXD requires us to run sudo lxd init to configure the package. As part of this, we'll be selecting our newly created bridge for network connectivity and configuring ZFS, as LXD will take care of both during setup.
For this guide I’ll be using a dedicated hard drive for the ZFS storage backend, though the same procedure can be used for a dedicated partition if you don’t have a spare drive handy. For those wishing to use a loopback file for testing, the procedure is slightly different and will be addressed below.
Find the disk/partition to be used
First we'll run sudo fdisk -l to list the available disks & partitions on the server; here's a relevant snippet of the output I get:
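(Illustrative example; your devices and sizes will differ.)
Disk /dev/sdb: 500 GiB, 536870912000 bytes, 1048576000 sectors
Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 1048575999 1048573952  500G Linux filesystem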
Make a note of the partition or drive to be used. In this example we'll use partition sdb1 on disk /dev/sdb.
Be aware
If your disk/partition is currently formatted and mounted on the system, it will need to be unmounted with sudo umount /path/to/mountpoint before continuing, or LXD will error during configuration.
Additionally, if there's an fstab entry it will need to be removed before continuing, otherwise you'll see mount errors when you next reboot.
Configure LXD
Changes to bridge configuration
As of LXD 2.5 there have been a few changes. If installing a version of LXD below 2.5, please continue below; for 2.5 and above, in order to use the pre-configured bridge, select No for Do you want to configure the LXD bridge (yes/no)? and then see Configure LXD bridge (2.5+) below for details of adding the bridge manually afterwards.
Check the version of LXD by running sudo lxc info.
Start the configuration of LXD by running sudo lxd init
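The prompts and the answers used in this guide look roughly like this (the exact wording varies slightly between LXD releases):
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sdb1
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes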
Let’s break the above options down:
Name of the storage backend to use (dir or zfs): zfs
Here we're defining ZFS as our storage backend of choice. The other option, dir, is a flat-file storage option that places all containers on the host filesystem under /var/lib/lxd/containers/ (though the ZFS pool is transparently mounted under the same path and so is accessed just as easily). The dir backend doesn't benefit from features such as compression and copy-on-write however, so the performance of containers using it simply won't be as good.
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Here we’re creating a brand new ZFS pool for LXD and giving it the name of “lxd”. We could also choose to use an existing pool if one were to exist, though as we left ZFS unconfigured it does not apply here.
Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sdb1
Here we’re opting to use a physical partition rather than a loopback device, then providing the physical location of said partition.
Would you like LXD to be available over the network (yes/no)? no
It’s possible to connect to LXD from other LXD servers or via the API from a browser (see https://linuxcontainers.org/lxd/try-it/ for an example of this).
As this is a simple installation we won’t be utilising this functionality and it is as such left unconfigured. Should we wish to enable it at a later date, we can run:
lxc config set core.https_address [::]
lxc config set core.trust_password some-secret-string
Where some-secret-string is a secure password that'll be required by other LXD servers wishing to connect in order to administer the LXD host or retrieve non-public published images.
Do you want to configure the LXD bridge (yes/no)? yes
Here we tell LXD to use our already-preconfigured bridge. This opens a new workflow as follows:
We don’t want LXD to create a new bridge for us, so we’ll select no here.
LXD now knows we may have our own bridge already set up, so we’ll select yes in order to declare it.
Finally we’ll input the bridge name and select OK. LXD will now use this bridge.
And with that, LXD will finish configuration and ready itself for use.
Configure LXD bridge (2.5+)
In version 2.5, the bridge configuration workflow above has been retired in favour of the new lxc network command.
With lxd init complete above, add the br0 interface to the default profile with:
lxc network attach-profile br0 default eth0
If by accident the lxdbr0 interface was configured, it must first be detached from the default profile with:
lxc network detach-profile lxdbr0 default eth0
It'll be obvious if this needs to be done, as running lxc network attach-profile br0 default eth0 will fail with the error: device already exists.
With that complete, LXD will now successfully use the pre-configured bridge.
Configuring LXD with a ZFS loopback device
Run sudo lxd init as above, but use the following options instead.
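The answers differ only where the block device questions used to be; the exchange looks something like this (the exact wording of the loop device size prompt is approximate):
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 20
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes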
The size in GB of the ZFS loop device is important; we don't want to run out of space any time soon. Although the ZFS pool can be resized later, it's better to be a little generous now and not have to worry about reconfiguring it.
Increasing file and inode limits
Since it's entirely possible we may wish to run multiple LXD containers in the future, it's a good idea to increase the open file and inode limits now; this will prevent the dreaded "too many open files" errors which commonly occur with container solutions.
For the inode limits, open the sysctl.conf file as follows:
sudo vim /etc/sysctl.conf
Now add the following lines, as recommended by the LXD project:
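The values below follow the LXD project's production-setup guidance of the time; check the current LXD documentation for up-to-date recommendations:
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576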
After saving the file we’ll need to reboot, but not yet as we’ll also configure the open file limits.
Open the limits.conf file as follows:
sudo vim /etc/security/limits.conf
Now add the following lines. 100K should be enough:
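For example, raising the open file limit to 100K for all users and for root:
*    soft nofile 100000
*    hard nofile 100000
root soft nofile 100000
root hard nofile 100000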
Once the server is rebooted (this is important!) the new limits will apply and we’ll have future-proofed the server for now.
sudo reboot
Part 3: Test
With our bridge set up, our ZFS storage backend created and LXD fully configured, it’s time to test everything is working as it should be.
We’ll first get a quick overview of our ZFS storage pool using sudo zpool list lxd
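The output will look something like this (sizes will reflect your disk):
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd    496G   114K   496G         -     0%     0%  1.00x  ONLINE  -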
With ZFS looking fine, we'll run a simple lxc info to generate our client certificate and verify the configuration we've chosen for LXD:
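Among the output, the server configuration section should show our pool; something like:
config:
  storage.zfs_pool_name: lxd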
It would appear the storage backend is correctly using our ZFS pool: “lxd”. If we now take a look at the default profile using:
lxc profile show default
We should see LXD using br0 as the default container eth0 interface:
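A sketch of the expected LXD 2.x output:
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic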
Success! The only thing left to do now is launch a container.
We can use the official Ubuntu image repo and spin up a Xenial container with the alias xen1 using the command:
lxc launch ubuntu:xenial xen1
Which should return an output like this:
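Creating xen1
Starting xen1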
Now, we can use lxc list to get an overview of all containers, including their IP addresses:
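With an example address, the table looks something like this:
+------+---------+----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| xen1 | RUNNING | 192.168.1.150 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+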
We can see the xen1 container has picked up an IP from our DHCP server on the LAN, which is exactly what we want.
Finally, we can use lxc exec xen1 bash to gain CLI access to the container we've just launched:
Conclusion
While a little long-winded, setting up LXD with a ZFS storage backend and utilising a bridged interface for connecting containers directly to the LAN isn’t overly difficult, and it’s only gotten easier as LXD has matured to version 2.0.
Are you brand new to LXD? I thoroughly recommend you take a look at LXD developer Stéphane Graber’s incredible LXD blog series to get up to speed.
Updated instructions for LXD 4.5 (September 2020)
LXD 4.5 added features that make proxy devices more secure, in the sense that if something goes wrong in a proxy device, your system is safer; specifically, proxy devices are now under AppArmor confinement. In doing so, however, something broke and it was no longer possible to start GUI/X11 LXD containers. Among the possible workarounds, the cleanest solution is to move forward to LXD 4.6, which has a fix for the AppArmor confinement issue.
Run snap info lxd to verify which channel you are tracking. In my case, I am tracking the latest/stable channel, which currently has LXD 4.5 (where proxy devices do not work). However, the latest/candidate channel has LXD 4.6, and Stéphane Graber has said it has the fix for proxy devices. Be aware that we are switching to this channel only for now, until LXD 4.6 is released as a stable version next week. That is, if you do the following, make a note to come back here (around next Thursday) so that you can switch back from the candidate channel to a stable channel (latest/stable or 4.6/stable).
Now, we refresh the LXD snap package to the latest/candidate channel:
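sudo snap refresh lxd --channel=latest/candidate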
And that’s it. Oh no, it’s updating again.
NOTE: If you have set up the latest/candidate channel, you should now switch back to the latest/stable channel; LXD 4.6 has now been released into the stable channel. Use the following command:
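sudo snap refresh lxd --channel=latest/stable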
The post continues…
With LXD you can run system containers, which are similar to virtual machines. Normally, you would use a system container to run network services, but you can also run X11 applications. See the following discussion and come back here. In this post, we further refine and simplify the instructions for the second way to run X applications. Previously I have written several tutorials on this.
LXD GUI profile
Here is the updated LXD profile to set up a LXD container to run X11 applications on the host's X server. Copy the following text and put it in a file, x11.profile. Note that the X11 socket name in the profile (i.e. X1) should be adapted for your case; the number is derived from the environment variable $DISPLAY on the host. If the value is :1, use X1 (as it already is below). If the value is :0, change the profile to X0 instead.
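A sketch of such a profile follows, reconstructed from the explanation further down. It assumes the container's default user is ubuntu (uid/gid 1000), that the host desktop user also has uid 1000, and that the host's $DISPLAY is :1; the device names PASocket1, X0 and mygpu are illustrative.
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio-utils
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu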
Then, create the profile with the following commands. This creates a profile called x11.
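lxc profile create x11
cat x11.profile | lxc profile edit x11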
To create a container, run the following.
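For example, assuming an Ubuntu 20.04 image and a container named mycontainer (both are placeholders you can change):
lxc launch ubuntu:20.04 --profile default --profile x11 mycontainer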
To get a shell in the container, run the following.
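Again using the example name mycontainer, logging in as the container's default ubuntu user:
lxc exec mycontainer -- sudo --user ubuntu --login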
Once we get a shell inside the container, we can run some diagnostic commands.
You can run xclock, which is an Xlib application. If it runs, it means that unaccelerated (standard X11) applications are able to run successfully.
You can run glxgears, which requires OpenGL. If it runs, it means that you can run GPU accelerated software.
You can run paplay to play audio files. This is the PulseAudio audio player.
If you want to test with ALSA, install alsa-utils and use aplay to play audio files.
Explanation
We dissect the LXD profile in pieces.
We set two environment variables in the container: $DISPLAY for X and PULSE_SERVER for PulseAudio. Irrespective of the DISPLAY on the host, the DISPLAY in the container is always mapped to :0. While the PulseAudio Unix socket is often located under /var, in this case we put it into the home directory of the non-root account of the container. This makes PulseAudio accessible to snap packages in the container, as long as they support the home interface.
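These are the corresponding lines from the sketch profile above:
environment.DISPLAY: :0
environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native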
This enables the NVidia runtime with all the capabilities, if such a GPU is available. The value all for the capabilities means that it enables all of compute, display, graphics, utility and video. If you would rather restrict the capabilities, graphics is for running OpenGL applications and compute is for CUDA applications. If you do not have an NVidia GPU, these directives will silently fail.
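These are the corresponding lines from the sketch profile above:
nvidia.driver.capabilities: all
nvidia.runtime: "true"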
Here we use cloud-init to get the container to perform the following tasks the first time it starts. The sed command disables shm support in PulseAudio, which means that it enables the Unix socket support. Additionally, the three listed packages are installed, providing utilities to test X11 applications, X11 OpenGL applications and audio applications.
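This is the cloud-init section from the sketch profile above:
user.user-data: |
  #cloud-config
  runcmd:
    - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
  packages:
    - x11-apps
    - mesa-utils
    - pulseaudio-utils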
This device shares the Unix socket of the PulseAudio server on the host with the container. In the container it appears as /home/ubuntu/pulse-native. The security configuration refers to the host; the uid, gid and mode refer to the Unix socket in the container. This is an LXD proxy device that binds into the container, meaning it makes the host's Unix socket appear in the container.
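This is the PulseAudio proxy device from the sketch profile above (the uid/gid of 1000 is an assumption matching the default users):
PASocket1:
  bind: container
  connect: unix:/run/user/1000/pulse/native
  listen: unix:/home/ubuntu/pulse-native
  security.gid: "1000"
  security.uid: "1000"
  uid: "1000"
  gid: "1000"
  mode: "0777"
  type: proxy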
This part shares the Unix socket of the X server on the host with the container. If $DISPLAY on your host is also :1, keep the default shown (X1); otherwise, adjust the number accordingly. The @ character means that we are using abstract Unix sockets, which means there is no actual file on the filesystem. Although /tmp/.X11-unix/X0 looks like an absolute path, it is just a name; we could have used myx11socket instead, for example. We use an abstract Unix socket so that it is also accessible by snap packages. We would have used an abstract Unix socket for PulseAudio as well, but PulseAudio does not support them. The security uid and gid refer to the host.
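This is the X server proxy device from the sketch profile above:
X0:
  bind: container
  connect: unix:@/tmp/.X11-unix/X1
  listen: unix:@/tmp/.X11-unix/X0
  security.gid: "1000"
  security.uid: "1000"
  type: proxy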
We make available the host’s GPU to the container. We do not need to specify explicitly which GPU we are using if we only have a single GPU.
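This is the GPU device from the sketch profile above (the name mygpu is arbitrary):
mygpu:
  type: gpu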
Installing software
You can install any graphical software. For example,
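For example, from a shell inside the example container (Firefox is just an arbitrary choice of graphical application):
sudo apt update
sudo apt install -y firefox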
Then, run as usual.
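firefox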
Creating a shortcut for an X11 application in a LXD container
When we are running X11 applications from inside a LXD container, it is handy to have a .desktop file on the host and use it to launch the X11 application in the container. We do that below, performing the following steps on the host.
First, select an icon for the X11 application and place it into a sensible directory. In our case, we place it into ~/.local/share/icons/. You can find an appropriate icon among the resource files of the X11 application installed in the container. If we assume that the container is called steam and the appropriate icon is /home/ubuntu/.local/share/Steam/tenfoot/resource/images/steam_home.png, then we can copy this icon to the ~/.local/share/icons/ folder on the host with the following command.
Then, paste the following into a text editor and save it as a file with the .desktop extension. For this example, we use steam.desktop. Fill in the Name, Comment, Exec command line and Icon appropriately.
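A sketch, assuming the example container steam, the icon path from above and the login wrapper used earlier; the Exec line is one possible approach and may need adapting to how the application is actually launched, and YOUR-USERNAME must be replaced with your host username:
[Desktop Entry]
Name=Steam (in LXD)
Comment=Steam running in the steam LXD container
Exec=lxc exec steam -- sudo --user ubuntu --login steam
Icon=/home/YOUR-USERNAME/.local/share/icons/steam_home.png
Type=Application
Categories=Game;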
Finally, move the desktop file into the ~/.local/share/applications directory.
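mv steam.desktop ~/.local/share/applications/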
We can then look for the application on the host and place the icon on the launcher.
Conclusion
This is the latest iteration of instructions on running GUI or X11 applications and having them appear on the host’s X server.
Note that the applications in the container have full access to the X server (due to how the X server works as there are no access controls). Do not run malicious or untrusted software in the container.