The more I use LXD the less I understand why we're all using Docker. Why oh why?

Exposing LXD container ports on the host

You can either use bridged networking, or use the default NAT networking and add a DNAT rule to your firewall:


iptables \
  -t nat \
  -A PREROUTING \
  -i eth0 \
  -p tcp \
  -d $PUBLIC_IP \
  --dport $PORT \
  -j DNAT \
  --to-destination $CONTAINER_IP:$PORT \
  -m comment \
  --comment "forward to the container"
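
Alternatively, LXD can do the forwarding itself with a proxy device, so you don't have to manage iptables rules at all. A sketch, where the container name mycontainer, the device name myport8080 and the port are all illustrative:

```shell
# "myport8080" is an arbitrary device name.
# listen= is the host side, connect= is resolved inside the container.
lxc config device add mycontainer myport8080 proxy \
  listen=tcp:0.0.0.0:8080 \
  connect=tcp:127.0.0.1:8080
```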

Export and import a container

The command line interface is so easy to use, you hardly need to look at the documentation.

$ lxc export mycontainer mycontainer.tar.gz

This will include all snapshots. To optimise the backup file, you might want to look into adding --instance-only and --optimized-storage.
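
For instance, to skip the snapshots and let the storage driver write its native, compact format (both flags exist in recent LXD releases; I haven't measured the size difference myself):

```shell
# --instance-only skips snapshots; --optimized-storage uses the storage
# driver's native export stream, which is only importable into a pool
# backed by the same driver.
lxc export mycontainer mycontainer.tar.gz --instance-only --optimized-storage
```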

This tarball can then be used on the same host, or copied to a different machine where you want the same container. To make use of it, you of course run import:

$ lxc import mycontainer.tar.gz
$ lxc start mycontainer

And it just works! mycontainer is now up and running.

Mount your home directory read/write inside an LXD container

To mount your home directory read/write inside an LXD container, do:

$ lxc config device add buster myhome disk source=$HOME path=$HOME
$ lxc config set buster raw.idmap "both 1000 0"
$ lxc restart buster

Files written by the root user (which has user id=0) inside the container are owned by my own torstein user on the host system (which has user id=1000).

The crux here is the user id mapping. To give another example: If my host user had user id 1200 and the user I wanted to map to inside the container had id 3000, I would instead configure:

$ lxc config set buster raw.idmap "both 1200 3000"
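
raw.idmap also accepts separate uid and gid lines, one mapping per line, in case the two differ (the ids here are made up):

```shell
# Map host uid 1200 -> container uid 3000, and the same for the gid.
lxc config set buster raw.idmap "$(printf 'uid 1200 3000\ngid 1200 3000')"
```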

AFAIK, there's no Docker equivalent, see issue 2259 in their bug tracker. With Docker, you have to hack around it: chown the files after mounting them so that a non-root user inside the container can write to them, and on the host system, chown files created by the container so that non-root users can write to them.

Run Docker containers inside an LXD container

You can even run Docker containers inside an LXD container. By passing security.nesting=true to lxc when creating a container, you can run other containers inside it:

$ lxc launch ubuntu box-in-a-box -c security.nesting=true

You can now lxc exec into the box-in-a-box and install Docker like normal, after which lxc ls will list the Docker interfaces alongside the eth0 device which is used for communicating with your LXD container:

❯ lxc ls box-in-a-box
|     NAME     |  STATE  |       IPV4        |                     IPV6                     |   TYPE    | SNAPSHOTS |
| box-in-a-box | RUNNING | (br-b0334a281f15) | fd42:3cb:5f02:b33b:216:3eff:fee5:2320 (eth0) | CONTAINER | 0         |
|              |         | (docker0)         |                                              |           |           |
|              |         | (eth0)            |                                              |           |           |

Note that these 172.x IPs are not accessible from your host machine, so you need to proxy these from something that listens on eth0 in the box-in-a-box container. I prefer running nginx there to proxy requests to the Docker container IPs so that I can easily access them from my machine.
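
As an example of that last part, a minimal nginx site inside box-in-a-box could look like this; the address 172.17.0.2:8080 is a placeholder for whatever your Docker container actually uses:

```nginx
# /etc/nginx/conf.d/docker-proxy.conf (inside box-in-a-box)
server {
    # nginx listens on eth0, which the host can reach
    listen 80;

    location / {
        # Placeholder address; find the real one with:
        #   docker inspect -f '{{ .NetworkSettings.IPAddress }}' <name>
        proxy_pass http://172.17.0.2:8080;
    }
}
```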

Grow the LXD storage

On my system, I'd just hit ENTER when installing LXD, and thus had a BTRFS-backed storage pool:

$ lxc storage ls
|  NAME   | DESCRIPTION | DRIVER |                   SOURCE                   | USED BY |
| default |             | btrfs  | /var/snap/lxd/common/lxd/disks/default.img | 18      |

To add 20 GB to it, I did the following:

First off, better safe than sorry, I stopped LXD:

# snap stop lxd

After that, I grew the file itself:

# truncate -s +20G /var/snap/lxd/common/lxd/disks/default.img

Then I found the loop device it was attached to:

# losetup | grep default.img
/dev/loop6         0      0         1  0 /var/snap/lxd/common/lxd/disks/default.img   0     512

Then told the kernel to pick up the new size of the backing file:

# losetup -c /dev/loop6

Then finally, mount the device and use btrfs to resize it:

# mkdir /mnt/foo
# mount /dev/loop6 /mnt/foo
# btrfs filesystem resize max /mnt/foo
# umount /mnt/foo

Once that was done, I started LXD again:

# snap start lxd

My containers could now use 20 GB more.
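
The truncate step, by the way, is instant because the image is grown as a sparse file; you can see the same mechanics on a scratch file without root:

```shell
# Create a 100M sparse file, then grow it by 20M, as with default.img above.
truncate -s 100M /tmp/scratch.img
truncate -s +20M /tmp/scratch.img
stat -c %s /tmp/scratch.img   # 125829120 bytes, i.e. 120M
```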


Networking in LXD containers doesn't work

The containers don't get IPv4 addresses and networking doesn't work from within the containers. The problem is the same on Debian, Ubuntu and Alpine.

❯ lxc list
|    NAME    |  STATE  | IPV4 |                     IPV6                     |    TYPE    | SNAPSHOTS |
| buster     | RUNNING |      | fd42:3cb:5f02:b33b:216:3eff:fe5a:710e (eth0) | PERSISTENT | 0         |
| first      | RUNNING |      | fd42:3cb:5f02:b33b:216:3eff:fe34:c776 (eth0) | PERSISTENT | 0         |
| ubuntu1904 | RUNNING |      | fd42:3cb:5f02:b33b:216:3eff:fe67:ccf1 (eth0) | PERSISTENT | 0         |
❯ lxc exec buster bash
root@buster:~# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::216:3eff:fe5a:710e  prefixlen 64  scopeid 0x20<link>
        inet6 fd42:3cb:5f02:b33b:216:3eff:fe5a:710e  prefixlen 64  scopeid 0x0<global>
        ether 00:16:3e:5a:71:0e  txqueuelen 1000  (Ethernet)
        RX packets 24  bytes 4026 (3.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 2934 (2.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
root@buster:~# apt update
Err:1 http://deb.debian.org/debian buster InRelease
  Temporary failure resolving 'deb.debian.org'

The reason for this was that my firewall blocked the requests from the DHCP server LXD runs to assign IPs to the containers: the LXD snap on my Debian system was talking to a different iptables backend than the one my firewall rules lived in. To solve this, I switched to the legacy backend:

# update-alternatives --set iptables /usr/sbin/iptables-legacy
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# snap restart lxd
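
If you're unsure which backend is in effect, iptables 1.8 and newer prints it as part of its version string:

```shell
# Prints e.g. "iptables v1.8.2 (legacy)" or "... (nf_tables)"
iptables --version
update-alternatives --display iptables
```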

Now, my containers got IPv4 addresses and container networking worked like a charm:

❯ lxc list
|    NAME    |  STATE  |  IPV4  |                     IPV6                     |    TYPE    | SNAPSHOTS |
| buster     | RUNNING | (eth0) | fd42:3cb:5f02:b33b:216:3eff:fe5a:710e (eth0) | PERSISTENT | 0         |
| first      | RUNNING | (eth0) | fd42:3cb:5f02:b33b:216:3eff:fe34:c776 (eth0) | PERSISTENT | 0         |
| ubuntu1904 | RUNNING | (eth0) | fd42:3cb:5f02:b33b:216:3eff:fe67:ccf1 (eth0) | PERSISTENT | 0         |

Still no IPv4

On a different Debian system, I still couldn't get an IPv4 address, even after updating the iptables alternatives as outlined in the previous section.

After investigating this, I discovered that I had a DNS server running:

root@geronimo ~ # netstat -nlp --tcp | grep -w 53
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      794/dnsmasq
tcp6       0      0 :::53                   :::*                    LISTEN      794/dnsmasq

This conflicts with LXD, which wants to fire up its own DNS server for the container bridge. Since I had no need for the existing DNS server (I'd forgotten why I'd installed it in the first place), I removed it and restarted LXD:

# apt-get remove dnsmasq
# snap restart lxd

And lo and behold, my containers were started again, this time with a shiny IPv4 address!

snap-confine has elevated permissions

snap-confine has elevated permissions and is not confined but should
be. Refusing to continue to avoid permission escalation attacks

It's due to AppArmor and the kernel you're running. I remedied this with:

# apparmor_parser -r /etc/apparmor.d/*snap-confine*
# apparmor_parser -r /var/lib/snapd/apparmor/profiles/snap*

Licensed under CC BY Creative Commons License ~ ✉ torstein.k.johansen @ gmail ~ 🐦 @torsteinkrause ~