lxc.conf

Language: en

Version: 26 July 2010 (ubuntu - 24/10/10)

Section: 5 (File formats)

NAME

lxc.conf - linux container configuration file

DESCRIPTION

Linux containers (lxc) are always created before being used. This creation defines a set of system resources to be virtualized or isolated when a process is using the container. By default, the pids, sysv ipc and mount points are virtualized and isolated. The other system resources are shared across containers unless they are explicitly defined in the configuration file. For example, if there is no network configuration, the network will be shared between the creator of the container and the container itself; but if the network is specified, a new network stack is created for the container and the container can no longer use the network of its ancestor.

The configuration file defines the different system resources to be assigned for the container. At present, the utsname, the network, the mount points, the root file system and the control groups are supported.

Each option in the configuration file has the form key = value and fits on a single line. A line beginning with the '#' character is a comment.
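
For instance, a fragment of a configuration file could look like the following; the hostname value is only a placeholder:

         # set the hostname of the container
         lxc.utsname = mycontainer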

HOSTNAME

The utsname section defines the hostname to be set for the container. That means the container can set its own hostname without changing the one from the system. That makes the hostname private for the container.

lxc.utsname
specify the hostname for the container

NETWORK

The network section defines how the network is virtualized in the container. The network virtualization acts at layer two. In order to use the network virtualization, parameters must be specified to define the network interfaces of the container. Several virtual interfaces can be assigned and used in a container even if the system has only one physical network interface.

lxc.network.type
specify what kind of network virtualization is to be used for the container. Each time a lxc.network.type field is found, a new round of network configuration begins. In this way, several network virtualization types can be specified for the same container, as well as several network interfaces assigned to one container. The different virtualization types can be:

empty: will create only the loopback interface.

veth: a peer network device is created with one side assigned to the container and the other side attached to a bridge specified by lxc.network.link. If the bridge is not specified, the veth pair device will be created but not attached to any bridge. Otherwise, the bridge has to be set up on the system beforehand; lxc won't handle any configuration outside of the container. By default lxc chooses a name for the network device belonging to the outside of the container; this name is handled by lxc, but if you wish to handle it yourself, you can tell lxc to set a specific name with the lxc.network.veth.pair option.

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. The vlan identifier is specified with the option lxc.network.vlan.id (a sketch is given at the end of this section).

macvlan: a macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. lxc.network.macvlan.mode specifies the mode the macvlan will use to communicate between different macvlan interfaces on the same upper device. The accepted modes are private, vepa and bridge. In private mode (the default), the device never communicates with any other device on the same upper_dev. In vepa mode, the new Virtual Ethernet Port Aggregator (VEPA) mode, it is assumed that the adjacent bridge returns all frames where both source and destination are local to the macvlan port, i.e. the bridge is set up as a reflective relay; broadcast frames coming in from the upper_dev get flooded to all macvlan interfaces in VEPA mode, and local frames are not delivered locally. In bridge mode, a simple bridge is provided between the different macvlan interfaces on the same port: frames from one interface to another are delivered directly and are not sent out externally, while broadcast frames get flooded to all other bridge ports and to the external interface, but when they come back from a reflective relay they are not delivered again. Since all the MAC addresses are known, the macvlan bridge mode does not require learning or STP like the bridge module does.

phys: an already existing interface specified by the lxc.network.link is assigned to the container.

lxc.network.flags
specify an action to perform on the network interface.

up: activates the interface.

lxc.network.link
specify the interface to be used for real network traffic.
lxc.network.name
the interface name is dynamically allocated, but if another name is needed because the configuration files being used by the container use a generic name, eg. eth0, this option will rename the interface in the container.
lxc.network.hwaddr
the interface mac address is dynamically allocated by default to the virtual interface, but in some cases it is necessary to set a fixed mac address, e.g. to resolve a mac address conflict or to always have the same link-local ipv6 address.
lxc.network.ipv4
specify the ipv4 address to assign to the virtualized interface. Several lines specify several ipv4 addresses. The address is in format x.y.z.t/m, eg. 192.168.1.123/24.
lxc.network.ipv6
specify the ipv6 address to assign to the virtualized interface. Several lines specify several ipv6 addresses. The address is in format x::y/m, eg. 2003:db8:1:0:214:1234:fe0b:3596/64
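
As a sketch, the vlan type (not shown in the EXAMPLES section below) could be configured as follows; the parent interface eth0, the vlan id 100 and the address are placeholders:

         lxc.network.type = vlan
         lxc.network.link = eth0
         lxc.network.vlan.id = 100
         lxc.network.flags = up
         lxc.network.ipv4 = 10.0.100.2/24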

NEW PSEUDO TTY INSTANCE (DEVPTS)

For stricter isolation the container can have its own private instance of the pseudo tty.

lxc.pts
If set, the container will have a new pseudo tty instance, making it private to the container. The value specifies the maximum number of pseudo ttys allowed for a pts instance (this limitation is not implemented yet).
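
A minimal sketch; the value 1024 is an arbitrary choice:

         lxc.pts = 1024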

CONTAINER SYSTEM CONSOLE

If the container is configured with a root filesystem and the inittab file is set up to use the console, you may want to specify where the output of this console goes.

lxc.console
Specify a path to a file where the console output will be written.
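
A minimal sketch, where the log file path is only a placeholder chosen by the administrator:

         lxc.console = /var/log/lxc/mycontainer.console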

CONSOLE THROUGH THE TTYS

If the container is configured with a root filesystem and the inittab file is set up to launch a getty on the ttys, this option specifies the number of ttys to be made available to the container. The number of gettys in the inittab file of the container should not be greater than the number of ttys specified in this configuration file, otherwise the excess getty sessions will die and respawn indefinitely, giving annoying messages on the console.

lxc.tty
Specify the number of tty to make available to the container.
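
For instance, to make four ttys available to the container (the number 4 is only an example; it should match the getty entries in the container's inittab):

         lxc.tty = 4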

MOUNT POINTS

The mount points section specifies the different places to be mounted. These mount points will be private to the container and won't be visible by the processes running outside of the container. This is useful to mount /etc, /var or /home, for example.

lxc.mount
specify a file location in the fstab format, containing the mount information.
lxc.mount.entry
specify a mount point corresponding to a line in the fstab format.
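
A sketch of both forms, assuming the fstab file and the rootfs directory below are placeholder paths chosen by the administrator:

         lxc.mount = /var/lib/lxc/mycontainer/fstab
         lxc.mount.entry = proc /var/lib/lxc/mycontainer/rootfs/proc proc nodev,noexec,nosuid 0 0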

ROOT FILE SYSTEM

The root file system of the container can be different than that of the host system.

lxc.rootfs
specify a directory to become the root of the container. If not specified, the container shares its root file system with the host.
lxc.rootfs.mount
where to recursively bind lxc.rootfs before pivoting. This is to ensure success of the pivot_root(8) syscall. Any directory suffices; the default should generally work.
lxc.pivotdir
where to pivot the original root file system under lxc.rootfs, specified relative to that. The default is mnt. It is created if necessary, and also removed after unmounting everything from it during container setup.
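
A minimal sketch, assuming the container's root file system has been installed under a placeholder directory:

         lxc.rootfs = /var/lib/lxc/mycontainer/rootfs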

CONTROL GROUP

The control group section contains the configuration for the different subsystems. lxc does not check the correctness of the subsystem name. This has the disadvantage of not detecting configuration errors until the container is started, but has the advantage of permitting any future subsystem.

lxc.cgroup.[subsystem name]
specify the control group value to be set. The subsystem name is the literal name of the control group subsystem. The permitted names and the syntax of their values are not dictated by LXC; instead they depend on the features of the Linux kernel running at the time the container is started, eg. lxc.cgroup.cpuset.cpus
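
For instance, assuming the running kernel provides the cpuset and memory subsystems, values could be set as follows (the figures are arbitrary):

         lxc.cgroup.cpuset.cpus = 0,1
         lxc.cgroup.memory.limit_in_bytes = 256M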

CAPABILITIES

Capabilities can be dropped in the container if it is run as root.

lxc.cap.drop
Specify the capability to be dropped in the container. A single line defining several capabilities with a space separation is allowed. The format is the lower case of the capability definition without the "CAP_" prefix, eg. CAP_SYS_MODULE should be specified as sys_module. See capabilities(7).

EXAMPLES

In addition to the few examples given below, you will find some other examples of configuration files in /usr/share/doc/lxc/examples.

NETWORK

This configuration sets up a container to use a veth pair device with one side plugged to a bridge br0 (which has been configured before on the system by the administrator). The virtual network device visible in the container is renamed to eth0.

         lxc.utsname = myhostname
         lxc.network.type = veth
         lxc.network.flags = up
         lxc.network.link = br0
         lxc.network.name = eth0
         lxc.network.hwaddr = 4a:49:43:49:79:bf
         lxc.network.ipv4 = 1.2.3.5/24
         lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597

CONTROL GROUP

This configuration will set up several control groups for the application: cpuset.cpus restricts usage to the defined cpus, cpu.shares prioritizes the control group, and devices.allow makes the specified devices usable.

         lxc.cgroup.cpuset.cpus = 0,1
         lxc.cgroup.cpu.shares = 1234
         lxc.cgroup.devices.deny = a
         lxc.cgroup.devices.allow = c 1:3 rw
         lxc.cgroup.devices.allow = b 8:0 rw

COMPLEX CONFIGURATION

This example shows a complex configuration building a complex network stack, using the control groups, setting a new hostname, mounting some locations and changing the root file system.

         lxc.utsname = complex
         lxc.network.type = veth
         lxc.network.flags = up
         lxc.network.link = br0
         lxc.network.hwaddr = 4a:49:43:49:79:bf
         lxc.network.ipv4 = 1.2.3.5/24
         lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
         lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588
         lxc.network.type = macvlan
         lxc.network.flags = up
         lxc.network.link = eth0
         lxc.network.hwaddr = 4a:49:43:49:79:bd
         lxc.network.ipv4 = 1.2.3.4/24
         lxc.network.ipv4 = 192.168.10.125/24
         lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
         lxc.network.type = phys
         lxc.network.flags = up
         lxc.network.link = dummy0
         lxc.network.hwaddr = 4a:49:43:49:79:ff
         lxc.network.ipv4 = 1.2.3.6/24
         lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297
         lxc.cgroup.cpuset.cpus = 0,1
         lxc.cgroup.cpu.shares = 1234
         lxc.cgroup.devices.deny = a
         lxc.cgroup.devices.allow = c 1:3 rw
         lxc.cgroup.devices.allow = b 8:0 rw
         lxc.mount = /etc/fstab.complex
         lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
         lxc.rootfs = /mnt/rootfs.complex
         lxc.cap.drop = sys_module mknod setuid net_raw
         lxc.cap.drop = mac_override

SEE ALSO

chroot(1), pivot_root(8), fstab(5)

lxc(1), lxc-create(1), lxc-destroy(1), lxc-start(1), lxc-stop(1), lxc-execute(1), lxc-kill(1), lxc-console(1), lxc-monitor(1), lxc-wait(1), lxc-cgroup(1), lxc-ls(1), lxc-ps(1), lxc-info(1), lxc-freeze(1), lxc-unfreeze(1), lxc.conf(5)

AUTHOR

Daniel Lezcano <daniel.lezcano@free.fr>