Environments #
The environments are characterized by these properties or elements:
- version: describes the version of the specification. At the moment the only supported version is “1”.
- template_engine: the template_engine section configures the engine used to create the project files. The available engines are jinja2 (which uses the j2cli tool and the Jinja2 framework, as Ansible does) and mottainai, which uses Golang's template renderer. For the jinja2 engine it's possible to set additional options for j2cli through the opts field.
- profiles: the list of LXD profiles used by the environment that can be added and/or updated on the LXD instances used by the projects.
- networks: the list of LXD networks used by the environment that can be added and/or updated on the LXD instances used by the projects.
- commands: the list of commands related to the projects of the environment. It's helpful to keep a register of the most useful commands to run on a running system for backup, validation, etc. The commands are aliases of the apply command where it's possible to define flags, additional hooks, etc.
- storages: the list of LXD storages used by the environment that can be added and/or updated on the LXD instances used by the projects.
- projects: the projects to deploy.
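Hereinafter, a minimal sketch of an environment file that combines these elements. The names and values are illustrative, and the group fields (connection, nodes) are assumptions that mirror the node example shown later on this page:
version: "1"
template_engine:
  engine: "mottainai"
  # For the jinja2 engine, extra j2cli options can be
  # passed through the opts field.
profiles: []
networks: []
storages: []
commands: []
projects:
  - name: "example-project"
    description: "A minimal project with one group and one node."
    groups:
      - name: "example-group"
        # Illustrative: the LXD remote where the nodes are created.
        connection: "local"
        nodes:
          - name: "node1"
            image_source: "alpine/3.11"
            image_remote_server: "images"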
lxd-compose reads all the files under the directories defined in the env_dirs parameter of the configuration file and loads the specifications of all projects in memory before running commands.
The profiles, networks, storages, and commands are all loaded and rendered through the LXD Compose render engine, which permits customizing these entities without creating the same resource multiple times with only small differences.
Hereinafter, an extract of the configuration file available on LXD Compose Galaxy:
general:
  debug: false
  # lxd_confdir: ./lxd-conf
  push_progressbar: false
logging:
  level: "info"
# Define the list of directories from which
# the environments are loaded.
env_dirs:
  - ./envs/nginx
  - ./envs/mottainai-server
where the directories are defined from which lxd-compose loads the envs/{nginx,mottainai-server}/*.yml or .yaml files.
The environment files can be pure YAML files or templates for the Helm engine; in the latter case, you need to define the render values file from the CLI or from the configuration file.
For example, you can define the source image used by a node inside a group in this way:
nodes:
  - name: "node1"
    image_source: "alpine/{{ .Values.alpine_version }}"
    image_remote_server: "images"
and, to test your services with all the available versions of the Alpine images, define different render files like this:
# file: alpine3_11.yml
alpine_version: "3.11"
# file: alpine3_10.yml
alpine_version: "3.10"
At this point you can run your project in this way:
$> lxd-compose apply myproject --render-values alpine3_11.yml
$> lxd-compose destroy myproject
$> lxd-compose apply myproject --render-values alpine3_10.yml
Alternatively, you can set the default render file inside the config:
# file: .lxd-compose.yml
render_default_file: alpine3_11.yml
and then override the value only when it’s needed:
$> lxd-compose apply myproject
$> lxd-compose destroy myproject
$> lxd-compose apply myproject --render-values alpine3_10.yml
In general, the render engine is used to generate the environment files at runtime, while the template engine defined inside the environment is used to render the files used inside the deploy workflow.
It's good practice to avoid using the same group names across different projects, or nodes with the same names, because inside a project it's possible to define a hook that executes a command on a node external to the project. The lxd-compose validate command blocks duplicates at the moment.
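For example, a hook could reference a target node by name. A minimal, hypothetical sketch (the event name and the node reference are assumptions for illustration):
hooks:
  - event: post-node-creation
    # The target node is referenced by name: duplicated node
    # names across projects would make this reference ambiguous.
    node: "external-node1"
    commands:
      - echo "running on a node outside this project"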
Profiles #
Inside the environment files the LXD profiles can be defined:
# Define the list of LXD Profiles used by all projects.
# These profiles are not mandatory. A user could create
# their own profiles without using this list.
profiles:
  - name: "mottainai-https"
    description: "Profile to export the HTTPS port to the host"
    devices:
      https:
        bind: host
        connect: tcp:0.0.0.0:443
        listen: tcp:0.0.0.0:443
        nat: false
        proxy_protocol: true
        type: proxy
This section is used only for tracing the profiles needed by the infrastructure. It is possible to create and/or update profiles through the lxd-compose profile subcommand.
The definition of the profiles can be inline in the environment YAML or in external files through the include_profiles_files attribute.
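For example, similar to the include_commands_files attribute shown later for commands (the file path is illustrative):
include_profiles_files:
  - profiles/mottainai-https.yml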
Networks #
In a similar way, inside an environment file it's possible to define the list of network devices or bridges used by the LXD instances.
networks:
  - name: "mottainai0"
    type: "bridge"
    config:
      bridge.driver: native
      dns.domain: mottainai.local
      dns.mode: managed
      ipv4.address: 172.18.10.1/23
      ipv4.dhcp: "true"
      ipv4.firewall: "true"
      ipv4.nat: "true"
      ipv6.nat: "false"
      ipv6.dhcp: "false"
All the possible configuration options for both networks and profiles are described in the LXD documentation; lxd-compose maps the API configuration directly.
Some examples are available on LXD Compose Galaxy.
The definition of the networks can be inline in the environment YAML or in external files through the include_networks_files attribute.
ACLs #
It's possible to define traffic rules that control network access between different instances connected to the same network or to other networks. This can be done directly on the NICs of an instance or on a network.
lxd-compose permits tracing the ACLs at the environment level and then using them through the security.acls option in the device section of the container or of the network.
Additional details can be retrieved from the LXD documentation. Assigning security.acls directly to a NIC of a container is possible only for OVN networks.
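Hereinafter, a sketch of how security.acls could be set on a NIC device inside a profile; the ovn0 network name is illustrative and must refer to an OVN network:
profiles:
  - name: "ovn-acl-profile"
    description: "Profile attaching the acltest ACL to a NIC"
    devices:
      eth0:
        type: nic
        network: ovn0
        security.acls: acltest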
Normally, the ACLs must be defined and created before the networks, because a specific ACL can be assigned to a network in this way:
networks:
  - name: "mottainai0"
    type: "bridge"
    config:
      bridge.driver: native
      dns.domain: mottainai.local
      dns.mode: managed
      ipv4.address: 172.18.1.249/23
      ipv4.dhcp: "true"
      ipv4.firewall: "true"
      ipv4.nat: "true"
      ipv6.nat: "false"
      ipv6.dhcp: "false"
      security.acls: acltest
Hereinafter, an example of how an ACL with ingress and egress rules can be created:
acls:
  - name: "acltest"
    ingress:
      - action: allow
        destination: 172.18.1.1,172.18.1.2
        protocol: icmp4
        state: enabled
    egress:
      - action: allow
        destination: 0.0.0.0/0
        destination_port: 443
        protocol: tcp
        state: enabled
In the example, the acltest ACL allows ingress ICMPv4 traffic to 172.18.1.1 and 172.18.1.2, and allows egress traffic to any destination for TCP/443 flows.
When the ACLs are defined in the environment file, you can create and/or update them with:
$# lxd-compose acl create myproject -u -a
Commands #
The commands have different missions:
- permit defining and registering maintenance tasks and/or particular hooks to run over existing containers of already deployed projects. An example could be the task that updates the letsencrypt certificate of an existing HTTP service.
- permit deploying a specific project with customizations (different vars files, flags, etc.). For example, the task to build LXD images on LXD Compose Galaxy is a single project that supplies different commands as shortcuts to build the different LXD images.
Inside the environment file, the commands can be defined inline:
commands:
  - name: mottainai-proxy-update-cerbot
    description: |
      Update the letsencrypt certificate
      on the Mottainai Proxy.
      NOTE: the container must be already created.
    project: mottainai-server-services
    apply_alias: true
    enable_groups:
      - mottainai-proxy1
    enable_flags:
      - certbot_standalone
or through includes:
include_commands_files:
- commands/certbot.yml
- commands/backup-certbot.yml
Obviously, using include_commands_files permits reusing the same command across multiple projects.
Storages #
Inside LXD there are different ways to set up the storage: btrfs, zfs, lvm, loopback, etc.
The storage is the main element when an LXD instance is configured. This is the reason why it's important to trace the configuration options used on a specific remote.
The LXD Compose Galaxy already has a good list of possible configurations that users can use in their projects.
The storage specs can be defined inline inside the environment YAML or as included files through the include_storages_files attribute.
An example of a btrfs loopback storage:
name: "btrfs-loopback"
documentation: |
BTRFS Storage Pool Loop disk.
driver: "btrfs"
config:
size: "150GB"
btrfs.mount_options: "rw,relatime,space_cache,compress=zstd:3"
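If the spec above is saved in an external file, it can then be included through the include_storages_files attribute (the path is illustrative):
include_storages_files:
  - storages/btrfs-loopback.yml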