Template Engine #
The term template engine is used in lxd-compose to identify the engine used to generate project files needed in the deploy process but not part of the lxd-compose specification itself.
During the deploy process it may be necessary to generate configuration files based on the project variables.
In lxd-compose there are two ways to use the template engine:
- jinja2: through the j2cli tool it's possible to generate configuration files based on the Jinja2 template engine.
- mottainai: this doesn't require any external tool on the system. It uses the Golang template engine with additional functions from the Mottainai project and from Sprig.
The generation of files from the template engine is driven by the config_templates option, available at project level, group level, or node level.
If no variables are defined at runtime through the hooks, it's possible to test the compilation of the templates without a container with the command:
$> lxd-compose compile -p nginx-proxy
>>> [nginx1] Compile 1 resources... 🍦
>>> [nginx1] - [ 1/ 1] /tmp/nginx/nginx.conf ✔
Compilation completed!
If some variables are generated through the hooks, you can still use the lxd-compose compile command, but the missing variables will be empty.
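For reference, a runtime variable of this kind is normally produced by a hook with the out2var option; this minimal sketch mirrors the pre-group hook of the example project shown later in this page:
hooks:
  - event: pre-group
    node: host
    commands:
      - echo '[{ "user": "user1" }]'
    out2var: json_var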
Jinja2 Engine #
The jinja2 engine is the same engine used by Ansible. It requires the j2cli tool to be installed.
More details about the Jinja engine are available on the project site.
The engine is defined at environment level in this way:
template_engine:
engine: "jinja2"
  # For jinja2 there are a lot of filters in the ansible package
  # that could be loaded with:
opts:
# Enable to_yaml, to_json, etc.
- "--filters"
- "/usr/lib/python3.7/site-packages/ansible/plugins/filter/core.py"
- "contrib/filters/from_json.py"
As described in the j2cli documentation, it's possible to add additional plugins to extend the macros available in your template files.
In the example above, the Ansible filters and a custom filter are loaded.
Once the engine configuration is done, you can define the files to generate inside the config_templates section in this way:
# List of template files to compile before pushing the
# result inside the container.
config_templates:
- source: files/template.j2
dst: files/myconf.yaml
where files/template.j2 is the template file and files/myconf.yaml is the generated file, which can later be pushed directly into the container through the sync_resources option (see the sketch below).
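A minimal sync_resources entry for this could look like the following; the destination path /etc/myapp/myconf.yaml is only an illustrative assumption:
sync_resources:
  # Push the compiled file into the container (destination path is hypothetical).
  - source: files/myconf.yaml
    dst: /etc/myapp/myconf.yaml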
The template file (in this case template.j2) could be something like this:
node:
  # This dumps the node information of the lxd-compose specification in YAML format.
{{ node | to_nice_yaml(indent=2) | indent(2, true) }}
project:
  # This dumps the project information of the lxd-compose specification in YAML format.
{{ project | to_nice_yaml(indent=2) | indent(2, true) }}
# key1 and key2 are defined as variables of the project or generated by hooks.
key1: {{ key1 }}
key2: {{ key2 }}
# key_from_file1 is a variable defined inside the variable file.
key_from_file: {{ key_from_file1 }}
{{ json_var | from_json | to_nice_yaml(indent=2) | indent(2, true) }}
{% for user in json_var | from_json %}
{{ user.user }}
{% endfor %}
This is an example of one of the included variables files:
envs:
key_from_file1: "xx"
There is no limitation on the types of files to generate: you can generate YAML files, JSON files, etc.
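For instance, assuming the Ansible core filters are loaded as in the opts example above, a pure JSON file could be produced with a template containing just:
{{ node | to_nice_json(indent=2) }}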
Generated output
$ cat contrib/examples/envs/files/myconf.conf.yaml
node:
config_templates:
- dst: files/myconf.conf.yaml
source: files/template.j2
entrypoint:
- /bin/sh
- -c
hooks:
- commands:
- echo "Run host command"
event: post-node-creation
node: host
- commands:
- echo "1"
event: post-node-creation
node: ''
- commands:
- apk add curl
  - curl --no-progress-meter https://raw.githubusercontent.com/geaaru/luet/geaaru/contrib/config/get_luet_root.sh | sh
- luet install utils/jq
- echo "${node}" | jq
event: post-node-creation
node: ''
- commands:
- echo "HOST PRE-NODE-SYNC"
event: pre-node-sync
node: host
- commands:
- echo "Start app"
event: post-node-sync
node: ''
out2var: myvar
- commands:
- echo "${myvar}"
event: post-node-sync
node: ''
- commands:
- echo "${key1}"
event: post-node-sync
node: ''
- commands:
- echo "${obj}"
event: post-node-sync
node: ''
- commands:
- echo "${mynode_data1}"
event: post-node-sync
node: ''
- commands:
- echo "HOST ${myvar}"
- echo "${myvar}"
event: post-node-sync
node: host
- commands:
- echo "${myvar}" > /tmp/lxd-compose-var
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag1
node: host
- commands:
- 'echo ''{ "obj1": "value1" }'' | jq ''.obj1''
'
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag2
node: host
out2var: host_var
- commands:
- echo "${host_var}"
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag2
node: host
- commands:
- echo "${json_var}"
event: post-node-sync
node: ''
- commands:
- echo "${runtime_var}"
event: post-node-sync
node: ''
image_source: alpine/3.12
labels:
mynode_data1: data1
name: node1
sync_resources:
- dst: /etc/myapp/myconf.conf.yaml
source: files/myconf.conf.yaml
- dst: /etc/myapp2/
source: files/
project:
description: LXD Compose Example1
groups:
- common_profiles:
- default
- net-mottainai0
connection: local
description: Description1
ephemeral: true
hooks:
- commands:
- echo "HOST PRE-NODE-SYNC (ON GROUP)"
event: pre-node-sync
node: host
name: group1
nodes:
- config_templates:
- dst: files/myconf.conf.yaml
source: files/template.j2
entrypoint:
- /bin/sh
- -c
hooks:
- commands:
- echo "Run host command"
event: post-node-creation
node: host
- commands:
- echo "1"
event: post-node-creation
node: ''
- commands:
- apk add curl
- curl https://raw.githubusercontent.com/geaaru/luet/geaaru/contrib/config/get_luet_root.sh | sh
- luet install utils/jq
- echo "${node}" | jq
event: post-node-creation
node: ''
- commands:
- echo "HOST PRE-NODE-SYNC"
event: pre-node-sync
node: host
- commands:
- echo "Start app"
event: post-node-sync
node: ''
out2var: myvar
- commands:
- echo "${myvar}"
event: post-node-sync
node: ''
- commands:
- echo "${key1}"
event: post-node-sync
node: ''
- commands:
- echo "${obj}"
event: post-node-sync
node: ''
- commands:
- echo "${mynode_data1}"
event: post-node-sync
node: ''
- commands:
- echo "HOST ${myvar}"
- echo "${myvar}"
event: post-node-sync
node: host
- commands:
- echo "${myvar}" > /tmp/lxd-compose-var
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag1
node: host
- commands:
- 'echo ''{ "obj1": "value1" }'' | jq ''.obj1''
'
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag2
node: host
out2var: host_var
- commands:
- echo "${host_var}"
entrypoint:
- /bin/bash
- -c
event: post-node-sync
flags:
- flag2
node: host
- commands:
- echo "${json_var}"
event: post-node-sync
node: ''
- commands:
- echo "${runtime_var}"
event: post-node-sync
node: ''
image_source: alpine/3.12
labels:
mynode_data1: data1
name: node1
sync_resources:
- dst: /etc/myapp/myconf.conf.yaml
source: files/myconf.conf.yaml
- dst: /etc/myapp2/
source: files/
hooks:
- commands:
- 'echo ''[{ "user": "user1" }]''
'
event: pre-group
node: host
out2var: json_var
- commands:
- 'echo ''[{ "user": "user1" }]''
'
event: pre-group
node: host
out2var: json_var
include_env_files:
- ../vars/file1.yml
name: lxd-compose-example1
vars:
- envs:
LUET_YES: 'true'
key1: value1
key2: value2
obj:
foo: baa
key: xxx
- envs:
json_var: '[{ "user": "user1" }]
'
key_from_file1: xx
key1: value1
key2: value2
key_from_file: xx
- user: user1
user1
Mottainai Engine #
In a similar way, the mottainai engine can be used to generate files used in the deploy workflow.
It uses the Golang template engine with additional functions from the Mottainai Server project and from Sprig.
This engine doesn't require external tools.
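As a sketch, the engine is selected at environment level in the same way as jinja2 (the engine name mottainai below is assumed from the engine list above, and the file paths are only illustrative), while the templates are declared through the usual config_templates option:
template_engine:
  engine: "mottainai"

# At project, group or node level:
config_templates:
  - source: files/nginx.conf.tmpl
    dst: files/nginx.conf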
Below is an example related to an NGINX configuration:
nginx.conf.tmpl
user {{ .nginx_user }};
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main
'$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 4 2k;
request_pool_size 4k;
gzip off;
output_buffers 1 32k;
postpone_output 1460;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75 20;
ignore_invalid_headers on;
index index.html;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
{{ range $index, $upstream := .nginx_upstreams }}
upstream {{ index $upstream "name" }} {
server {{ index $upstream "server" }};
keepalive {{ index $upstream "keepalive" }};
}
{{ end }}
server {
listen 80;
server_name {{ .mypublic_domain }};
server_tokens off;
access_log /var/log/nginx/access_log main;
error_log /var/log/nginx/error_log info;
{{ range $index, $loc := .nginx_location_http }}
location {{ index $loc "path" }} {
{{ index $loc "content" }}
}
{{ end }}
}
server {
listen 443 ssl;
server_name {{ .mypublic_domain }};
server_tokens off;
ssl_certificate /certbot/live/{{ .mypublic_domain }}/fullchain.pem;
#ssl_certificate /certbot/live/{{ .mypublic_domain }}/cert.pem;
ssl_certificate_key /certbot/live/{{ .mypublic_domain }}/privkey.pem;
access_log /var/log/nginx/ssl_access_log main;
error_log /var/log/nginx/ssl_error_log info;
{{ range $index, $loc := .nginx_location_ssl }}
location {{ index $loc "path" }} {
{{ index $loc "content" | nindent 10 }}
}
{{ end }}
root /var/www/html;
}
}
If we consider a variable file like this:
envs:
nginx_user: www-data
nginx_logrotate_days: 30
nginx_upstreams:
- name: upstream1
server: 192.168.0.90:8065
keepalive: 32
nginx_reset_htpasswd: "1"
nginx_auth_basic_files:
- path: /etc/nginx/myauth
users:
- user: "user1"
pwd: "xxxxxx"
- user: "user2"
pwd: "yyyyy"
nginx_location_http:
- path: "/"
content: |
deny all;
nginx_location_ssl:
- path: "/"
content: |
deny all;
- path: "/public/"
content: |
allow all;
- path: "/private/"
content: |
satisfy all;
#allow 192.168.0.0/24;
#deny all;
index index.htm;
auth_basic "Restricted Area";
auth_basic_user_file /etc/nginx/myauth;
the generated output is something like the one available below:
Generated output
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main
'$remote_addr - $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
connection_pool_size 256;
client_header_buffer_size 1k;
large_client_header_buffers 4 2k;
request_pool_size 4k;
gzip off;
output_buffers 1 32k;
postpone_output 1460;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 75 20;
ignore_invalid_headers on;
index index.html;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
upstream upstream1 {
server 192.168.0.90:8065;
keepalive 32;
}
server {
listen 80;
server_name example1.com;
server_tokens off;
access_log /var/log/nginx/access_log main;
error_log /var/log/nginx/error_log info;
location / {
deny all;
}
}
server {
listen 443 ssl;
server_name example1.com;
server_tokens off;
ssl_certificate /certbot/live/example1.com/fullchain.pem;
#ssl_certificate /certbot/live/example1.com/cert.pem;
ssl_certificate_key /certbot/live/example1.com/privkey.pem;
access_log /var/log/nginx/ssl_access_log main;
error_log /var/log/nginx/ssl_error_log info;
location / {
deny all;
}
location /public/ {
allow all;
}
location /private/ {
satisfy all;
#allow 192.168.0.0/24;
#deny all;
index index.htm;
auth_basic "Restricted Area";
auth_basic_user_file /etc/nginx/myauth;
}
root /var/www/html;
}
}