the ramblings of a random norwegian techie

I’ve never liked Docker. In fact, I’ve always kind of hated it. The reason is simple - I’m an operational engineer. I do operations, not development. When Docker first came out, everyone jumped on the technology and praised it for making development so much easier. And I agree - containers can make development easier. However, at the time people were setting up Docker containers for everything, and standard procedure was to set up a container and forget it. That is really bad practice, operations-wise. In operations we like things that are automatically maintained, and most tutorials I came across only told you how to get the Docker image up and running. They said nothing about how it should be maintained, or how you could automate that task.

This is still kinda true today. But, as I’ve come to learn with time, best practices have developed, and there exist multiple systems that can maintain your Docker containers for you. I recently received a tip to check out Watchtower, and it seems like a simple and nice tool that can pull the latest version of your images and run them automatically. I have come across Watchtower before, but as I’ve mostly been ignoring everything Docker, I silently ignored this as well.

Below I will explain how to set up Docker and Watchtower, and lastly, how to set up Kanboard with Watchtower so that it is automatically maintained. Why Kanboard? Just because I want to try it out, and it’s a good example of a simple application that can be deployed with Docker.

Installing Docker

Some of this has been stolen from DigitalOcean’s article on How To Install and Use Docker on Ubuntu 18.04, which I recommend you read. Also, some is from Docker’s own official documentation.

In the article, they want you to add Docker’s own repositories. However, I like to use Ubuntu’s own repositories (just for the maintainability), so I will use those.

First off, update your system.

$ sudo apt update
$ sudo apt dist-upgrade

Then we can install the Docker-package.

$ sudo apt install docker.io

And that’s it. Let’s check the version.

$ docker --version
Docker version 18.06.1-ce, build e68fc7a


We should also add our own user to the docker-group, so we can use Docker without sudo, and then log into the new group.

$ sudo adduser username docker
$ newgrp docker

You should now be able to control Docker with your own user.

Testing Docker

You can test Docker by simply running the following

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

This tries to start the image hello-world. Docker can’t find the image locally (because we haven’t downloaded any images), so it fetches it from Docker Hub, and runs it.

I will probably add another article later with more info on how to actually use Docker, and how to create your own Docker containers. And also how to build them continuously.

Installing Watchtower

Watchtower is itself packaged as a Docker container, which makes “installation” rather easy. It is apparently as simple as this:

$ docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower

We can use docker ps to see if it is running:

$ docker ps
CONTAINER ID  IMAGE             COMMAND        CREATED        STATUS        PORTS  NAMES
d7b9cba16215  v2tec/watchtower  "/watchtower"  3 seconds ago  Up 3 seconds         watchtower

Initializing Watchtower, and other containers

This, however, is no way to run a Docker container on a permanent basis. If we reboot the host, the Docker container will not come back up.

Docker has a way around this. You can specify --restart [no|on-failure|unless-stopped|always]. But this means that you get two init-systems you need to be aware of on your host - both systemd and Docker.

Another alternative is to write a systemd-service file for Watchtower, and every other Docker container that you will be running permanently. If you do this, you also have a nice place to put environmental variables and such as well. Personally, I think I prefer this.

Here is a systemd-service for Watchtower:

[Unit]
Description=Watchtower Docker container
After=docker.service
Requires=docker.service

[Service]
# Don't restart - it will conflict with Watchtower.
Restart=no

# Start with removing old images
ExecStartPre=-/usr/bin/docker rm -f watchtower
ExecStart=/usr/bin/docker run --rm \
                              --name watchtower \
                              -v /var/run/docker.sock:/var/run/docker.sock \
                              v2tec/watchtower \
                              --schedule "0 32 4 * * *"
ExecStop=/usr/bin/docker stop -t 2 watchtower

[Install]
WantedBy=multi-user.target
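
Assuming the unit file is saved as /etc/systemd/system/watchtower.service (the path is my assumption), it can then be enabled and started like any other service:

```shell
$ sudo systemctl daemon-reload
$ sudo systemctl enable watchtower.service
$ sudo systemctl start watchtower.service
```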


Installing Kanboard

Alright. Now that we have Docker working, and Watchtower set up, we can set up Kanboard.

I plan on using a systemd-service and Watchtower for Kanboard as well, so let’s start with creating some directories. I plan on using /var/local to store files for Docker.

$ sudo mkdir -p /var/local/docker-kanboard/{data,plugins,ssl}

When that’s done, we can create our database. I use postgres.

$ sudo -u postgres psql
psql (10.6 (Ubuntu 10.6-0ubuntu0.18.10.1))
Type "help" for help.

postgres=# create database kanboard;
postgres=# create user kanboard with encrypted password 'LALALA';
postgres=# grant all on database kanboard to kanboard;
postgres=# \q

Also, we need to be able to connect to postgres from the container, so you need to make sure that postgres listens on the right address (I just listen on all interfaces, and use a firewall to make sure no-one can connect externally). And our new kanboard-user needs permission to connect from the Docker network.

I’ve added this line in /etc/postgresql/10/main/pg_hba.conf:

host    kanboard        kanboard        172.17.0.0/16           scram-sha-256

This means that any host with an address in 172.17.0.0/16 (the default Docker bridge network) can connect to the database kanboard with the user kanboard, using scram-sha-256 as the auth-method.
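
To verify that the container will actually be able to log in, you can test the connection from the host first (server-hostname here stands in for your own hostname, as in the service file below):

```shell
$ psql "postgres://kanboard:LALALA@server-hostname:5432/kanboard" -c '\conninfo'
```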

Now I need a systemd-service for kanboard.

[Unit]
Description=Kanboard Docker container
After=docker.service watchtower.service
Requires=docker.service

[Service]
# Don't restart - it will conflict with Watchtower.
Restart=no

# Start with removing old images
ExecStartPre=-/usr/bin/docker rm -f kanboard
ExecStart=/usr/bin/docker run --rm \
                              --name kanboard \
                              -e DATABASE_URL="postgres://kanboard:LALALA@server-hostname:5432/kanboard" \
                              -v /var/local/docker-kanboard/data:/var/www/app/data \
                              -v /var/local/docker-kanboard/plugins:/var/www/app/plugins \
                              -v /var/local/docker-kanboard/ssl:/etc/nginx/ssl \
                              -p 8002:80 \
                              kanboard/kanboard
ExecStop=/usr/bin/docker stop -t 2 kanboard

[Install]
WantedBy=multi-user.target


And that’s kinda it. If I run systemctl start kanboard.service now, I will have Kanboard available on port 8002. I can now set up nginx as a reverse proxy for this, so Kanboard becomes available on its own hostname.
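
A minimal sketch of what that proxy could look like - the hostname kanboard.domain.tld and the certificate paths here are made up:

```nginx
server {
    listen 443 ssl http2;
    server_name kanboard.domain.tld;

    # Certificate paths are placeholders.
    ssl_certificate /etc/ssl/letsencrypt/kanboard.domain.tld.pem;
    ssl_certificate_key /etc/ssl/letsencrypt/kanboard.domain.tld.key;

    location / {
        # The Kanboard container is published on port 8002.
        proxy_pass http://127.0.0.1:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```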

Now all I need is something to continuously build any Docker image I create! I will try to write an article about this some time in the future.

As mentioned in my post about using nsupdate and bind to set up dynamic DNS, I have a home server which I use to store several things. I then use an nginx-proxy from my main server to reach these things. The problem with using a dynamic DNS to reach the server, is that I need to restart nginx every time the address changes. This isn’t a huge problem as the address doesn’t change that often, but it is annoying when it happens.

My plan here is to set up a VPN between these machines, so that I can reach my home server from my main server using a static address. To achieve this, my home server will connect to my main server using WireGuard. Hopefully, this will give me IPv6 on my home server as well (as my ISP does not provide this…).

This post is mainly written so I can remember what I did in the future. I will shamelessly steal from guides from both DigitalOcean and Linode, and if you’re looking for a guide to do this yourself, you should probably check out both.

Going forward my home server will be the client, and my main server will be the server.

Installing WireGuard

WireGuard provides a PPA for Ubuntu, and is quite easy to install. Just run the following commands on both the server and the client.

$ sudo apt install software-properties-common
$ sudo add-apt-repository ppa:wireguard/wireguard
## Hit enter when prompted if you want to add the new source
$ sudo apt update
$ sudo apt install wireguard-dkms wireguard-tools

Generating keys

Now we need to create the keys. This should be done on both server and client.

$ cd /etc/wireguard
$ umask 077
$ wg genkey | sudo tee privatekey | wg pubkey | sudo tee publickey

This will generate a private key, put it in /etc/wireguard/privatekey, and pipe it to the wg pubkey-command, which derives the corresponding public key and puts it in /etc/wireguard/publickey.
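
If you ever want to verify that the two files still match, the public key can always be re-derived from the private key:

```shell
$ sudo bash -c 'wg pubkey < /etc/wireguard/privatekey'
# Compare the output with the contents of /etc/wireguard/publickey.
```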


Configure the server

Edit /etc/wireguard/wg0.conf. It should look something like this:

[Interface]
PrivateKey = <Server Private Key>
Address = <Server VPN IPv4>/24, fd10:21:12::1/64
ListenPort = 51820
# SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eno1 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eno1 -j MASQUERADE

[Peer]
# Client
PublicKey = <Client Public Key>
AllowedIPs = <Client VPN IPv4>/32, fd10:21:12::2/128

The addresses are just randomly chosen private addresses. Please note that the PublicKey under [Peer] should be the client’s public key.

I have decided to comment out the SaveConfig-option. This option will save any changes you make to the live VPN-connection (using wg) to the config-file. This is nice if you actually make changes live. I prefer to update config-files.

Also note that the AllowedIPs-option works as an ACL on the server-side, and decides what to route through the VPN on the client-side.

The iptables-rules are only necessary if you want to be able to surf the web through the VPN. If you only want a VPN between the machines, you can remove PostUp and PostDown. If you do want to surf the web, you will also have to uncomment these two lines in your /etc/sysctl.conf-file:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

And then you have to run sudo sysctl -p.

Now we can enable the wireguard-service and start it by running

$ sudo systemctl enable wg-quick@wg0
$ sudo systemctl start wg-quick@wg0

WireGuard should now be good to go on the server.

Configure the client

/etc/wireguard/wg0.conf should look like this:

[Interface]
PrivateKey = <Client Private Key>
Address = <Client VPN IPv4>/24, fd10:21:12::2/64
#SaveConfig = true

[Peer]
# Server
PublicKey = <Server Public key>
Endpoint = <Server Public IP>:51820
AllowedIPs = <VPN IPv4 net>/24, ::/0
PersistentKeepalive = 25

As I mentioned, I don’t have IPv6 at home, so I wish to route all my IPv6 traffic through the server. I do have IPv4 at home, so for IPv4 I will only route the VPN-net through the VPN.

I’ve thrown in PersistentKeepalive for good measure, since my home-server is behind NAT. This will send keepalive-packets through the tunnel, so that it stays open even when it’s not in use.

Enable the wireguard service and start it by running

$ sudo systemctl enable wg-quick@wg0
$ sudo systemctl start wg-quick@wg0
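
Before testing with ping, you can check that the tunnel is actually up with wg itself; a recent handshake and non-zero transfer counters are a good sign:

```shell
$ sudo wg show wg0
```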

Testing the VPN

First, let’s ping the client from the server.

$ ping -c3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=8.53 ms
64 bytes from icmp_seq=2 ttl=64 time=8.84 ms
64 bytes from icmp_seq=3 ttl=64 time=9.03 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 8.538/8.806/9.038/0.219 ms

$ ping -6 fd10:21:12::2 -c3
PING fd10:21:12::2(fd10:21:12::2) 56 data bytes
64 bytes from fd10:21:12::2: icmp_seq=1 ttl=64 time=9.75 ms
64 bytes from fd10:21:12::2: icmp_seq=2 ttl=64 time=10.0 ms
64 bytes from fd10:21:12::2: icmp_seq=3 ttl=64 time=9.07 ms

--- fd10:21:12::2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.077/9.634/10.073/0.422 ms

Alright! Now, let’s ping the server from the client!

$ ping -c3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=7.86 ms
64 bytes from icmp_seq=2 ttl=64 time=8.14 ms
64 bytes from icmp_seq=3 ttl=64 time=9.03 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 7.867/8.347/9.035/0.509 ms

$ ping -6 fd10:21:12::1 -c3
PING fd10:21:12::1(fd10:21:12::1) 56 data bytes
64 bytes from fd10:21:12::1: icmp_seq=1 ttl=64 time=9.23 ms
64 bytes from fd10:21:12::1: icmp_seq=2 ttl=64 time=10.1 ms
64 bytes from fd10:21:12::1: icmp_seq=3 ttl=64 time=8.87 ms

--- fd10:21:12::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 8.877/9.413/10.125/0.536 ms

And, lastly, let us try to ping Google over IPv6, to see if my client can reach the world using IPv6.

$ ping -6 -c3 google.com
PING google.com(2a00:1450:400f:809::200e) 56 data bytes
64 bytes from (2a00:1450:400f:809::200e): icmp_seq=1 ttl=56 time=19.3 ms
64 bytes from (2a00:1450:400f:809::200e): icmp_seq=2 ttl=56 time=19.7 ms
64 bytes from (2a00:1450:400f:809::200e): icmp_seq=3 ttl=56 time=19.8 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 19.381/19.658/19.875/0.206 ms

Great Success!


I’ve been using the Google Authenticator PAM Module for years. It works great, and is easy to set up. But, I generally try to stay away from all things Google, so I wanted to set up two-factor ssh using something else.

After a bit of googling (…), I found OATH Toolkit. Reading their documentation, it seems rather easy to set up. Also, this blog has a nice TL;DR of the setup. I’ve pretty much followed that, and added a few bits.

Setting up oathtool

# Install oathtool.
sudo apt install oathtool libpam-oath

export HEX_SECRET=$(head -15 /dev/urandom | sha1sum | cut -b 1-30)

oathtool --verbose --totp $HEX_SECRET --digits=8

# Type in the Base32-secret on your phone

sudo touch /etc/users.oath
sudo chmod 0600 /etc/users.oath

# Running subshell so we can send output to file with sudo-permissions
sudo /bin/bash -c "echo HOTP/T30/8 $USER - $HEX_SECRET >> /etc/users.oath"

# Unset your secret
unset HEX_SECRET
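
For reference, each line in /etc/users.oath has the format <option> <user> <pin> <secret>. With 8-digit codes (to match the --digits=8 above), a finished line would look something like this - the user and secret here are made up:

```
HOTP/T30/8 dennis - 5c5a75a87ba1b48cbacfca0f02c081
```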

Setting up an access-list for two-factor

Now, I want to be able to define whom I require two-factor for, and from where I require it. I have a couple of hosts that I trust, which I can log in from in case I lose my two-factor device.

Create /etc/security/login_token.conf. My file has the following contents:

# Do not require two-factor from here:
+ : dennis : <list of trusted hosts>

# lolnope don't need two-factor at all
+ : lolnope : ALL

# Demand two-factor from everywhere and everyone else
- : ALL : ALL

See man 5 access.conf for details on the format.

Setting up libpam-oath

First install the package.

sudo apt install libpam-oath

Add the following to the top of /etc/pam.d/sshd:

# Exceptions from two-factor
auth    [success=1 default=ignore]      pam_access.so accessfile=/etc/security/login_token.conf
# Two-factor
auth    required        pam_oath.so usersfile=/etc/users.oath


Now all we need to do is to enable two-factor in sshd_config. Set ChallengeResponseAuthentication to yes in /etc/ssh/sshd_config. Now we can restart ssh, and test it!
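
That is (the service is called ssh on Debian and Ubuntu; other distributions may use sshd):

```shell
$ sudo systemctl restart ssh
```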


This is what it should look like when logging in:

$ # I use ssh-keys, so I need to auth without them
$ ssh -o PubkeyAuthentication=no dennis@host
One-time password (OATH) for `dennis':

And, when logging in from one of the hosts I’ve defined in /etc/security/login_token.conf, I’m not asked about the OTP!

This article is a work in progress.

So, I need a new backup-system.

Today I use tarsnap on my servers, and Arq on my personal computer(s). Tarsnap uses Amazon S3, and Arq backs stuff up to my home-server. This works fairly well, but tarsnap isn’t the cheapest solution, and the way I use Arq isn’t redundant - if my home-server breaks down, my backup is gone.

I don’t have too many requirements for a backup solution, but I have some - in prioritized order:

  • Files need to be encrypted locally before they are uploaded, no matter where they are uploaded
  • If you “p0wn” one of my servers, you don’t automatically get access to all my backups
  • Deduplication
  • Be able to resume an interrupted backup - I have some external disks for my laptop, and it can take some time to make a complete backup of these
  • Be able to backup different folders on different schedules - I want to backup chatlogs and mail hourly on servers, but I don’t need hourly backups of /var/www
  • Redundancy - I need to be able to access backups even if my home burns down. Basically I need to be able to lose any two devices and still be able to make a full recovery

Looking at the list of different backup-solutions listed in the Arch-wiki, it seems that Borg might fit most of my requirements. I’ve tried Borg previously, through work, and it seemed fine. Also, paired with rclone or git-annex, I can store a redundant backup in the cloud. It appears that several others have done similar things.

According to one of those posts, I can use rclone to just mount my redundant cloud-backup if I lose my home-server. This seems really easy, so I’m going with this. As such, the solution will be borg+rclone.


mrslave is my home-server, and will serve as my main backup repository.

laptop is a laptop with macOS

stella is one of my servers (it’s the one serving you this page, actually)

All clients (stella and laptop) back up to mrslave, over ssh. Since we’re using ssh, we’ll need a user-account on mrslave. Each client gets its own directory, and will be locked to that directory. We’ll use the restriction the borg-docs recommend, which is to restrict what commands a user can run via the ssh-key in ~/.ssh/authorized_keys.



Setting up the server

Installing Borg is pretty trivial.

dennis@mrslave ~ % sudo apt install borgbackup borgbackup-doc

(you don’t really need the doc, but it’s nice to have)

Then we will need a user dedicated to Borg. Let’s call that user borg. We’ll also need a directory to store the backups. I find that /srv/ is a good place to put backup-data.

dennis@mrslave ~ % sudo groupadd --system borg
dennis@mrslave ~ % sudo useradd --home-dir /srv/backup/borg --create-home --system --gid borg borg
dennis@mrslave ~ % sudo -u borg -H mkdir /srv/backup/borg/{.ssh,stella,laptop}
dennis@mrslave ~ % sudo -u borg -H touch /srv/backup/borg/.ssh/authorized_keys
dennis@mrslave ~ % sudo chmod 700 /srv/backup/borg /srv/backup/borg/.ssh


Setting up the clients

On the clients, we need to install Borg and create an ssh-key. For the key, we’ll just use a file generated by ssh-keygen.

dennis@stella ~ % sudo apt install borgbackup
dennis@stella ~ % sudo su -
root@stella ~ # ssh-keygen -t ed25519 -f ~/.ssh/mrslave.borg.id_ed25519 -P ""
root@stella ~ # cat << EOF >> ~/.ssh/config
heredoc> Host borgbackup
heredoc>     HostName mrslave.example
heredoc>     IdentityFile ~/.ssh/mrslave.borg.id_ed25519
heredoc>     User borg
heredoc> EOF

If you’re on macOS, the installation is pretty much the same, except for the first line:

dennis@laptop ~ % brew cask install borgbackup

Then we need to copy the public key to the server. The public key goes into /srv/backup/borg/.ssh/authorized_keys, and should be in the following format, with the following restrictions (all on one line):

command="cd /srv/backup/borg/<client>;
         borg serve --restrict-to-path /srv/backup/borg/<client>"
         <keytype> <key> <host>

For example:

command="cd /srv/backup/borg/stella; borg serve --restrict-to-path /srv/backup/borg/stella" ssh-ed25519 AAAAv4aTaC4lZOI1LTE3BBAAIPo9xS64a0p///IyU8vkl90KHck42Ole/w/I0po6FuCK

Alright. Let’s initialize the backup. Since I’m planning on syncing to the cloud, I’ll use keyfile, which stores the keyfile on the client, instead of repokey, which stores the key in the repo (you still need the passphrase, though).

root@stella ~ [2] # borg init --encryption=keyfile borg@borgbackup:stella

And then, to backup the keyfile:

root@stella ~ # borg key export borg@borgbackup:stella keyfile && cat keyfile && rm keyfile

Copy the key, and the passphrase, to somewhere safe.

Now, let’s test the backup.

root@stella ~ # echo "foo" > bar
root@stella ~ # borg create borg@borgbackup:stella::testarchive bar
root@stella ~ # mkdir mountpoint
root@stella ~ # borg mount borg@borgbackup:stella mountpoint
Enter passphrase for key /root/.config/borg/keys/borgbackup__stella:
root@stella ~ # cd mountpoint
root@stella ~/mountpoint # ls
root@stella ~/mountpoint # cd testarchive
root@stella ~/mountpoint/testarchive # ls
root@stella ~/mountpoint/testarchive # cat bar
root@stella ~/mountpoint/testarchive # cd
root@stella ~ # borg umount mountpoint

Wohoo, it worked.

Automating it

I’m working on a script for automating the backups. It will be posted when it’s finished.

The password-manager

The password-manager can’t be backed up in the same way as the rest of your system. If you lose access to your password-manager, how are you going to get access to your backups? You need the password-manager to access your backups, and as such, you need another way of backing it up.

I solve this by backing up my password-managers to several of my servers. That way, unless I lose them all, I’ll still have access to them.

Cloud redundancy

Choosing a cloud-storage

As far as I can tell, Wasabi and Backblaze B2 offer some of the cheapest cloud-storage available today, costing $0.0049 and $0.005 per gigabyte of storage per month, respectively (as of 2018-05-20; see this page for more options).

However, I’ve never been a huge fan of cloud-services. Especially not those hosted in the US. Time4VPS offers storage VPSes for a good price. Right now, you pay €5.99/TB/month, or €9.99/2TB/month. If you go with the bi-annual plan, you get 25% off. That’s €7.49/2TB/month! Converted to gigabytes and US dollars, that’s about $0.00418 per gigabyte, per month. Of course, here you have to pay for the full 2TB, so it’ll probably end up being more expensive than Wasabi or Backblaze, where you only pay for what you use. Also, it means you have to manage one more server.
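
To double-check that number, here is the arithmetic, assuming 2 TB = 2048 GB and an EUR/USD rate of about 1.143 (both are my assumptions):

```shell
# €7.49 for 2048 GB, converted to USD per GB per month.
awk 'BEGIN { printf "%.5f USD/GB/month\n", 7.49 / 2048 * 1.143 }'
# → 0.00418 USD/GB/month
```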

I’m going with Time4VPS.

Installing and setting up rclone

dennis@mrslave ~ % sudo apt install rclone
dennis@mrslave ~ % sudo su -
root@mrslave ~ # rclone config
2018/05/20 16:48:15 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote

Installing rclone was pretty easy. For the rest of the config, you can follow rclone’s documentation on setting up SFTP storage. Also, you might want to throw some encryption on top of that, using rclone’s crypt-setup.
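
The sync itself is then a single command, which can go in cron on mrslave. The remote and path here match the backup:repo remote used further down; adjust to whatever you named things during rclone config:

```shell
$ rclone sync /srv/backup/borg backup:repo
```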

Data recovery

Recovery of client

Recovery of files on a client is easy.

root@stella ~ # borg key import borg@borgbackup:stella path-to-keyfile && rm path-to-keyfile
root@stella ~ # borg mount borg@borgbackup:stella mountpoint
root@stella ~ # cd mountpoint

Remember to set up ssh-keys and such first.

Recovery of server

Recovery after the server has gone should also be easy. First you set up rclone, as described above. Then you can use rclone mount.

dennis@laptop ~ % rclone mount backup:repo mountpoint

Now you should have your entire borg-archive available at mountpoint.

I like to host everything myself, which means I run CalDAV and CardDAV on my own server to synchronize contacts and calendars. Previously I’ve been running Baïkal, but now it’s time to test something else. The reason I want to test something else is that I want the ability to share calendars. Enter DAViCal.

I’ll be installing DAViCal on Ubuntu 16.04, but the configuration should be pretty similar for older and newer versions of both Ubuntu and Debian. Also, I’ll be explaining how I set this up in MY environment. Some parts might not apply in yours. For instance, I will be using BasicAuth in Apache to authenticate users, instead of DAViCal’s built-in solution.

I’m running PHP 7.0 using libapache2-mod-php7.0 and Apache, running behind an nginx-proxy. I’m assuming you’ve already got this set up, and that you’re already running a database. I’ll be using PostgreSQL.

Installing DAViCal

First off, let’s install DAViCal!

dennis@spandex:~$ sudo apt install davical

This will install DAViCal and all dependencies you need.

Database Setup


You’ll need to edit the PostgreSQL-config file to give DAViCal the permissions it needs. My config-file is located at /etc/postgresql/9.5/main/pg_hba.conf. Put the following into your config-file:

# DAViCal
local   davical         davical_app                             trust
local   davical         davical_dba                             trust

Above the line that says

local   all             all                                     peer

And reload PostgreSQL using

dennis@spandex:~$ sudo systemctl reload postgresql

Creating the database

Next up we need to create and build the database. DAViCal comes with a script, create-database.sh, which can do this for you. It is located in /usr/share/davical/dba/, and needs to be run as a user with permissions to create databases. Typically we can do something like this:

dennis@spandex:~$ sudo su postgres -c /usr/share/davical/dba/create-database.sh

Supported locales updated.
Updated view: dav_principal.sql applied.
CalDAV functions updated.
RRULE functions updated.
Database permissions updated.
*  The password for the 'admin' user has been set to 'laeDe9ae='

Thanks for trying DAViCal!  Check in /usr/share/doc/davical/examples/ for
some configuration examples.  For help, visit #davical on irc.oftc.net

If the above command fails, it’s likely you’ve screwed up the database-permissions, or something else. Fix it, and run the script again after you’ve deleted the database. You can delete the database using:

dennis@spandex:~$ sudo su postgres -c "dropdb davical"


I like to give everything its own address. DAViCal will be running on davical.domain.tld. As such, I’m setting up a CNAME-record from davical.domain.tld, to spandex.domain.tld, which is the server DAViCal will be running on.


Here’s my Apache-configuration, located at /etc/apache2/sites-available/davical.domain.tld.conf:

<VirtualHost *:8080>

    ServerName davical.domain.tld
    UseCanonicalName on

    DocumentRoot /usr/share/davical/htdocs
    DirectoryIndex index.php index.html
    Alias /images/ /usr/share/davical/htdocs/images/

    # To circumvent php's $_SERVER['HTTPS']-check
    SetEnv HTTPS "on"

    AcceptPathInfo On

    <Directory "/usr/share/davical/htdocs">
        AuthType Basic
        AuthName "private area"
        AuthUserFile /etc/apache2/davical.htpasswd
        Require valid-user
    </Directory>

    <Directory "/usr/share/davical/htdocs/images/">
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    CustomLog /var/log/apache2/davical.domain.tld-access.log combined
    ErrorLog /var/log/apache2/davical.domain.tld-error.log
</VirtualHost>


Also, we’ll need to create a password-file. Mine is located at /etc/apache2/davical.htpasswd. You can create it like this:

dennis@spandex:~$ sudo htpasswd -c /etc/apache2/davical.htpasswd admin
New password:
Re-type new password:
Adding password for user admin

Note that the password you use here will be the actual admin-password.

The Apache-config is activated by running:

dennis@spandex:~$ sudo a2ensite davical.domain.tld.conf
Enabling site davical.domain.tld.
To activate the new configuration, you need to run:
  service apache2 reload
dennis@spandex:~$ sudo systemctl reload apache2


You’ll notice that Apache is listening on port 8080. That is because I’m running Apache behind Nginx. I’m using Nginx for other things, and it is hogging ports 80 and 443. Also, I prefer Nginx, so I use it to terminate TLS.

This is my Nginx-config, located at /etc/nginx/sites-available/davical.domain.tld:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name davical.domain.tld;

    access_log /var/log/nginx/davical.domain.tld-access.log;
    error_log /var/log/nginx/davical.domain.tld-error.log;

    ssl on;
    ssl_certificate /etc/ssl/letsencrypt/davical.domain.tld.pem;
    ssl_certificate_key /etc/ssl/letsencrypt/davical.domain.tld.key;

    charset utf-8;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 604800;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
(You’ll have to figure out the SSL-part yourself. I use certbot and Let’s Encrypt)

It is activated like this:

dennis@spandex:~$ sudo ln -s ../sites-available/davical.domain.tld /etc/nginx/sites-enabled/davical.domain.tld
dennis@spandex:~$ sudo systemctl reload nginx

DAViCal should now be available on the web. Try visiting davical.domain.tld.

Configuring DAViCal

DAViCal’s configuration files are located in /etc/davical/. If you want to run multiple instances of DAViCal, each instance must have its own config-file, named domain.tld-conf.php (or sub.domain.tld-conf.php - you get the gist of it). If you only need one instance, you can name the file config.php.

Since I only need one instance of DAViCal, my config-file will be /etc/davical/config.php.

<?php
  $c->domain_name = "davical.domain.tld";
  $c->sysabbr     = 'DAViCal';
  $c->admin_email = 'root-davical@domain.tld';
  $c->system_name = "My DAViCal Server";
  $c->pg_connect[] = 'dbname=davical port=5432 user=davical_app';

  // Use Apache-supplied headers and believe them
  $c->authenticate_hook['server_auth_type'] = 'Basic';

The last part lets Apache do the authentication.

Setting up users

Navigate to davical.domain.tld and log in as admin. Select “Create Principal” from “User Functions” in the menu. Fill in the details, and click Create.

Remember that whenever you set up a new user, you will have to create a line for that user in /etc/apache2/davical.htpasswd using the command you used earlier.

Sharing a calendar

Sharing calendars isn’t too difficult. You start off by creating the users involved. Then you create a group, and add the users to the group. Then you grant this group privileges (preferably read-only) to access each user’s calendars. In iCal on macOS you’ll find the shared calendars under Delegation under the account in Accounts.

Configuring the Client

I use MacOS and iOS on the client-side, so I will only explain how to set up caldav- and carddav-sync on those platforms. DAViCal has instructions for setting up caldav and carddav on other clients.

iCal on MacOS

Open up iCal, open Preferences (you can use the hot-key ⌘,), select Accounts, hit the +-sign to add account, select Other CalDAV Account…, select Advanced as “Account Type”, and fill in everything.

User Name, Password, and Server Address should be pretty self-explanatory. Server Path is /caldav.php/USERNAME/, port is 443, the Use SSL-box should be ticked, and the Use Kerberos v5 for authentication should not.

Contacts on MacOS

Open up Contacts, open Preferences (you can use the hot-key ⌘,), select Accounts, hit the +-sign to add account, select Other Contacts Account…, select Manual as Account Type. Fill in the rest, and voilà, it should work.

Calendar on iOS

Open up Settings, select Calendar, select Accounts, select Add Account, select Other, select Add CalDAV Account, fill in your details, and hit Next. Voilà.

Contacts on iOS

Open up Settings, select Contacts, select Accounts, select Add Account, select Other, select Add CardDAV Account, fill in details, and hit Next. Voilà.


I have a home server with a dynamic IP-address. Previously I used an Asus router which comes with automatic dynamic DNS. This worked fine, but I kind of wanted to do this myself. So, I did it using bind and nsupdate.

I have a domain I use for my LAN, domain.tld, and ext.domain.tld points to my public IP-address. I host my DNS with Underworld, and I don’t have rights to use nsupdate directly with them. So, I’ll set up a DNS-server on one of my servers, and let that server resolve lookups for the subdomain ext.domain.tld, and continue to let Underworld serve domain.tld.

In this setup Underworld will be “super-master”, cookie (my main server) will be master, and mrslave will be the client (or, if you will, slave). Both cookie and mrslave run Ubuntu 16.04, but the configuration should be pretty similar for older and newer versions of both Ubuntu and Debian.

Creating a key-pair

To create a key-pair, we’ll be using dnssec-keygen.

dennis@cookie:/tmp$ dnssec-keygen -a HMAC-SHA512 -b 512 -n USER mrslave.domain.tld.

This will give you two files: Kmrslave.domain.tld.+165+11930.private and Kmrslave.domain.tld.+165+11930.key. The .key-file will contain your public key, and look something like this:

mrslave.domain.tld. IN KEY 0 3 165 iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==

And the .private-file will look something like this:

Private-key-format: v1.3
Algorithm: 165 (HMAC_SHA512)
Key: iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXnnrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==
Bits: AAA=
Created: 20170324142537
Publish: 20170324142537
Activate: 20170324142537

As you might have noticed, both keys are the same, except for a space in the .key-file. You’ll also notice that the Private-key-format is v1.3. With v1.2, we needed the .private-file, but with v1.3 we no longer do. We only need the key from the .key-file, so you can just delete the .private-file, if you want to.

Installing and configuring bind9 on master

First things first. Let’s install bind9 on the master server, and set it up so that our server can answer requests for our subdomain.

dennis@cookie:~$ sudo apt install bind9 bind9utils bind9-doc

Now that bind is installed, we will have to configure it.


Let’s start by creating a configuration-file to put keys in. Create /etc/bind/keys.conf, and include it in named.conf by adding the following line to the bottom of named.conf:

include "/etc/bind/keys.conf";

Now we’ll put the key we created (from the .key-file, with the extra space) earlier into keys.conf.

key mrslave.domain.tld. {
    algorithm HMAC-SHA512;
    secret "iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==";
};

And then we need to make sure the file is safe from prying eyes, and that bind can read it.

dennis@cookie:~$ sudo chmod o-rwx /etc/bind/keys.conf
dennis@cookie:~$ sudo chgrp bind /etc/bind/keys.conf


Add these two lines below directory "/var/cache/bind"; in /etc/bind/named.conf.options:

    recursion no;
    allow-transfer { none; };

The file should then look something like this:

options {
    directory "/var/cache/bind";

    recursion no;
    allow-transfer { none; };

    dnssec-validation auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};


Now we have to add our zone to named.conf.local, and create the zone-file. Put the following into your named.conf.local:

zone "ext.domain.tld" {
    type master;
    file "/etc/bind/pz/ext.domain.tld";
    allow-update {
        key mrslave.domain.tld.;
    };
};

Using allow-update here will allow the key mrslave.domain.tld full access to the zone ext.domain.tld. If you want more fine-grained policies, you can use update-policy.

update-policy {
    grant <key> <type> <zone> <record-types>;
};

So, if you only want to allow the key mrslave.domain.tld to update the A-record of ext.domain.tld, the file would look something like this:

zone "ext.domain.tld" {
    type master;
    file "/etc/bind/pz/ext.domain.tld";
    update-policy {
        grant mrslave.domain.tld. name ext.domain.tld. A;
    };
};


Now we can create our zone-file. We have to create the directory /etc/bind/pz, give bind permission to write to it, and place the zone-file for ext.domain.tld there.

dennis@cookie:~$ sudo mkdir /etc/bind/pz
dennis@cookie:~$ sudo chgrp bind /etc/bind/pz
dennis@cookie:~$ sudo chmod g+w /etc/bind/pz

This is what my zone-file, ext.domain.tld looks like:

$TTL    604800
@       IN      SOA (
                     2017032402         ; Serial
                         604000         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604000 )       ; Negative Cache TTL
@       IN      NS

@  600  IN      A       ; TTL=600s


With this setup, when you start using nsupdate, apparmor will start complaining with the following error:

Mar 25 17:21:46 cookie kernel: [4089644.355272] audit: type=1400 audit(1490458906.635:17): apparmor="DENIED" operation="mknod" profile="/usr/sbin/named" name="/etc/bind/pz/ext.domain.tld.jnl" pid=27119 comm="named" requested_mask="c" denied_mask="c" fsuid=131 ouid=131

In order to fix this, we have to give named (bind) permission to write to the directory /etc/bind/pz/. We do this by inserting the line /etc/bind/pz/* rw, right below /etc/bind/** r, in the file /etc/apparmor.d/usr.sbin.named. If you’ve never edited this file, lines 19-24 should then look something like this:

  /etc/bind/** r,
  /etc/bind/pz/* rw,
  /var/lib/bind/** rw,
  /var/lib/bind/ rw,
  /var/cache/bind/** lrw,
  /var/cache/bind/ rw,

And then we reload apparmor.

dennis@cookie:~$ sudo systemctl reload apparmor.service


We can now check the configuration using named-checkconf.

dennis@cookie:~$ sudo named-checkconf

This will check named.conf.options, named.conf.local, named.conf.default-zones, and named.conf, as the first three are included in named.conf.

We’ll use named-checkzone to test the zone-file.

dennis@cookie:~$ sudo named-checkzone ext.domain.tld /etc/bind/pz/ext.domain.tld
zone ext.domain.tld/IN: loaded serial 2017032402

Alright. Now we’ve configured bind and set up the zone. It’s time to restart bind, and check that it works by testing it with dig. It should look something like this:

dennis@cookie:~$ sudo systemctl restart bind9
dennis@cookie:~$ dig @localhost ext.domain.tld +short

Alright! We’ve got a working master DNS-server!

Setting up the client

First, we need to install nsupdate. nsupdate is part of the package dnsutils, so we’ll install that.

dennis@mrslave:~$ sudo apt install dnsutils

“Configuring” nsupdate

When using nsupdate, we’ll need a key-file. As I mentioned earlier, the .private-file was only needed with Private-key-format v1.2; with v1.3 the key must be presented in bind-format instead. So, we can just copy /etc/bind/keys.conf from earlier! Let’s save it as mrslave.conf:

key mrslave.domain.tld. {
    algorithm HMAC-SHA512;
    secret "iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==";
};

Now that we’ve got the key-file, we can create a text-file containing the update-commands we want to send. Let’s call it nsupdate.txt, and it should look like this:

zone ext.domain.tld
update delete ext.domain.tld. A
update add ext.domain.tld. 600 A
send


Now we can try to update DNS!

dennis@mrslave:~$ nsupdate -k mrslave.conf -v nsupdate.txt
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:      0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;ext.domain.tld.                        IN      SOA

ext.domain.tld.         0       ANY     A
ext.domain.tld.         600     IN      A

The update was successful. Now we can test to see if it actually works.

dennis@mrslave:~$ dig ext.domain.tld +short

Earlier, when creating /etc/bind/pz/ext.domain.tld on the master, we set a different IP-address in the zone-file, so getting the newly updated address in response now means that it worked.

It worked!

Automating DNS-updates

As I mentioned earlier, this is my home-server, and I want it to update the DNS-record every time its IP-address changes. So, I have written a script that checks whether my public IP-address has changed, and then issues an update if it has.

I’ve uploaded the script to my server, so feel free to use it. Installing it is quite straight-forward: put it in /usr/local/sbin/, configure it (you only need to change the top four variables), and install the cron-file.
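For illustration, a minimal sketch of the same idea could look like the script below. This is not the original script; the key-file path, cache-file, OpenDNS lookup, and the "run" argument convention are all my assumptions.

```shell
#!/bin/sh
# Sketch of a dynamic-DNS update script (assumed names throughout).

ZONE="ext.domain.tld"
TTL=600
KEYFILE="/etc/bind/keys.conf"   # the bind-format key-file from earlier
CACHE="/var/cache/public-ip"    # where we remember the last-seen address

# Build the nsupdate command-file for a given IP-address.
make_update() {
    printf 'zone %s\nupdate delete %s. A\nupdate add %s. %s A %s\nsend\n' \
        "$ZONE" "$ZONE" "$ZONE" "$TTL" "$1"
}

main() {
    # Look up our current public IP-address (OpenDNS trick; any
    # similar lookup-service works).
    ip=$(dig +short myip.opendns.com @resolver1.opendns.com)
    [ -n "$ip" ] || exit 1

    # Only send an update when the address has actually changed.
    if [ ! -f "$CACHE" ] || [ "$(cat "$CACHE")" != "$ip" ]; then
        make_update "$ip" | nsupdate -k "$KEYFILE"
        echo "$ip" > "$CACHE"
    fi
}

# Only do work when invoked with "run", e.g. from the cron-file.
if [ "${1:-}" = "run" ]; then
    main
fi
```

Cron would then call it as, say, update-dns.sh run every few minutes.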


Hello World

Hello world! This is my first post.

So, I went and made myself a blog. What now?

Well, I was thinking I could use this blog for documentation, testing, and “blogging” in general.. By documentation I mean documentation of how I set stuff up. Like how I’ve set up dynamic DNS for my home server (using nsupdate and bind9). As time passes I’m thinking I might add some pictures somewhere as well. We’ll see..

For now, this is it. I’ll be fiddling with the blog itself until I’m satisfied with how it looks and works. That might take some time.

I’m using Hugo to generate static HTML-files from my ramblings, and I’ve done a bit of optimization, so the page is quite fast. There’s still a lot of work remaining though! I haven’t even finished disagreeing with myself regarding what the URLs should look like. For now, I’ve settled on using “ugly” links (showing the .html-suffix). Blogposts will be in the format, and tech-posts will be like this: Why? I’m not 100% sure, but I think I’ve decided tech-posts are more timeless than blogposts, or something.

(Until I get my head out of my ass, this page will also double as my “about”-page.)