the ramblings of a random norwegian techie

This article is a work in progress.

So, I need a new backup-system.

Today I use tarsnap on my servers, and Arq on my personal computer(s). Tarsnap uses Amazon S3, and Arq backs stuff up to my home-server. This works fairly well, but tarsnap isn’t the cheapest solution, and the way I use Arq isn’t redundant - if my home-server breaks down, my backup is gone.

I don’t have too many requirements for a backup solution, but I have some - in prioritized order:

  • Files need to be encrypted locally before they are uploaded, no matter where they are uploaded
  • If you “p0wn” one of my servers, you don’t automatically get access to all my backups
  • Deduplication
  • Be able to resume an interrupted backup - I have some external disks for my laptop, and it can take some time to make a complete backup of these
  • Be able to back up different folders on different schedules - I want to back up chatlogs and mail hourly on servers, but I don’t need hourly backups of /var/www
  • Redundancy - I need to be able to access backups even if my home burns down. Basically I need to be able to lose any two devices and still be able to make a full recovery

Looking at the list of different backup-solutions listed in the Arch-wiki, it seems that Borg might fit most of my requirements. I’ve tried Borg previously, through work, and it seemed fine. Also, paired with rclone or git-annex, I can store a redundant backup in the cloud. It appears that several others have done similar things.

According to the post at opensource.com, I can use rclone to just mount up my redundant cloud-backup if I lose my home-server. This seems really easy, so I’m going with this. As such, the solution will be borg+rclone.

Backup-scheme

mrslave is my home-server, and will serve as my main backup repository.

laptop is a laptop with macOS

stella is one of my servers (it’s the one serving you this page, actually)

All clients (stella and laptop are clients) back up to mrslave, over ssh. Since we’re using ssh, we’ll need a user-account on mrslave. All clients will have their own directory, and they will be locked to this directory. We’ll use the restriction the Borg documentation recommends, which is to restrict which commands a user can run via their ssh-key in ~/.ssh/authorized_keys.

Borg

Server-side

Installing Borg is pretty trivial.

dennis@mrslave ~ % sudo apt install borgbackup borgbackup-doc

(you don’t really need the doc, but it’s nice to have)

Then we will need a user dedicated to Borg. Let’s call that user borg. We’ll also need a directory to store the backups. I find that /srv/ is a good place to put backup-data.

dennis@mrslave ~ % sudo groupadd --system borg
dennis@mrslave ~ % sudo useradd --home-dir /srv/backup/borg --create-home --system --gid borg borg
dennis@mrslave ~ % sudo -u borg -H sh -c 'mkdir ~/.ssh ~/stella ~/laptop'
dennis@mrslave ~ % sudo -u borg -H sh -c 'touch ~/.ssh/authorized_keys'
dennis@mrslave ~ % sudo chmod 700 /srv/backup/borg /srv/backup/borg/.ssh

Client-side

We need to install Borg, create an SSH key, and a place to put it. For the key, we’ll just use a file generated by ssh-keygen.

dennis@stella ~ % sudo apt install borgbackup
*snip*
dennis@stella ~ % sudo su -
root@stella ~ # ssh-keygen -t ed25519 -f ~/.ssh/mrslave.borg.id_ed25519 -P ""
*snip*
root@stella ~ # cat << EOF >> ~/.ssh/config
heredoc> Host borgbackup
heredoc>     HostName mrslave.example
heredoc>     IdentityFile ~/.ssh/mrslave.borg.id_ed25519
heredoc>     User borg
heredoc> EOF

If you’re on macOS, the installation is pretty much the same, except for the first line:

dennis@laptop ~ % brew cask install borgbackup
*snip*

Then we need to copy the public key to the server. The public key goes into /srv/backup/borg/.ssh/authorized_keys, and should be in the following format, with the following restrictions (all on ONE line):

command="cd /srv/backup/borg/<client>;
         borg serve --restrict-to-path /srv/backup/borg/<client>"
         <keytype> <key> <host>

For example:

command="cd /srv/backup/borg/stella; borg serve --restrict-to-path /srv/backup/borg/stella" ssh-ed25519 AAAAv4aTaC4lZOI1LTE3BBAAIPo9xS64a0p///IyU8vkl90KHck42Ole/w/I0po6FuCK root@stella.dnns.no

Alright. Let’s initialize the backup. Since I’m planning on syncing to the cloud, I’ll use keyfile, which stores the keyfile on the client, instead of repokey, which stores the key in the repo (you still need the passphrase, though).

root@stella ~ [2] # borg init --encryption=keyfile borg@borgbackup:stella
*snip*

And then, to backup the keyfile:

root@stella ~ # borg key export borg@borgbackup:stella keyfile && cat keyfile && rm keyfile
*snip*

Copy the key, and the passphrase, to somewhere safe.

Now, let’s test the backup.

root@stella ~ # echo "foo" > bar
root@stella ~ # borg create borg@borgbackup:stella::testarchive bar
root@stella ~ # mkdir mountpoint
root@stella ~ # borg mount borg@borgbackup:stella mountpoint
Enter passphrase for key /root/.config/borg/keys/borgbackup__stella:
root@stella ~ # cd mountpoint
root@stella ~/mountpoint # ls
testarchive
root@stella ~/mountpoint # cd testarchive
root@stella ~/mountpoint/testarchive # ls
bar
root@stella ~/mountpoint/testarchive # cat bar
foo
root@stella ~/mountpoint/testarchive # cd
root@stella ~ # borg umount mountpoint

Woohoo, it worked.

Automating it

I’m working on a script for automating the backups. It will be posted when it’s finished.
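Until then, here’s a rough sketch of the direction I’m going in - the archive prefixes, backed-up paths and retention values below are assumptions, not the finished script:

```shell
#!/bin/sh
# Sketch only: repo name, paths and retention values are assumptions.
REPO="borg@borgbackup:stella"

# backup PREFIX PATH...: create a timestamped archive, then prune old
# archives sharing the same prefix.
backup() {
    prefix="$1"; shift
    borg create --stats --compression lz4 \
        "$REPO::$prefix-$(date +%Y-%m-%d_%H%M%S)" "$@"
    borg prune --prefix "$prefix-" \
        --keep-hourly 24 --keep-daily 7 --keep-weekly 4 "$REPO"
}

# Different folders on different schedules: cron calls the script with an
# argument, e.g. hourly for chatlogs/mail, daily for the rest.
case "$1" in
    hourly) backup hourly /root/chatlogs /var/mail ;;
    daily)  backup daily /etc /var/www ;;
esac
```

For unattended runs, BORG_PASSPHRASE has to be available to cron somehow, e.g. exported from a root-only file at the top of the script.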

The password-manager

The password-manager can’t be backed up in the same way as the rest of your system. If you lose access to your password-manager, how are you going to get access to your backups? You need the password-manager to access your backups, and as such, you need another way of backing it up.

I solve this by backing up my password-managers to several of my servers. That way, unless I lose them all, I’ll still have access to them.
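As a sketch of what that can look like, assuming a git-backed password-store (like pass) with one git-remote per server - the store path and remote names here are hypothetical:

```shell
#!/bin/sh
# Sketch: push a git-backed password-store to several servers.
# The store path and the remote names are hypothetical.
push_all() {
    for remote in cookie stella mrslave; do
        git -C "$HOME/.password-store" push "$remote" master ||
            echo "push to $remote failed" >&2
    done
}
```

Any other file-based password-manager could be copied around similarly with scp or rsync.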

Cloud redundancy

Choosing a cloud-storage

As far as I can tell, Wasabi and Backblaze B2 offer some of the cheapest cloud-storage available today, at $0.0049 and $0.005 per gigabyte of storage per month, respectively (as of 2018-05-20; see this page for more options).

However, I’ve never been a huge fan of cloud-services, especially not those hosted in the US. Time4VPS offers storage VPSes for a good price. Right now, you pay €5.99/TB/month, or €9.99/2TB/month. If you go with the bi-annual plan, you get 25% off. That’s €7.49/2TB/month! Converted to gigabytes and US dollars, that’s $0.00418 per gigabyte, per month. Of course, here you have to pay for the full 2TB, so it’ll probably end up being more expensive than Wasabi or Backblaze, where you only pay for what you use. It also means you have to manage one more server.

I’m going with Time4VPS.

Installing and setting up rclone

dennis@mrslave ~ % sudo apt install rclone
*snip*
dennis@mrslave ~ % sudo su -
root@mrslave ~ # rclone config
2018/05/20 16:48:15 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
*snip*

Installing rclone was pretty easy. For the rest of the config, you can follow rclone’s documentation on setting up SFTP storage. You might also want to throw some encryption on top of that, using rclone’s crypt setup.
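To keep the cloud copy fresh, a simple cron job on mrslave should be enough. The schedule, the cron-file path and the remote name (I’m reusing backup:repo, matching the recovery section) are assumptions:

```
# /etc/cron.d/rclone-backup (hypothetical path): sync the borg
# repositories to the cloud remote every night at 03:30
30 3 * * * root rclone sync /srv/backup/borg backup:repo
```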

Data recovery

Recovery of client

Recovery of files on a client is easy.

root@stella ~ # borg key import borg@borgbackup:stella path-to-keyfile && rm path-to-keyfile
root@stella ~ # borg mount borg@borgbackup:stella mountpoint
root@stella ~ # cd mountpoint

Remember to set up ssh-keys and such first.

Recovery of server

Recovery after the server has gone should also be easy. First you set up rclone, as described above. Then you can use rclone mount.

dennis@laptop ~ % rclone mount backup:repo mountpoint

Now you should have your entire borg-archive available at mountpoint.

I like to host everything myself, which means I run CalDAV and CardDAV on my own server to synchronize contacts and calendars. Previously I’ve been running Baïkal, but now it’s time to test something else. The reason I want to test something else is that I want the ability to share calendars. Enter DAViCal.

I’ll be installing DAViCal on Ubuntu 16.04, but the configuration should be pretty similar for older and newer versions of both Ubuntu and Debian. Also, I’ll be explaining how I set this up in MY environment. Some parts might not apply in yours. For instance, I will be using BasicAuth in Apache to authenticate users, instead of DAViCal’s built-in solution.

I’m running PHP 7.0 using libapache2-mod-php7.0 and Apache, running behind an nginx-proxy. I’m assuming you’ve already got this set up, and that you’re already running a database. I’ll be using PostgreSQL.

Installing DAViCal

First off, let’s install DAViCal!

dennis@spandex:~$ sudo apt install davical

This will install DAViCal and all dependencies you need.

Database Setup

Permissions

You’ll need to edit the PostgreSQL-config file to give DAViCal the permissions it needs. My config-file is located at /etc/postgresql/9.5/main/pg_hba.conf. Put the following into your config-file:

# DAViCal
local   davical         davical_app                             trust
local   davical         davical_dba                             trust

Above the line that says

local   all             all                                     peer

And reload PostgreSQL using

dennis@spandex:~$ sudo systemctl reload postgresql

Creating the database

Next up we need to create and build the database. DAViCal comes with a script, create-database.sh, which can do this for you. It is located in /usr/share/davical/dba/, and needs to be run as a user with permissions to create databases. Typically we can do something like this:

dennis@spandex:~$ sudo su postgres -s /usr/share/davical/dba/create-database.sh


Supported locales updated.
Updated view: dav_principal.sql applied.
CalDAV functions updated.
RRULE functions updated.
Database permissions updated.
NOTE
====
*  The password for the 'admin' user has been set to 'laeDe9ae='

Thanks for trying DAViCal!  Check in /usr/share/doc/davical/examples/ for
some configuration examples.  For help, visit #davical on irc.oftc.net.

If the above command fails, it’s likely you’ve screwed up the database-permissions, or something else. Fix it, delete the database, and run the script again. You can delete the database using:

dennis@spandex:~$ sudo su postgres -c "dropdb davical"

DNS

I like to give everything its own address. DAViCal will be running on davical.domain.tld. As such, I’m setting up a CNAME-record from davical.domain.tld, to spandex.domain.tld, which is the server DAViCal will be running on.

Apache

Here’s my Apache-configuration, located at /etc/apache2/sites-available/:

<VirtualHost *:8080>

    ServerName davical.domain.tld
    UseCanonicalName on

    DocumentRoot /usr/share/davical/htdocs
    DirectoryIndex index.php index.html
    Alias /images/ /usr/share/davical/htdocs/images/

    # To circumvent phps $_SERVER['HTTPS']-check
    SetEnv HTTPS "on"

    AcceptPathInfo On

    <Directory "/usr/share/davical/htdocs">
        AuthType Basic
        AuthName "private area"
        AuthUserFile /etc/apache2/davical.htpasswd
        Require valid-user
    </Directory>

    <Directory "/usr/share/davical/htdocs/images/">
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    CustomLog /var/log/apache2/davical.domain.tld-access.log combined
    ErrorLog /var/log/apache2/davical.domain.tld-error.log

</VirtualHost>

Also, we’ll need to create a password-file. Mine is located at /etc/apache2/davical.htpasswd. You can create it like this:

dennis@spandex:~$ sudo htpasswd -c /etc/apache2/davical.htpasswd admin
New password:
Re-type new password:
Adding password for user admin

Note that the password you use here will be the actual admin-password.

The Apache-config is activated by running:

dennis@spandex:~$ sudo a2ensite davical.domain.tld.conf
Enabling site davical.domain.tld.
To activate the new configuration, you need to run:
  service apache2 reload
dennis@spandex:~$ sudo systemctl reload apache2

Nginx

You’ll notice that Apache is listening on port 8080. That is because I’m running Apache behind Nginx: I’m using Nginx for other things, and it is hogging ports 80 and 443. Also, I prefer Nginx, so I use it to terminate TLS.

This is my Nginx-config, located at /etc/nginx/sites-available/davical.domain.tld:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name davical.domain.tld;

    access_log /var/log/nginx/davical.domain.tld-access.log;
    error_log /var/log/nginx/davical.domain.tld-error.log;

    ssl on;
    ssl_certificate /etc/ssl/letsencrypt/davical.domain.tld.pem;
    ssl_certificate_key /etc/ssl/letsencrypt/davical.domain.tld.key;

    charset utf-8;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 604800;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}

(You’ll have to figure out the SSL-part yourself. I use certbot and Let’s Encrypt)

It is activated like this:

dennis@spandex:~$ sudo ln -s "../sites-available/davical.domain.tld" "/etc/nginx/sites-enabled/davical.domain.tld"
dennis@spandex:~$ sudo systemctl reload nginx

DAViCal should now be available on the web. Try visiting davical.domain.tld. It should look something like this.

Configuring DAViCal

DAViCal’s configuration files are located in /etc/davical/. If you want to run multiple instances of DAViCal, each instance must have its own config-file, named domain.tld-conf.php (or sub.domain.tld-conf.php - you get the gist of it). If you only need one instance, you can name the file config.php.

Since I only need one instance of DAViCal, my config-file will be /etc/davical/config.php.

<?php
  $c->domain_name = "davical.domain.tld";
  $c->sysabbr     = 'DAViCal';
  $c->admin_email = 'root-davical@domain.tld';
  $c->system_name = "My DAViCal Server";
  $c->pg_connect[] = 'dbname=davical port=5432 user=davical_app';

  // Use Apache-supplied headers and believe them
  $c->authenticate_hook['server_auth_type'] = 'Basic';
  include_once('AuthPlugins.php');

The last part lets Apache do the authentication.

Setting up users

Navigate to davical.domain.tld and log in as admin. Select “Create Principal” from “User Functions” in the menu. Fill in the details, and click Create.

Remember that whenever you set up a new user, you will have to create a line for that user in /etc/apache2/davical.htpasswd using the htpasswd command you used earlier (without the -c flag, which would overwrite the file).

Sharing a calendar

Sharing calendars isn’t too difficult. You start off by creating the users involved. Then you create a group, and add the users to it. Then you grant this group privileges (preferably read-only) on each user. In iCal on macOS you’ll find the shared calendars under Delegation under the account in Accounts.

Configuring the Client

I use macOS and iOS on the client-side, so I will only explain how to set up CalDAV- and CardDAV-sync on those platforms. DAViCal has instructions for setting up CalDAV and CardDAV on other clients.

iCal on macOS

Open up iCal, open Preferences (you can use the hot-key ⌘,), select Accounts, hit the +-sign to add account, select Other CalDAV Account…, select Advanced as “Account Type”, and fill in everything.

User Name, Password, and Server Address should be pretty self-explanatory. Server Path is /caldav.php/USERNAME/, port is 443, the Use SSL-box should be ticked, and the Use Kerberos v5 for authentication should not.

Contacts on macOS

Open up Contacts, open Preferences (you can use the hot-key ⌘,), select Accounts, hit the +-sign to add account, select Other Contacts Account…, select Manual as Account Type. Fill in the rest, and voilà, it should work.

Calendar on iOS

Open up Settings, select Calendar, select Accounts, select Add Account, select Other, select Add CalDAV Account, fill in your details, and hit Next. Voilà.

Contacts on iOS

Open up Settings, select Contacts, select Accounts, select Add Account, select Other, select Add CardDAV Account, fill in details, and hit Next. Voilà.

Sources

I have a home server with a dynamic IP-address. Previously I used an Asus router which comes with automatic dynamic DNS. This worked fine, but I kind of wanted to do this myself. So, I did it using bind and nsupdate.

I have a domain I use for my LAN, domain.tld, and ext.domain.tld points to my public IP-address. I host my DNS with Underworld, and I don’t have rights to use nsupdate directly with them. So, I’ll set up a DNS-server on one of my servers, and let that server resolve lookups for the subdomain ext.domain.tld, and continue to let Underworld serve domain.tld.

In this setup Underworld will be “super-master”, cookie (my main server) will be master, and mrslave will be the client (or, if you will, slave). Both cookie and mrslave run Ubuntu 16.04, but the configuration should be pretty similar for older and newer versions of both Ubuntu and Debian.

Creating a key-pair

To create a key-pair, we’ll be using dnssec-keygen.

dennis@cookie:/tmp$ dnssec-keygen -a HMAC-SHA512 -b 512 -n USER mrslave.domain.tld.
Kmrslave.domain.tld.+165+11930

This will give you two files: Kmrslave.domain.tld.+165+11930.private and Kmrslave.domain.tld.+165+11930.key. The .key-file will contain your public key, and look something like this:

mrslave.domain.tld. IN KEY 0 3 165 iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==

And the .private-file will look something like this:

Private-key-format: v1.3
Algorithm: 165 (HMAC_SHA512)
Key:
iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXnnrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==
Bits: AAA=
Created: 20170324142537
Publish: 20170324142537
Activate: 20170324142537

As you might have noticed, both keys are the same, except for a space in the .key-file. You’ll also notice that the Private-key-format is v1.3. With v1.2 we needed the .private-file, but with v1.3 we no longer do. We only need the key from the .key-file, so you can just delete the .private-file, if you want to.

Installing and configuring bind9 on master

First things first. Let’s install bind9 on the master server, and set it up so that our server can answer requests for our subdomain.

dennis@cookie:~$ sudo apt install bind9 bind9utils bind9-doc

Now that bind is installed, we will have to configure it.

keys.conf

Let’s start by creating a configuration-file to put keys in. Create /etc/bind/keys.conf, and include it in named.conf by adding the following line to the bottom of named.conf:

include "/etc/bind/keys.conf";

Now we’ll put the key we created (from the .key-file, with the extra space) earlier into keys.conf.

key mrslave.domain.tld. {
    algorithm HMAC-SHA512;
    secret "iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==";
};

And then we need to make sure the file is safe from prying eyes, and that bind can read it.

dennis@cookie:~$ sudo chmod o-x /etc/bind/keys.conf
dennis@cookie:~$ sudo chgrp bind /etc/bind/keys.conf

named.conf.options

Add these two lines below directory "/var/cache/bind"; in /etc/bind/named.conf.options:

    recursion no;
    allow-transfer { none; };

The file should then look something like this:

options {
    directory "/var/cache/bind";

    recursion no;
    allow-transfer { none; };

    dnssec-validation auto;

    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};

named.conf.local

Now we have to add our zone to named.conf.local, and create the zone-file. Put the following into your named.conf.local:

zone "ext.domain.tld" {
    type master;
    file "/etc/bind/pz/ext.domain.tld";
    allow-update {
        key mrslave.domain.tld.;
    };
};

Using allow-update here will allow the key mrslave.domain.tld full access to the zone ext.domain.tld. If you want more fine-grained policies, you can use update-policy.

...
update-policy {
    grant <key> <type> <zone> <record-types>;
};

So, if you only want to allow the key mrslave.domain.tld to update the A-record of ext.domain.tld, the file would look something like this:

zone "ext.domain.tld" {
    type master;
    file "/etc/bind/pz/ext.domain.tld";
    update-policy {
        grant mrslave.domain.tld. name ext.domain.tld. A;
    };
};

Zone-file

Now we can create our zone-file. We have to create the directory /etc/bind/pz, give bind permission to write to it, and place the zone-file for ext.domain.tld there.

dennis@cookie:~$ sudo mkdir /etc/bind/pz
dennis@cookie:~$ sudo chgrp bind /etc/bind/pz
dennis@cookie:~$ sudo chmod g+w /etc/bind/pz

This is what my zone-file, /etc/bind/pz/ext.domain.tld, looks like:

$TTL    604800
@       IN      SOA     cookie.eriksen.im. webmaster.eriksen.im. (
                     2017032402         ; Serial
                         604000         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604000 )       ; Negative Cache TTL
;
@       IN      NS      cookie.eriksen.im.

@  600  IN      A       127.0.0.1       ; TTL=600s

Apparmor

With this setup, when you start using nsupdate, apparmor will start complaining with the following error:

Mar 25 17:21:46 cookie kernel: [4089644.355272] audit: type=1400 audit(1490458906.635:17): apparmor="DENIED" operation="mknod" profile="/usr/sbin/named" name="/etc/bind/pz/ext.domain.tld.jnl" pid=27119 comm="named" requested_mask="c" denied_mask="c" fsuid=131 ouid=131

In order to fix this, we have to give named (bind) permission to write to the directory /etc/bind/pz/. We do this by inserting the line /etc/bind/pz/* rw, right below /etc/bind/** r, in the file /etc/apparmor.d/usr.sbin.named. If you’ve never edited this file, lines 19-24 should probably look something like this:

  /etc/bind/** r,
  /etc/bind/pz/* rw,
  /var/lib/bind/** rw,
  /var/lib/bind/ rw,
  /var/cache/bind/** lrw,
  /var/cache/bind/ rw,

And then we reload apparmor.

dennis@cookie:~$ sudo systemctl reload apparmor.service

Testing

We can now check the configuration using named-checkconf.

dennis@cookie:~$ named-checkconf

This will check named.conf and everything included from it: named.conf.options, named.conf.local, and named.conf.default-zones.

We’ll use named-checkzone to test the zone-file.

dennis@cookie:~$ sudo named-checkzone ext.domain.tld /etc/bind/pz/ext.domain.tld
zone ext.domain.tld/IN: loaded serial 2017032402
OK

Alright. Now we’ve configured bind and set up the zone. It’s time to restart bind and check that everything works by testing it with dig. It should look something like this:

dennis@cookie:~$ sudo systemctl restart bind9
dennis@cookie:~$ dig ext.domain.tld +short
127.0.0.1

Alright! We’ve got a working master DNS-server!

Setting up the client

First, we need to install nsupdate. nsupdate is part of the package dnsutils, so we’ll install that.

dennis@mrslave:~$ sudo apt install dnsutils

“Configuring” nsupdate

When using nsupdate, we’ll need a key-file. As I mentioned earlier, the .private-file was needed with Private-key-format v1.2; now the key must be presented in bind-format. So, we can just copy /etc/bind/keys.conf from earlier!

key mrslave.domain.tld. {
   algorithm HMAC-SHA512;
   secret "iYUzeD93iVtpXFik/6vH8TVnOUfo27k5a2gS4SYBXTQaSJUE/A7KhzXn nrFP2LeJ6nm9mfAA3cjzBGXV6yv9gA==";
};

Now that we’ve got the key-file, we can create a text-file containing the update-commands we want to send. Let’s call it nsupdate.txt, and it should look like this:

server cookie.eriksen.im
zone ext.domain.tld
update delete ext.domain.tld. A
update add ext.domain.tld. 600 A 192.168.1.2
show
send

Testing

Now we can try to update DNS!

dennis@mrslave:~$ nsupdate -k mrslave.conf -v nsupdate.txt
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:      0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;ext.domain.tld.                        IN      SOA

;; UPDATE SECTION:
ext.domain.tld.         0       ANY     A
ext.domain.tld.         600     IN      A       192.168.1.2

The update was successful. Now we can test to see if it actually works.

dennis@mrslave:~$ dig ext.domain.tld +short
192.168.1.2

Earlier, when creating /etc/bind/pz/ext.domain.tld on the master, we set the IP-address to 127.0.0.1, so getting 192.168.1.2 in response now means that it worked.

It worked!

Automating DNS-updates

As I mentioned earlier, this is my home-server, and I want it to update the DNS-record every time its IP-address changes. So, I have written a script that checks whether my public IP-address has changed, and then issues an update if it has.

I’ve uploaded the script to my server, so feel free to use it. Installing it is quite straight-forward. Put nsupdate.sh in /usr/local/sbin/ and configure it (you only need to change the top four variables), and install the cron-file.
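For reference, the core logic of such a script can be sketched like this - the key-file path, the zone/server names and the IP-lookup method are assumptions, not the actual uploaded script:

```shell
#!/bin/sh
# Sketch of the dyndns logic; key-file path, zone/server names and the
# IP-lookup method are assumptions, not the actual nsupdate.sh.
KEYFILE=/root/mrslave.conf
ZONE=ext.domain.tld
SERVER=cookie.eriksen.im
TTL=600

current_ip() { dig +short "$ZONE" "@$SERVER"; }
public_ip()  { dig +short myip.opendns.com @resolver1.opendns.com; }

# Feed nsupdate the same commands as nsupdate.txt, with the new address.
update_dns() {
    printf 'server %s\nzone %s\nupdate delete %s. A\nupdate add %s. %s A %s\nsend\n' \
        "$SERVER" "$ZONE" "$ZONE" "$ZONE" "$TTL" "$1" |
        nsupdate -k "$KEYFILE"
}

# Called from cron: only send an update when the address actually changed.
main() {
    new=$(public_ip)
    if [ -n "$new" ] && [ "$new" != "$(current_ip)" ]; then
        update_dns "$new"
    fi
}
```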


Sources

Hello World

Hello world! This is my first post.

So, I went and made myself a blog. What now?

Well, I was thinking I could use this blog for documentation, testing, and “blogging” in general.. By documentation I mean documentation of how I set stuff up - like how I’ve set up dynamic DNS for my home server (using nsupdate and bind9). As time passes I’m thinking I might add some pictures somewhere as well. We’ll see..

For now, this is it. I’ll be fiddling with the blog itself until I’m satisfied with how it looks and works. That might take some time.

I’m using Hugo to generate static HTML-files from my ramblings, and I’ve done a bit of optimization, so the page is quite fast. There’s still a lot of work remaining though! I haven’t even finished disagreeing with myself regarding what the URLs should look like. For now, I’ve settled on using “ugly” links (showing the .html-suffix). Blogposts will be in the format dnns.no/2017/03/hello-world.html, and tech-posts will be like this: dnns.no/hello-world.html. Why? I’m not 100% sure, but I think I’ve decided tech-posts are more timeless than blogposts, or something.

(Until I get my head out of my ass, this page will also double as my “about”-page.)