

this year, let's encrypt added two great features:

  1. they enabled the acme v2 protocol, which allows obtaining wildcard certificates.
  2. they improved their certificate transparency support by including signed certificate timestamp (sct) records in the certificates. chrome will, for example, require scts from april 2018 on.

i've already tried out both wildcard certificates and scts, and so far they work flawlessly! i've been using the acme v2 support in the letsencrypt module of ansible 2.5 (with a bugfix), into which i invested quite some work.

four days ago, arch linux switched to openssl 1.1.0. openssl 1.1.0 was originally released at the end of last august, but since it has some breaking api changes, it's only slowly creeping into new linux distributions.

this also means that i can finally test my let's encrypt library, let's encrypt ansible role and ocspbot against openssl 1.1.0. the let's encrypt code worked out of the box (i had already incorporated a change for it some time ago, even without being able to properly test it), but ocspbot needed a bit more work. there's a command line syntax change between 1.0.x and 1.1.0 when specifying http headers for ocsp calls: the old syntax was -header name value, the new one is -header name=value. so i had to add version detection (i.e. parsing the output of openssl version) to use the correct syntax depending on the version in use. but now it works with both openssl 1.0.x and 1.1.0!
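the version detection can be sketched as a small shell function. this is an illustrative sketch, not ocsp bot's actual code; the exact way of parsing the version string is an assumption on my part:

```shell
#!/bin/bash
# Pick the right -header syntax for `openssl ocsp` based on the
# output of `openssl version`. Illustrative sketch only.

ocsp_header_arg() {
    # $1: full `openssl version` output, $2: header name, $3: header value
    local version
    version=$(echo "$1" | awk '{print $2}')   # e.g. "1.0.2n" or "1.1.0g"
    case "$version" in
        0.*|1.0.*)
            # old syntax: -header name value
            echo "-header $2 $3"
            ;;
        *)
            # new syntax (1.1.0 and later): -header name=value
            echo "-header $2=$3"
            ;;
    esac
}

ocsp_header_arg "$(openssl version)" Host ocsp.example.com
```

in a real script you'd splice the result into the openssl ocsp call; the host name here is just an example.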

using openssl 1.1.0 on my server also allowed me to use x25519, daniel j. bernstein's curve25519 in edwards form, for secret key negotiation (i.e. ephemeral diffie-hellman). using it in nginx is pretty easy:

ssl_ecdh_curve X25519:secp521r1:secp384r1;

this uses x25519 as the default curve/key exchange, followed by fallbacks using ecdhe with a 521-bit nist curve and then a 384-bit nist curve as a third fallback. (btw, note the uppercase x in x25519; if you use the lowercase variant, nginx won't load the config.) the third curve is the only one supported by almost every browser; only a few support the 521-bit one, and right now only chrome supports x25519.


classically, revocation of certificates was accomplished with certificate revocation lists (crls). the idea was that browsers regularly download crls from the certificate authorities (cas) and check whether certificates they see are on the list. this doesn't scale well, though. nowadays, there are many cas trusted by browsers in their default configuration, and crls tend to get huge.

a better solution is the online certificate status protocol (ocsp): a browser, when encountering a new certificate, asks the ca's ocsp server (the url for it is contained in the certificate) whether the certificate is still valid. this has several downsides as well: first, ocsp servers are not always reliable. if a browser cannot connect to one (or doesn't get a reply), what should it do? deny access to the site? besides that, there's another large downside: privacy. the ocsp server knows which page you are visiting, because the browser tells it by asking whether that page's certificate is valid.

ocsp stapling was invented to improve upon this: the idea is that the webserver itself asks the ocsp server for the status of its certificate, and delivers the answer together with the certificate to the connecting browser. as the ocsp response is signed by the ca's certificate, the browser can verify that the response is valid. also, the expiration time of ocsp responses is much shorter than that of the certificate itself, so if a certificate is revoked, existing ocsp responses will only be valid for a couple more days.
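to see what the webserver does here, you can perform such an ocsp request by hand with openssl. a sketch; the throw-away certificate and the responder url http://ocsp.example.com are made up for illustration, and the actual request (commented out) needs a real certificate and its ca chain:

```shell
# Create a throw-away certificate that carries an OCSP responder URL in
# its Authority Information Access extension, just for demonstration.
# Requires OpenSSL 1.1.1 or newer for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=ocsp-demo" \
    -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null

# Step 1: extract the responder URL from the certificate:
OCSP_URL=$(openssl x509 -in /tmp/demo.pem -noout -ocsp_uri)
echo "$OCSP_URL"

# Step 2: ask the responder and store the DER-encoded response (skipped
# here, as the demo URL doesn't exist; with a real certificate you'd run):
# openssl ocsp -issuer chain.pem -cert cert.pem \
#     -url "$OCSP_URL" -respout response.der
```

the stored response.der is exactly the kind of file a webserver can then staple to its tls handshakes.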

this is pretty good already, except that a malicious webserver could simply not send the ocsp response with its certificate. if a browser cannot contact the ocsp server itself, it has no way of knowing whether the certificate is revoked or not. to overcome this, ocsp must-staple was invented. this is a flag in the certificate itself which says that the certificate is only valid together with a valid and good ocsp response. so if a browser encounters a certificate with this flag, and the webserver isn't ocsp stapling, the browser knows that something is fishy.
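you can check whether a certificate carries the must-staple flag by looking for the tls feature extension (status_request) in openssl's text output. a sketch using a throw-away self-signed certificate as a stand-in (requires openssl 1.1.1 or newer for -addext):

```shell
# Create a throw-away self-signed certificate with the OCSP must-staple
# flag (the "TLS Feature" extension with value status_request), then
# inspect it. Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=must-staple-test" \
    -addext "tlsfeature = status_request" \
    -keyout /tmp/test.key -out /tmp/test.pem -days 1 2>/dev/null

# The must-staple flag shows up as a "TLS Feature" extension:
openssl x509 -in /tmp/test.pem -noout -text | grep -A1 "TLS Feature"
```

for a real certificate, you'd of course just run the second command on the certificate file you got from your ca.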

unfortunately, there are some downsides. the most common webservers for linux, apache and nginx, while having ocsp stapling support, do in some situations send replies without ocsp stapling. if the certificate has the ocsp must-staple flag set, these answers result in error pages shown in browsers. and that's something you really want to avoid: visitors of your page thinking there's something bad happening.

fortunately, at least for nginx, you can specify a file containing an ocsp response directly with the ssl_stapling_file directive. unfortunately, you have to make sure you always have a good and valid ocsp response at that place, and reload nginx whenever the response is updated. other programs allow specifying an ocsp response in a similar way, such as exim with the tls_ocsp_file directive, and thus have the same problem. to solve this problem, i've started creating ocsp bot:

ocsp bot

ocsp bot is a python script which should be called frequently (as in: once per hour or so) and which checks a set of x.509 certificates to obtain up-to-date ocsp responses. in case the current ocsp responses will expire soon, or aren't there at all, it will try to get a new response. it will only copy the new response to the correct place if it is valid and good. calling it frequently ensures that in case of problems getting a new response, it will retry every hour (or so) until a good and valid response has been obtained. so user intervention is only necessary if the process fails several times in a row.
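the "will expire soon" decision can be sketched as follows. this is an illustrative reimplementation, not ocsp bot's actual code; the thresholds are modeled on the minimum_validity and minimum_validity_percentage options from ocsp bot's configuration:

```shell
#!/bin/bash
# Sketch of the renewal decision: renew when the remaining validity of
# the current OCSP response drops below a threshold. All times are unix
# timestamps in seconds. Illustrative only, not ocsp bot's actual code.

needs_renewal() {
    local this_update=$1 next_update=$2 now=$3
    local min_validity=$((3 * 24 * 3600))   # absolute minimum, e.g. "3d"
    local min_validity_pct=428              # e.g. 42.8% (stored in tenths)

    local lifetime=$((next_update - this_update))
    local remaining=$((next_update - now))
    local pct_threshold=$((lifetime * min_validity_pct / 1000))

    # renew if less than the absolute minimum or the percentage remains
    if [ "$remaining" -lt "$min_validity" ] || [ "$remaining" -lt "$pct_threshold" ]; then
        echo renew
    else
        echo keep
    fi
}

# example: response valid for 7 days, checked right after it was issued
needs_renewal 0 $((7 * 24 * 3600)) 0
```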

ocsp bot will signal with its exit code whether responses have been updated, allowing you to reload/restart the corresponding service to use the new response.

you can install ocsp bot with pip install ocspbot from pypi.

integration with ansible

i'm using ansible to configure my server. to copy certificates and obtain ocsp responses, i'm using a custom role.

the ansible tasks for the role are as follows. ocsp bot is installed in /var/www/ocsp:

- name: Create OCSP log folder
  file: dest=/var/www/ocsp/logs state=directory
- name: Create OCSP response folder
  file: dest=/var/www/ocsp/responses state=directory
- name: Install pyyaml
  pip: name=pyyaml
- name: Install OCSP response utility
  copy: dest=/var/www/ocsp/ mode=0755
  # is ocspbot/ from
- name: Install OCSP bash script
  template: dest=/var/www/ocsp/ mode=0755
- name: Install OCSP response utility configurations
  template: src=ocspbot.yaml.j2 dest=/var/www/ocsp/ocspbot-{{ item.key }}.yaml
  with_dict: "{{ certificates }}"
- name: Install OCSP response cronjob
  cron: name="Update OCSP responses" hour=* minute=0 job=/var/www/ocsp/ state=present

the variable certificates is defined as follows:

certificates:
  web:                  # dict keys here are illustrative names
    reload:
      - nginx
    key_owner: root
    key_group: root
    key_mode: "0400"
  mail:
    reload:
      - dovecot
      - exim
    key_owner: root
    key_group: exim
    key_mode: "0440"

the template for the bash script:

RC=0
{% for name, data in certificates|dictsort %}

# Renew OCSP responses for {{ name }}
/var/www/ocsp/ /home/ocsp/ocspbot-{{ name }}.yaml
RESULT=$?
if [ $RESULT -gt 0 ]; then
{%   for service in data.reload %}
    systemctl reload {{ service }}
{%   endfor %}
elif [ $RESULT -lt 0 ]; then
    RC=1
fi
{% endfor %}

exit $RC

the template for the configuration yaml files:

make_backups: True

minimum_validity: 3d
minimum_validity_percentage: 42.8

ocsp_folder: /var/www/ocsp/responses
output_log: /var/www/ocsp/logs/{{ item.key }}-{year}{month}{day}-{hour}{minute}{second}.log

{% for domain in|sort %}
  {{ domain }}:
    cert: /var/www/certs/{{ domain }}.pem
    chain: /var/www/certs/{{ domain }}-chain.pem
    rootchain: /var/www/certs/{{ domain }}-rootchain.pem
    ocsp: {{ domain }}.ocsp-resp
{% endfor %}

the certificates are copied with the following ansible tasks:

- name: copy private keys
  copy: src=keys/{{ item.1 }}.key dest=/var/www/keys/{{ item.1 }}.key owner={{ item.0.value.key_owner }} group={{ item.0.value.key_group }} mode={{ item.0.value.key_mode }}
  with_dependent:
  - "certificates"
  - ""
  notify: update OCSP responses
- name: copy certificates
  copy: src=keys/{{ item.1 }}{{ item.2 }} dest=/var/www/certs/{{ item.1 }}{{ item.2 }} owner=root group=root mode=0444
  with_dependent:
  - "certificates"
  - ""
  - '["-rootchain.pem", "-fullchain.pem", "-chain.pem", ".pem"]'
  notify: update OCSP responses

(here, the dependent loop lookup plugin is used.)

the handler update OCSP responses is defined as follows:

- name: update OCSP responses
  command: /var/www/ocsp/
  register: result
  failed_when: result.rc != 0
  notify:
  - reload nginx
  - reload exim
  - reload dovecot

i've been using this setup for some weeks now, and it seems to work fine. so far, i'm not using ocsp must-staple certificates (except for some test subdomains). if everything stays fine for some time, i'll switch to ocsp must-staple certificates.

the aim of this post is to describe how to set up an encrypted arch linux installation on a headless server. while migrating to a new server during the last few days, i had to go through the procedure another time. since it is easy to screw something up, and you don't get helpful error messages without a serial console or (virtual) kvm, i wanted to share my instructions on how to set up such a machine. my previous server, hosted at strato, had a serial console via ssh included, so it wasn't that challenging to set it up. for my new server, hosted at hosttech, no serial console is available, but you can get a kvm attached. i had a kvm attached yesterday, as it makes life much easier (handling grub menus, or seeing what went wrong when networking doesn't work), and set up the machine twice to see whether i could also do it without a kvm. the instructions here now work without a serial console or kvm, though ymmv: tiny differences in systems, rescue boots etc. can send you into a situation where something doesn't work and you don't know why. so be warned, and try it out with a vm first to be on the safe side. doing this whole thing with another distribution is certainly also possible, but many details will be substantially different from what i describe here. these instructions also contain some hardening which isn't necessarily needed in all situations.

this post assumes you have a certain level of linux experience. i assume that you have a headless server sitting somewhere which has a software raid-1 disk configuration and you have a rescue system available which boots over the network. all dedicated server hosters i know provide something like that, you can usually set a flag in the customer/setup area of your hoster to start such a system on the next boot. hosttech uses riplinux for their rescue system, so some of the details i describe below might be specific to this one and not work with other such systems.

your server will end up in a state where you have to unlock the encrypted disk remotely via ssh. so as long as your server isn't compromised (which can happen if it is hosted at a place you don't control), you can unlock it after reboots without entering your password in a kvm/serial console (which might be tapped into). this also means you must unlock it after every reboot; it won't come back up on its own. (otherwise the encryption would be moot.) so don't put anything on the server which is too critical to be leaked. (you might not want to put such things on a computer in the first place, though.) despite this disadvantage, one big advantage is protection of your data: if a faulty disk of your server is replaced, or your server is decommissioned, your data cannot be extracted from the disk without knowing your encryption key. and if you wipe the luks header (ideally several times), even having your keys won't bring the data back (except if you have a backup of the header and the person having access to your key also has access to that backup).
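wiping the luks header just means overwriting the first part of the partition, where luks keeps its key slots. a sketch on a scratch file instead of a real disk (the 2 mib figure covers a luks1 header; luks2 headers can be larger, and on a real disk this destroys access to your data for good):

```shell
# Sketch: "wiping the LUKS header" = overwriting the start of the
# partition, where the key slots live. Demonstrated on a scratch file;
# on a real disk this irrevocably destroys access to the data!
dd if=/dev/zero of=/tmp/fake-disk bs=1M count=10 2>/dev/null  # stand-in "partition"
before=$(sha256sum /tmp/fake-disk | cut -d' ' -f1)

# Overwrite the first 2 MiB with random data (conv=notrunc keeps the
# rest of the "partition" intact, just like on a real block device):
dd if=/dev/urandom of=/tmp/fake-disk bs=1M count=2 conv=notrunc 2>/dev/null

after=$(sha256sum /tmp/fake-disk | cut -d' ' -f1)
[ "$before" != "$after" ] && echo "header area overwritten"
```

on a real system you'd point the second dd at the encrypted partition (e.g. /dev/md2), and you'd want to be very sure you mean it.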

the whole setup is split up into two parts:
  1. setting up a small unencrypted installation of arch linux on the server;
  2. using that unencrypted installation to set up a proper encrypted arch linux server.

i chose this approach for the strato server back then since strato's rescue system didn't offer cryptsetup/luks at the time. this approach also has fewer requirements on the rescue system, and you get a clean arch linux install from which to set up the real system. and you can use the unencrypted installation as your own personal rescue system to do maintenance on the encrypted installation, and be sure that all necessary tools are either already installed, or can easily be added the same way you usually install packages on your real server. (rescue systems don't have to offer a package manager, so installing something you need but which isn't there can be really annoying.)

one simple note before we begin: if you need to create a password or random text string, you can use dd status=none if=/dev/random of=/dev/stdout bs=1 count=15 | base64 to generate them.

also note that the arch linux wiki has a collection of useful installation guides, which cover a lot of different cases. here, i'm mostly following the steps in install from existing linux, as well as instructions from remote unlocking of the root (or other) partition. the wiki also contains a huge amount of other useful information, like howtos on setting up encrypted systems in many different variants.

one final note: you might be tempted to also try to encrypt the boot partition; while this is possible nowadays, you cannot use it for your server, as for remote unlocking you need the init ramdisk up and running, whose contents are stored on the boot partition. this will change if at some point, grub will include a possibility for remote unlocking. (if that ever happens.) (what you could also do is create a mini boot partition which allows remote unlocking the real boot partition, and then boots the system installed on the real boot partition. that doesn't really improve security by much, though.)

as all such instructions, this post comes without any warranty. you're on your own! if you have data on the server, back it up first! these instructions will delete everything on your server, and might put it into a state where it must be reset by your hoster, which might cost you money. also, if your server is currently a production machine, be sure that it is no longer actively used and all data is backed up before you start playing. if something goes wrong, don't blame me.

setting up the unencrypted arch linux installation

first boot your server into the rescue system, and begin setting up partitions. you need (at least) three partitions:
  • a partition for the unencrypted install (2 gb);
  • a boot partition for your encrypted install (2 gb);
  • a partition for the encrypted partitions (rest).

in case you use a gpt partition table, you need a bios boot partition. if you're using uefi, you'll have to ask someone else (and probably adjust some more things in my instructions, so try it out in a vm or with a serial console/kvm first!).

next, create raid arrays for partitions 2-4 (i'm assuming 1 is a bios boot partition; if not, you have to renumber the devices below):

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sda4 /dev/sdb4

the next step is to create an ext4 filesystem on /dev/md0 which will serve as the root filesystem of the unencrypted system:

mkfs.ext4 /dev/md0
mount /dev/md0 /mnt

/dev/md1 will later host the boot partition of the encrypted system, and /dev/md2 will store the encrypted root, home and swap partitions (or whatever more you want to create). it is good practice to wipe the encrypted partition, either before creating the encrypted system (by filling it with random data) or afterwards (by filling the encrypted partition with zeros). to wipe the partition before encrypting it, you can run:

openssl enc -aes-256-ctr -pass \
    pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" \
    -nosalt < /dev/zero > /dev/md2

note that the hosttech riplinux rescue system has no base64; you can instead run dd if=/dev/random bs=128 count=1 2>/dev/null | base64 on your desktop computer and put the result into the double quotes above. this step is rather slow, so you'd better run it in a screen session; on my server, it took roughly 1.5 hours for a 500 gb partition. in fact, starting a screen session is a good idea anyway, as you don't want connection failures to interrupt (and potentially destroy) your installation procedure.

to set up arch linux on /dev/md0, i followed the instructions here with some modifications; most of them were because the rescue system didn't support certain features. here are the details of what i did:

cd /tmp
sha512sum /tmp/archlinux-bootstrap-2016.06.01-x86_64.tar.gz

(the original instructions use curl -O instead of wget, but riplinux only provides the latter. also, the original url is https://, but the provided wget couldn't connect to it.)

i also downloaded the corresponding pgp signature file.


on my desktop machine, i computed the sha512 checksum of archlinux-bootstrap-2016.06.01-x86_64.tar.gz and compared it to the one on the server, and finally used gpg (gnu privacy guard) to verify the signature (see this document for details on signature verification). if the sha512 checksums match and the signature validates, everything's ready to go! (you might have to use sha256sum or even md5sum, depending on what the rescue system you're using offers. if your rescue system offers gpg, you can also validate the signature on the server itself without downloading the file a second time.)

next, continue with:

tar xzf archlinux-bootstrap-2016.06.01-x86_64.tar.gz

now you're supposed to run /tmp/root.x86_64/bin/arch-chroot /tmp/root.x86_64/ according to the instructions, but that didn't work on any of the rescue systems i tried. instead, the manual method works:

mount --bind /tmp/root.x86_64 /tmp/root.x86_64
cd /tmp/root.x86_64
cp /etc/resolv.conf etc
mount -t proc /proc proc
mount --rbind /sys sys
mount --rbind /dev dev
#mount --rbind /run run
chroot /tmp/root.x86_64 /bin/bash

i skipped mounting /run as it wasn't provided on the rescue system; everything works fine without it. the next step is to set up pacman, the arch linux package manager. the suggested step for this is pacman-key --init, which generates a gpg key using random data from /dev/random. unfortunately, on a headless server, this takes a long time. you can speed this up using haveged if your rescue system provides it, or you can generate the necessary files on another system. to do this on my local machine, i downloaded the above bootstrap archive (archlinux-bootstrap-2016.06.01-x86_64.tar.gz), extracted it, chrooted into it, and ran pacman-key --init there. (it was done after a few seconds, as opposed to my first attempt on the headless server, which i killed after 8 hours.) go into root.x86_64/etc/pacman.d, do tar cf pacman.tar gnupg, and transfer pacman.tar onto the rescue system. on the rescue system, go to /tmp/root.x86_64/etc/pacman.d/ (outside the chroot, as the chroot provides no tar!) and extract the tarball there, so that you now have a non-empty subdirectory called gnupg.
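the keyring shuffle can be summarized like this. the paths are simulated on scratch directories here; in reality they are root.x86_64/etc/pacman.d on your desktop and /tmp/root.x86_64/etc/pacman.d on the rescue system:

```shell
# Simulated keyring transfer on scratch directories. In reality,
# "desktop" is the extracted bootstrap chroot on your local machine
# (where `pacman-key --init` was run) and "rescue" is the bootstrap
# tree on the rescue system.
mkdir -p /tmp/desktop/etc/pacman.d/gnupg /tmp/rescue/etc/pacman.d
echo keyring-data > /tmp/desktop/etc/pacman.d/gnupg/pubring.gpg  # stand-in keyring

# desktop side: pack the gnupg directory generated by pacman-key --init
tar -C /tmp/desktop/etc/pacman.d -cf /tmp/pacman.tar gnupg
# ...transfer pacman.tar to the rescue system with scp...

# rescue side, OUTSIDE the chroot (the chroot has no tar): unpack it
tar -C /tmp/rescue/etc/pacman.d -xf /tmp/pacman.tar
ls /tmp/rescue/etc/pacman.d/gnupg
```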

then go back into the chroot and continue with:

pacman-key --populate archlinux

next, leave the chroot and edit the mirrorlist at /tmp/root.x86_64/etc/pacman.d/mirrorlist (the chroot provides neither vi nor nano, but the rescue system does). uncomment whatever mirror you find useful and go back into the chroot environment. make sure that some http:// mirrors are uncommented as well if you had problems with downloading the https:// bootstrap archive above. then set up the basic system inside the chroot with:

pacman -Syy
pacman -S base base-devel parted

the next steps in the official howto are to continue with pacstrap and later arch-chroot. that didn't work for me; both scripts complained about devtmpfs not being available, and arch-chroot also complained about the invalid argument --pid of unshare. i patched the scripts with

nano `which pacstrap`
nano `which arch-chroot`

by searching for devtmpfs twice (the first occurrence is at the beginning); the second match should be at these two lines:

chroot_add_mount udev "$1/dev" -t devtmpfs -o mode=0755,nosuid &&
chroot_add_mount devpts "$1/dev/pts" -t devpts -o mode=0620,gid=5,nosuid,noexec &&

i changed these to:

chroot_add_mount -o bind /dev "$1/dev" &&
chroot_add_mount -o bind /dev/pts "$1/dev/pts" &&

note that this will screw up the unmount mechanism in these scripts. that isn't nice, but it'll work nonetheless. (and as soon as you have the unencrypted system set up, you can use it to install the encrypted system, and since the unencrypted system is a full arch linux system, you won't have such problems again. that's another reason why i like to set up an unencrypted system as well.) in arch-chroot, i also had to change

SHELL=/bin/sh unshare --fork --pid chroot "$chrootdir" "$@"

to

SHELL=/bin/sh unshare --fork chroot "$chrootdir" "$@"

i.e. remove the --pid argument. finally, do

mkdir /run/shm

in case your rescue system doesn't have /run/shm (as was the case for mine). then you can proceed with installing arch linux. first, mount the partition you want to install the unencrypted system on as /mnt:

mount /dev/md0 /mnt

then you can set up the base system:

pacstrap /mnt base
genfstab -U -p /mnt >> /mnt/etc/fstab

note that on my system, this didn't use uuids for identifying the disks; using uuids is in general a good idea. to find out the uuids for the devices, run blkid and change /mnt/etc/fstab by replacing entries such as /dev/md0 with UUID=xxxxxxxxxxxx.
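the rewrite can also be done with sed; a sketch on a scratch fstab with a made-up uuid (in reality, take the uuid from blkid's output for /dev/md0):

```shell
# Sketch: replace a device path in fstab with its UUID. blkid prints
# something like: /dev/md0: UUID="f0e1d2c3-..." TYPE="ext4"
# Simulated here with a scratch fstab and a hard-coded example UUID.
uuid="f0e1d2c3-aaaa-bbbb-cccc-000011112222"   # in reality: from `blkid /dev/md0`
cat > /tmp/fstab <<'EOF'
/dev/md0 / ext4 rw,relatime,data=ordered 0 1
EOF

sed -i "s|^/dev/md0|UUID=$uuid|" /tmp/fstab
cat /tmp/fstab
```

on the real system you'd edit /mnt/etc/fstab, of course, and repeat this for every device entry.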

after that, continue with:

arch-chroot /mnt
echo unencrypted-rescue-system > /etc/hostname
ln -s /usr/share/zoneinfo/Europe/Zurich /etc/localtime
echo en_US.UTF-8 UTF-8 > /etc/locale.gen
echo LANG=en_US.UTF-8 > /etc/locale.conf

obviously, you should replace unencrypted-rescue-system and Europe/Zurich and possibly also en_US with something more fitting. next, run

passwd

to set a root password. generate a random one and write it down in a safe place. (you can also later log in with ssh and change it, if you fear the rescue system is too nosy.)

next, you have to configure your networking. first, you have to find your systemd network device name. it is usually of the form enpXsY (assuming you don't use wlan for your server); to find the right name (your rescue system might use old ethX names), run lspci and look for Ethernet controller. if you find something like

XX:YY.x Ethernet controller

you can extract XX and YY for enpXXsYY right away. two caveats though: first, you need to strip leading zeros, and second, the numbers given by lspci are in hexadecimal notation, while the ones in enpXsY must be in decimal, so you'll have to convert them.
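printf can do the hex-to-decimal conversion (and the stripping of leading zeros) for you; a quick sketch, with 1a:00.0 as an example lspci position:

```shell
#!/bin/bash
# Convert an lspci position like "1a:00.0" into a predictable network
# interface name like enp26s0. printf's %d understands the 0x prefix,
# which also takes care of leading zeros.
lspci_pos="1a:00.0"                      # example position from `lspci`
bus=${lspci_pos%%:*}                     # "1a"
slot=${lspci_pos#*:}; slot=${slot%%.*}   # "00"
printf 'enp%ds%d\n' "0x$bus" "0x$slot"
```

for the example position 1a:00.0 this prints enp26s0.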

as soon as you found out the name of your network interface, create /etc/netctl/wired with the following content:

Description='main ethernet connection'
Interface=enpXsY  # REPLACE THIS!

DNS=('' '')


you need to adjust the interface name, the ipv4 and ipv6 addresses, the network masks and the dns servers, obviously. you can also use dhcp if your hoster supports that. next, continue with:

netctl enable wired
pacman -S openssh grub lvm2
systemctl enable sshd.service

then you have to edit /etc/mkinitcpio.conf and insert mdadm_udev in the HOOKS="..." line somewhere before filesystems. (otherwise, the system won't come up again, as it won't be able to assemble the raid arrays.) next, edit /etc/ssh/sshd_config and add

PermitRootLogin yes

at its end. (otherwise you won't be able to login to the system at all, as root is the only user.)

then run:

mdadm -E --scan >> /etc/mdadm.conf
mkinitcpio -p linux
grub-install --target=i386-pc /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

finally, unmount and reboot:

exit
cd /
umount -R /mnt
reboot

the unencrypted "rescue" system should be ready to go.

setting up the encrypted arch linux installation

log into the newly set up unencrypted system via ssh root@your-server, using the root password you set above. (now is the time to change it if you don't trust the rescue system too much.)

now continue by installing two important packages and creating a temporary filesystem:

pacman -yS cryptsetup screen
mount -t ramfs -o size=1M none /mnt

start a screen session and continue in there. we'll need the temporary filesystem to transfer the master key for the encrypted partition without writing it to disk. instead of creating the master key on the headless server (which probably doesn't have enough entropy), create it on your desktop computer:

dd if=/dev/random of=server-masterkey bs=1024 count=1
scp server-masterkey root@your-server:/mnt

back on your server, inside the screen session, create the encrypted disk:

cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 -h sha256 -i=10000 \
    --verify-passphrase --master-key-file /mnt/server-masterkey luksFormat /dev/md2

you have to enter a passphrase for your encrypted partition. use a longer randomly generated password and store it safely, or something which is long enough and which you can remember. the setting -i=10000 i used is rather paranoid: hashing the password takes roughly 10 seconds on your server. this makes most brute-force attacks impossible, but also makes unlocking (and all other cryptsetup operations on this partition) slow. feel free to decrease the number, but since these operations need to be done only very seldom (like now during installation, and once after each reboot of your server), there's no need to do that.

then get rid of the master key temp filesystem and open the encrypted partition:

umount /mnt
cryptsetup luksOpen /dev/md2 cryptdisk

if you didn't wipe the space occupied by the encrypted partition earlier with random data, you can now run dd if=/dev/zero of=/dev/mapper/cryptdisk bs=1M. (do that inside a screen session, as it will take a lot of time!)

create a lvm on the encrypted partition:

pvcreate /dev/mapper/cryptdisk

vgcreate server /dev/mapper/cryptdisk

lvcreate --size 32G --name root server
lvcreate --contiguous y --size 4G --name swap server
lvcreate --extents +100%FREE --name home server
here, i'm creating:
  • a root volume with 32 gb,
  • a swap volume with 4 gb,
  • a home volume occupying the remaining space.

adjust the sizes to your needs. next, create filesystems and mount everything:

mkfs.ext4 /dev/md1   # the boot partition
mkfs.ext4 /dev/mapper/server-root
mkfs.ext4 /dev/mapper/server-home
mkswap /dev/mapper/server-swap

swapon /dev/mapper/server-swap
mount /dev/mapper/server-root /mnt
mkdir /mnt/boot /mnt/home
mount /dev/mapper/server-home /mnt/home
mount /dev/md1 /mnt/boot

you can now install arch linux:

pacman -S arch-install-scripts
pacstrap /mnt base
genfstab -U -p /mnt >> /mnt/etc/fstab

check /mnt/etc/fstab. it should have uuids this time.

continue with:

arch-chroot /mnt
echo your-server > /etc/hostname
ln -s /usr/share/zoneinfo/Europe/Zurich /etc/localtime
echo en_US.UTF-8 UTF-8 > /etc/locale.gen
pacman -Sy grub openssh screen cryptsetup sudo busybox base-devel
pacman -Sy wget dropbear mkinitcpio-nfs-utils
modprobe dm-mod
mdadm -E --scan >> /etc/mdadm.conf

obviously, replace your-server, Europe/Zurich and en_US with something more fitting. now check the bottom of /etc/mdadm.conf. does it contain all three raid arrays (or how many you created)?

next, run

cryptsetup luksAddKey /dev/md2

and add a key with a long random string as a password. i'll refer to this key as LONG_PASSWORD from now on. later, you can use this key to remotely unlock the disk on boot time. the new password should end up in slot 1. you can check the slots with:

cryptsetup luksDump /dev/md2

next, create the network configuration /etc/netctl/wired with the same content as in the unencrypted system. then edit /root/.ssh/authorized_keys (you might have to create /root/.ssh first) and paste in some public ssh keys you'll want to use for login later. (we'll disable root login with password below, so you really need to do this!)

then continue with:

netctl enable wired
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key

next edit /etc/ssh/sshd_config and add/change:

Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key

PermitEmptyPasswords no
PermitRootLogin without-password
StrictModes yes
AllowUsers root

(add other user names to AllowUsers which you will create later and want to use to remotely login via ssh. if you're using a less-modern openssh ssh client, or some non-openssh client, you might want to tweak the mentioned ciphers, macs and key exchange algorithms because you won't be able to connect to your server otherwise.)

remove all other host key lines (or comment them out). next, edit /etc/ssh/moduli and remove all lines with less than 4096 bits (the fifth column contains the bit length). these last two steps (editing files in /etc/ssh) are not required, but do harden your system. next, run

systemctl enable sshd.service

so that you can actually ssh into your new system after reboot. we now want to set up remote unlocking. (also see this document if you want to know more.) first, you should generate an rsa key for ssh communication. don't create an ecc key here, as dropbear (which we'll use) doesn't support ecc keys (you could also use tinyssh, but that uses a different key format than openssh, so you'll have to do some more work). on your desktop machine, run:

ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa_server_unlocking
scp ~/.ssh/ root@your-server:/mnt/root/

store the private key somewhere safe; you'll need it (together with the password LONG_PASSWORD) to remotely unlock your server. now, run the following on the server:

mkdir -p /build
chgrp nobody /build
chmod g+w /build
cd /build
for i in mkinitcpio-netconf mkinitcpio-dropbear mkinitcpio-utils; do
    tar -xvzf $i.tar.gz
    chown -R nobody:nobody $i
    cd $i
    sudo -u nobody makepkg
    chown root:root $i-*.xz
    mv $i-*.xz ..
    cd ..
    rm -rf $i
done
mv *.xz /root
cd /root
rm -rf /build
cat /root/ > /etc/dropbear/root_key
for i in mkinitcpio-netconf mkinitcpio-dropbear mkinitcpio-utils; do
    pacman -U $i-*.tar.xz
done
make sure everything builds and installs fine. then edit /etc/mkinitcpio.conf:
  1. change the MODULES="" line to MODULES="dm_mod dm_crypt aes_x86_64 raid1";
  2. insert lvm2 mdadm_udev netconf dropbear encryptssh in the HOOKS="..." string before filesystems, and add shutdown at the end. the line should now look like HOOKS="base udev autodetect modconf block lvm2 mdadm_udev netconf dropbear encryptssh filesystems keyboard fsck shutdown".

next, modify /usr/lib/initcpio/hooks/dropbear so that the lines starting the server look like:

echo "Starting dropbear (on port 12345)"
/usr/sbin/dropbear -E -s -j -k -p 12345

i.e. add "-p 12345" to the dropbear call and printed text. this will be the port you have to connect with ssh to to remotely unlock. you can also skip this, then you'll have to use the standard ssh port (22).

continue with editing /etc/default/grub. modify the GRUB_CMDLINE_LINUX variable to

GRUB_CMDLINE_LINUX="cryptdevice=/dev/md2:server ip=:::::eth0:dhcp"

or, to be on the safe side, to


(i had trouble with the first variant some years ago). replace xxx.xxx.xxx.xxx with your server's ip, yyy.yyy.yyy.yyy with the gateway, zzz.zzz.zzz.zzz with the netmask and your-server with your server's hostname. you also might have to adjust eth0 in case your server has more than one network interface. (for me, eth0 always worked.)

next, set a root password, create the init ramdisk, and set up the boot loader:

passwd
mkinitcpio -p linux
grub-install --recheck /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

in case you're using an old mbr partition table, you might have to set the bootable flag for the boot partition.

then, exit the chroot and reboot:

exit
umount -R /mnt
swapoff /dev/mapper/server-swap
cryptsetup luksClose cryptdisk
reboot

unlocking the encrypted arch linux installation

your server should now boot into the init ramdisk, start dropbear, and wait for a connection to unlock your encrypted partition. to unlock it, run:

echo LONG_PASSWORD | ssh -p 12345 -i ~/.ssh/id_rsa_server_unlocking root@your-server

this should unlock your encrypted disk (which takes around 10 seconds if you followed my steps to the letter), and then boot arch linux. you'll be able to log in as root with the ssh keys you inserted earlier. from that point on, you can configure the system like any other linux installation via ssh (for example, by using ansible).

if the system doesn't come up or doesn't start networking (you can use ping to see whether the network interface is up; as soon as it responds to ping after reboot, you can try the above ssh unlocking command), you can either reboot into your hoster's rescue system, mount and chroot into the unencrypted system, and rewrite the boot loader to boot the unencrypted system again, and/or use a serial console and/or kvm to find out what went wrong. anyway, debugging such a situation is really hard, so good luck! but if your system is close enough to mine and you followed the above steps correctly (and i didn't screw something up), it should work.

yesterday, i read that hosttech, my dns hoster and registrar, finally supports dnssec. actually, they have supported it since december 24th, according to an announcement i obviously missed.

anyway. setting it up went smoothly, especially with the instructions from that blog post. only for my .de domain was it a bit trickier, since i also had to add the public key, as otherwise denic didn't like the record.

now the only thing missing is that cablecom actually provides a dnssec-capable dns resolver…

posted in: computer

three days ago, let’s encrypt started their public beta. for those of you who don’t know: let’s encrypt is a certificate authority issuing free certificates for protecting https connections.

this is awesome!

for one, this allows me to get some “real” certificates (as opposed to my self-signed ones) without paying a larger sum of money per year (i’m using quite a few subdomains and two other domains, which results in quite a sum even when using cheap resellers of resellers of resellers).

then, their goal is to automate the whole process as much as possible. so instead of a lot of manual work (mostly filling out forms, handling payment of fees, reacting to emails or domain challenge requests, etc.) it should be possible to run one command, maybe even as a cronjob, to get a (renewed) certificate for a domain or a set of domains.

on thursday, when the beta officially started, i tried out the official client. as mentioned already by lots of others, it has a serious downside: it is a huge python program which needs to be run as root. (it doesn’t necessarily have to run on the webserver itself, though in that case you cannot automate things anymore.) but there were already alternatives: a static website telling you what to do and doing some calculations in javascript, or a tiny python client. (both are by daniel roesler.)

that’s already much better, but still not what i want, as this is hard to automate when you don’t want to run it on the webserver itself. i prefer something which can run somewhere else, and which can be integrated into an orchestration tool like ansible. well, so i took daniel roesler’s code (including a python 3 patch by collin anderson) and converted it into a more modular tool, which splits up the process so that, with some more scripting, it can easily be driven from remote. you can find the result on github. i also created an ansible role which lets you simply generate keys and certificate signing requests and get complete certificates from let’s encrypt with ansible; that project can also be found on github. i’m using it in production for my personal webserver: as a result you can now look at spielwiese without having to accept my self-signed certificate! maybe others will find this useful too.

this weekend, i spent a bit of time pimping my nginx tls/ssl configuration for https. my goal was to achieve a much better score on the ssl labs ssl server test. well, my top score will never exceed T due to my self-signed certificate, but fortunately the test also shows the top score ignoring trust issues. and there, i finally got an A!

of course, there’s always a downside. since certain older clients are incapable of dealing with modern ciphers and protocols (like tls 1.2), you either have to support cipher/hash/… combinations which aren’t exactly secure, or drop support for these clients. if you want a good score from the ssl server test, you have to drop support for some clients.

in my case (and after doing quite some experiments), i decided to drop support for:

  • android 2.3.7 (and similar): no 256 bit ciphers, and no support of tls 1.1 or higher;
  • internet explorer 6 and 8 under windows xp: not even tls 1.0 (ie 6), or no tls 1.1 or higher (ie 8), and no 256 bit ciphers;
  • all kinds of java (java 6u45, 7u25, 8b132): while java 8 finally supports tls 1.2 (the others only up to tls 1.0), there are no 256 bit ciphers.

all other clients tested on the ssl server test have no problem connecting with my config, and all result in 256 bit ciphers with forward secrecy.

the total result is 100% for key exchange and ciphers, and 95% for protocol support (i guess supporting tls 1.0 is the problem, but that’s needed for quite some clients). you can see the result here. i probably would have gotten 100% for the certificate, too, if it had been signed not by my own ca but by something “trustworthy”.

to achieve this, i used 4096 bit rsa keys and a 4096 bit dh setting. generating the server certificate (with the rsa keys) is pretty standard, but what i haven’t seen very often is the generation of the diffie-hellman key exchange parameters (in fact, i first saw it here):

openssl genpkey -genparam -algorithm DH -out dhparam.pem -pkeyopt dh_paramgen_prime_len:4096

this generates a diffie-hellman setup with a 4096 bit prime. a smaller prime is fine for most scenarios, but if you’re paranoid enough, 4096 bits is a good start :-) note that the prime’s bitlength has a direct impact on the server (and client) load when a new tls/ssl connection with forward secrecy is initiated: the longer the prime, the slower the handshake. (the handshake is superlinear in the number of bits, and probably closer to quadratic than to the complexity-theoretic optimum of O(n^(1+ε)) for every ε > 0.) for more modern clients, though, an elliptic curve based setting will be used, which is much more efficient since it uses way smaller finite fields.
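you can observe the superlinear growth with python’s built-in modexp, which is the core operation of a finite-field dh handshake. this is a rough illustration i put together — random (non-prime) moduli, and not a benchmark of openssl or any tls stack:

```python
# timing sketch: modular exponentiation cost for 2048- vs 4096-bit moduli
import random
import time

def modexp_cost(bits, reps=10):
    rng = random.Random(0)
    # full-size odd modulus; primality doesn't matter for a cost measurement
    p = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
    x = rng.getrandbits(bits)
    start = time.time()
    for _ in range(reps):
        pow(2, x, p)  # the expensive step of a dh key exchange
    return time.time() - start

ratio = modexp_cost(4096) / modexp_cost(2048)
print("4096-bit dh math is %.1fx slower than 2048-bit here" % ratio)
```

on my understanding, doubling the prime length should cost you roughly a factor of four to eight per handshake, which is why the elliptic curve suites (with their ~256 bit fields) are so much cheaper.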

anyway, here’s the config:

ssl_session_cache shared:SSL:5m;
ssl_session_timeout 5m;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_protocols TLSv1.2 TLSv1;
ssl_prefer_server_ciphers on;

this leads to the following list of ciphers:

prio  ciphersuite                  protocols      pfs_keysize
1     ECDHE-RSA-AES256-GCM-SHA384  TLSv1.2        ECDH,P-256,256bits
2     DHE-RSA-AES256-GCM-SHA384    TLSv1.2        DH,4096bits
3     ECDHE-RSA-AES256-SHA384      TLSv1.2        ECDH,P-256,256bits
4     DHE-RSA-AES256-SHA256        TLSv1.2        DH,4096bits
5     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.2  ECDH,P-256,256bits
6     DHE-RSA-AES256-SHA           TLSv1,TLSv1.2  DH,4096bits

(courtesy of cipherscan.)

i’d like to also use http strict transport security, but that won’t work well if you have a self-signed certificate, thanks to its specifications (see point #2 here). also, ocsp stapling makes no sense with a self-signed certificate and without a proper ca. finally, i’d like to use public key pinning in the future, but that’s rather experimental at the moment.

one thing i’m missing quite badly is proper elliptic curve support. by that i mean good (non-nist) curves, like the ones listed as “safe” on this page, especially the higher-security ones (like curve41417, ed448-goldilocks, m-511 and m-521). unfortunately, i’m afraid it will take a long time until we can use them with tls, not only because they first have to get into a standard, but then the standard has to be implemented by clients and enough clients must be able to use it. consider for example tls 1.2, which was defined in august 2008. while all current browsers finally support it (that wasn’t the case a couple of years ago, similar to tls 1.1, which has been around since april 2006), it took quite some time, and there are still a lot of older browsers out there which don’t support it. just consider the many smartphones produced in the last years with android 4.3 and older (which includes my fairphone), which only have tls 1.0 support. or safari 6 included with osx 10.8, openssl 0.9.8, internet explorer mobile on windows phone 8.0, internet explorer up to version 10, and quite some search engine bots.

note that in my above config, the elliptic curve used for diffie-hellman is p-256, a nist curve. it’s one of these nsa generated curves, and it’s not exactly optimal (search for p-256 here). unfortunately, with current tls, there’s not much you can do about this… too bad.


i’m very happy to announce that the lattice reduction library plll has finally been released as open source under the MIT license by the university of zurich. during my recent years at the university of zurich, i’ve been mainly working on this c++ library. it supports a wide range of lattice reduction algorithms and svp solvers, and makes heavy use of c++ templates and has support for c++11‘s move operations.

in 2011, i began implementing it since i wasn’t happy with some of the behavior of ntl‘s lattice reduction algorithms (mainly: in case of fatal numerical instability, they just terminate the program, and the library cannot be used in more than one thread at the same time). back then, ntl’s main competitor fplll didn’t support bkz reduction, so i decided to try things out myself. after some time (years), my initial experiments grew into a full library supporting not only the more common lll and bkz algorithms as well as svp solving by enumeration, but also several different algorithms for lattice reduction and svp solving which are described in the literature but for which it is sometimes quite hard to find a working implementation. though the implementations of these algorithms are still more on the experimental side, the basic algorithms such as lll, bkz, potentially combined with deep insertions, and enumeration, are well-tested over the years. (in fact, some large-scale lattice reduction experiments i did for these algorithms yielded some results in the svp challenge’s hall of fame).

in case you’re interested in this library, feel free to play around with it! in case you have any questions, encounter problems, or want to give feedback, feel free to contact me by email.

last summer, after buying a new four terabyte harddisk for my main computer (replacing the old and notoriously full one terabyte harddisk), i wanted to try something new. instead of using ext2/3/4, i decided to switch to the btrfs filesystem. the main reason i wanted to use btrfs was the ability to quickly create snapshots of the current disk content on the fly, thus being able to browse through how the disk looked some time ago. the snapshots essentially store only the difference between the old data and the new, so they are practically free if the disk content isn’t changing a lot between snapshots. which, at least for me, is usually the case.
i’m using btrfs only for the /home partition, to which i added a subdirectory /home/backup to store backups. in this post, i want to explain how to set up a simple system which makes a snapshot every ten minutes, and cleans up older snapshots so that

  • for snapshots older than a day, only one snapshot is left for every hour, and
  • for snapshots older than a week, only one snapshot is left for every day, and
  • for snapshots older than a year, only one snapshot is left for every month.

so even with a lot of changes inbetween, the number of snapshots shouldn’t grow too big, and thus not too much space will be wasted, while still allowing access to old (and deleted!) data. note that changing the interval from every ten minutes to, say, every minute should be no problem. if you ever accidentally delete something, you’ll have no problem resurrecting the file even if you only notice hours, days, weeks or even months later. (provided that the file has already been around for at least a similar time interval.)
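the three rules can be sketched as a tiny age classifier — my own illustration, matching the rules as stated above (note that the actual cleanup script further down uses a 31-day threshold for the coarsest bucket rather than a year):

```python
# sketch of the retention rules above: which granularity applies at which snapshot age
import datetime

def retention_bucket(age):
    if age >= datetime.timedelta(days=365):
        return "one per month"
    if age >= datetime.timedelta(weeks=1):
        return "one per day"
    if age >= datetime.timedelta(days=1):
        return "one per hour"
    return "keep every snapshot"

print(retention_bucket(datetime.timedelta(hours=30)))
# → one per hour
```

within each bucket, all snapshots except one representative get deleted; everything younger than a day is kept untouched.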

one note regarding btrfs in general. while btrfs is still marked experimental, it seems to be pretty stable in practice. the only caveat is that you should never fill btrfs disks too much; always make sure enough space is left. that shouldn’t be a problem for my four terabyte disk for quite some time, but in case you love to quickly fill space, better get more than one drive and join them (via raid zero or something like that). also, note that one btrfs filesystem can span several partitions and disks, and that it can internally do several raid modes. in fact, that’s something i want to try out soon, by combining a bunch of older harddisks i still have lying around in a jbod array and putting a raid one btrfs filesystem over all of them. note that btrfs will in the future allow this to be configured in an even more refined way (like increasing redundancy, or using different configurations per file), and that it’s always possible to update a filesystem on the fly while it is mounted.

creating read-only snapshots.

creating a read-only snapshot is simple: just run btrfs subvolume snapshot -r /home /home/backup/name_of_snapshot. (if you want snapshots you can also write to, drop the -r.) for example, you could create a little shell script:

#!/bin/bash
TIMESTAMP=`date +"%Y-%m-%d-%H%M%S"`
btrfs subvolume snapshot -r /home /home/backup/$TIMESTAMP
rm -rf /home/backup/$TIMESTAMP/backup/20*

this creates a read-only snapshot based on the current date, and cleans up the /backup subdirectory of /home/backup in the snapshot. after all, we don’t want to recursively increase the tree’s depth by having links to all older snapshots in each snapshot.

setting up your computer to execute this script regularly is quite simple. let’s say it is stored as /home/backup/ with read and execute privileges for root; then you could run crontab -e as root and add a line like
1,11,21,31,41,51 * * * * /bin/bash -c "/home/backup/ &>> /var/log/snapshot.log"
this runs the script at xx:01, xx:11, xx:21, xx:31, xx:41 and xx:51 for every hour xx on every day during the whole year. the script’s output (which should be essentially something like Create a snapshot of '/home' in '/home/backup/2014-04-27-000100') is stored in a log file /var/log/snapshot.log.

cleaning up.

cleaning up is a little more complicated. deleting a snapshot itself is easy: just run btrfs subvolume delete /home/backup/name_of_snapshot. to delete snapshots according to the rules i wrote up above, i wrote a little python script:

#!/usr/bin/python2
import os, os.path, datetime, subprocess

class CannotParse(Exception):
    pass

# Find all directories in /home/backup
now =
td_day = datetime.timedelta(days=1)
td_week = datetime.timedelta(weeks=1)
td_month = datetime.timedelta(days=31)
monthold = dict()
weekold = dict()
dayold = dict()
rest = dict()
for file in os.listdir('/home/backup'):
    if not os.path.isfile(os.path.join('/home/backup', file)):
        # Interpret name as timestamp
        data = file.split('-')
        try:
            if len(data) == 4:
                year = int(data[0])
                month = int(data[1])
                day = int(data[2])
                if len(data[3]) == 4:
                    hour = int(data[3][0:2])
                    minute = int(data[3][2:4])
                    second = 0
                elif len(data[3]) == 6:
                    hour = int(data[3][0:2])
                    minute = int(data[3][2:4])
                    second = int(data[3][4:6])
                else:
                    raise CannotParse()
                timestamp = datetime.datetime(year, month, day, hour, minute, second)
                isodate = timestamp.isocalendar() + (hour, minute, second)
            else:
                raise CannotParse()

            age = now - timestamp
            if age >= td_month:
                id = isodate[0:2]
                d = monthold
            elif age >= td_week:
                id = isodate[0:3]
                d = weekold
            elif age >= td_day:
                id = isodate[0:4]
                d = dayold
            else:
                id = isodate[0:6]
                d = rest
            if id not in d:
                d[id] = list()
            d[id].append([timestamp, file])
        except Exception:
            pass

def work(d, title):
    print title
    for id in d:
        list = d[id]
        list.sort()
        if len(list) > 1:
            for v in list[1:]:
                retcode =['btrfs', 'subvolume', 'delete', '/home/backup/' + str(v[1])])
                if retcode != 0:
                    print 'Error! (Return code ' + str(retcode) + ')'

work(monthold, "MONTH OLD:")
work(weekold, "WEEK OLD:")
work(dayold, "DAY OLD:")
work(rest, "REST:")

i stored it as /home/backup/ and made it runnable by root, and scheduled it to be run every hour at a fixed minute offset (say, xx:59) by running crontab -e and adding
59 * * * * /bin/bash -c "/home/backup/ &>> /var/log/snapshot.log"
again, the output is put into /var/log/snapshot.log.


two weeks ago, i received my first smartphone ever: the fairphone. (which now makes me one of two fairphone owners i personally know of.) unpacking it was quite an experience, as the fairphone team paid a lot of attention to small details:

while unpacking i was accompanied by some of the cats, though they didn’t seem to have any interest in the phone itself:

well, the phone’s protective film told me to open it up, and so i did. after all, i own that phone. everything important is labelled:

exchanging the battery or adding up to two sim cards is easily possible. after re-inserting the battery and turning the phone on, i was greeted with a nice introductory video. (video playback works fine, apparently.) the main home screen greets me with a large button to “enjoy some peace”, a nice mode disabling all network capabilities for a configurable interval, allowing you to get away from emails, phone calls and other messages. there aren’t too many apps installed; in particular, all google apps (play store, maps, …) are missing, but can be installed by clicking a button. well, i thought about it for some time, but then decided against it, at least for now. there’s no need to start spilling my data around the world…
one of the first things i enabled, though, was the phone encryption – something like a hard disk encryption, i guess. by password. obviously, since typing in passwords on a phone is quite painful in the beginning, the password is not optimal yet, but that will change as soon as my typing capabilities get better :) (let’s see if they use something like luks, which would allow me to just set a new password, or whether the whole thing has to be re-encrypted…)
anyway, since smartphones are quite low-security devices, i’m not sure yet what exactly i will use it for after all… but it definitely won’t get full access to my server and other computers’ data, which probably should include email as well.