
posts about crypto.

this year, let's encrypt added two great features:

  1. they enabled the acme v2 protocol, which allows obtaining wildcard certificates.

  2. they improved their certificate transparency support by including signed certificate timestamp (sct) records in the certificates. chrome, for example, will require scts from april 2018 on.

i've already tried out both wildcard certificates and scts, and so far they work flawlessly! i've been using the acme v2 support in the letsencrypt module of ansible 2.5 (with a bugfix), into which i invested quite some work.

four days ago, arch linux switched to openssl 1.1.0. openssl 1.1.0 was originally released at the end of last august, but since it has some breaking api changes, it's only slowly creeping into new linux distributions.

this also means that i can finally test my let's encrypt library, let's encrypt ansible role and ocspbot against openssl 1.1.0. the let's encrypt code worked out of the box (i had already incorporated a change some time earlier, even without being able to properly test it), but ocspbot needed a bit more work. there's a command line syntax change between 1.0.x and 1.1.0 when specifying http headers for ocsp calls: the old syntax was -header name value, the new one is -header name=value. so i had to add version detection (i.e. parsing the output of openssl version) to use the correct syntax depending on the version in use. but now it works with both openssl 1.0.x and 1.1.0!
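
the check itself only takes a few lines; here is a minimal sketch of the idea (not the actual ocspbot code, just an illustration):

import re
import subprocess

def ocsp_header_args(name, value, openssl_binary="openssl"):
    # parse the output of "openssl version", e.g. "OpenSSL 1.1.0g  2 Nov 2017"
    out = subprocess.check_output([openssl_binary, "version"]).decode()
    match = re.search(r"OpenSSL\s+(\d+)\.(\d+)\.(\d+)", out)
    version = tuple(int(g) for g in match.groups()) if match else (1, 0, 0)
    # 1.1.0 and newer want "-header name=value", older versions "-header name value"
    if version >= (1, 1, 0):
        return ["-header", "{0}={1}".format(name, value)]
    return ["-header", name, value]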

using openssl 1.1.0 on my server also allowed me to use x25519, which uses daniel j. bernstein's curve25519 in montgomery form, for secret key negotiation (i.e. ephemeral diffie-hellman). using it in nginx is pretty easy:

ssl_ecdh_curve X25519:secp521r1:secp384r1;

this uses x25519 as the default curve/key exchange, with ecdhe over a 521-bit nist curve as the first fallback and a 384-bit nist curve as the second. (btw, note the uppercase x in x25519: if you use the lowercase variant, nginx won't load the config.) the third curve is the only one supported by almost every browser; only a few support the 521-bit one, and right now only chrome supports x25519.

introduction

classically, revocation of certificates was accomplished with certificate revocation lists (crls). the idea was that browsers regularly download crls from the certificate authorities (cas) and check whether certificates they see are on the list. this doesn't scale well, though: nowadays there are many cas trusted by browsers in their default configuration, and crls tend to get huge.

a better solution is the online certificate status protocol (ocsp): a browser, when encountering a new certificate, asks the ocsp server of the issuing ca (the url for it is contained in the certificate) whether the certificate is still valid. this has several downsides as well. first, ocsp servers are not always reliable: if a browser cannot connect to one (or doesn't get a reply), what should it do? deny access to the site? besides that, there's another large downside: privacy. the ocsp server knows which page you are visiting, because the browser tells it by asking whether that page's certificate is valid.
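
to make the protocol a bit more concrete, here is a rough sketch of such a status query in python, assuming the pyca/cryptography and requests libraries are available (purely illustrative; the file names are placeholders, and the tooling described further down drives the openssl command line instead):

import requests
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

def ocsp_status(cert_path, issuer_path):
    # load the certificate to check and the certificate of its issuer
    with open(cert_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())
    with open(issuer_path, "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read(), default_backend())

    # the responder url is embedded in the certificate (authority information access)
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    url = next(entry.access_location.value for entry in aia
               if entry.access_method == AuthorityInformationAccessOID.OCSP)

    # build a der-encoded request for this one certificate and post it to the ca
    request = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA1()).build()
    answer = requests.post(
        url, data=request.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"})
    return ocsp.load_der_ocsp_response(answer.content).certificate_status

print(ocsp_status("cert.pem", "issuer.pem"))  # GOOD, REVOKED or UNKNOWN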

ocsp stapling was invented to improve upon this: the idea is that the webserver itself asks the ocsp server for the status of its certificate, and delivers the answer together with the certificate to the connecting browser. as the ocsp response is signed by the ca's certificate, the browser can verify that the response is valid. also, the expiration time of ocsp responses is much shorter than that of certificates, so if a certificate is revoked, existing ocsp responses will only remain valid for a couple more days.

this is pretty good already, except that a malicious webserver could simply not send the ocsp response with its certificate. if a browser cannot contact the ocsp server itself, it has no way to know whether the certificate is revoked or not. to overcome this, ocsp must-staple was invented. this is a flag in the certificate itself which says that the certificate is only valid with a valid and good ocsp response. so if a browser encounters a certificate with this flag, and the webserver isn't ocsp stapling, the browser knows that something is fishy.
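
whether a certificate carries this flag can also be checked programmatically; here is a small sketch, assuming the pyca/cryptography library (the tls feature extension containing status_request is exactly the must-staple flag):

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.x509.oid import ExtensionOID

def has_must_staple(pem_path):
    # load a pem-encoded certificate and look for the tls feature extension
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())
    try:
        ext = cert.extensions.get_extension_for_oid(ExtensionOID.TLS_FEATURE)
    except x509.ExtensionNotFound:
        return False
    return x509.TLSFeatureType.status_request in ext.value

print(has_must_staple("cert.pem"))  # the file name is just a placeholder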

unfortunately, there are some downsides. the most common webservers for linux, apache and nginx, while having ocsp stapling support, do in some situations send replies without ocsp stapling. if the certificate has the ocsp must-staple flag set, these replies result in error pages shown in browsers. and that's something you really want to avoid: visitors of your page thinking that something bad is happening.

fortunately, at least for nginx, you can specify a file containing an ocsp response directly with the ssl_stapling_file directive. unfortunately, you have to make sure you always have a good and valid ocsp response in that place, and reload nginx whenever the response is updated. other programs allow specifying an ocsp response in a similar way, such as exim with the tls_ocsp_file directive, and thus have the same problem. to solve this problem, i've started creating ocsp bot:

ocsp bot

ocsp bot is a python script, meant to be called frequently (as in: once per hour or so), which checks a set of x.509 certificates to keep up-to-date ocsp responses around. if the current ocsp response for a certificate will expire soon, or isn't there at all, it tries to get a new one. it only copies the new response to the correct place if the response is valid and good. calling it frequently ensures that, if getting a new response fails, it retries every hour (or so) until a good and valid response has been obtained. so user intervention is only necessary if the process fails several times in a row.

ocsp bot signals with its exit code whether responses have been updated, allowing you to reload/restart the corresponding services so they use the new responses.

you can install ocsp bot with pip install ocspbot from pypi.

integration with ansible

i'm using ansible to configure my server. to copy certificates and obtain ocsp responses, i'm using a custom role.

the ansible tasks for the role are as follows. ocsp bot is installed in /var/www/ocsp:

- name: Create OCSP log folder
  file: dest=/var/www/ocsp/logs state=directory
- name: Create OCSP response folder
  file: dest=/var/www/ocsp/responses state=directory
- name: Install pyyaml
  pip: name=pyyaml
- name: Install OCSP response utility
  copy: src=ocspbot.py dest=/var/www/ocsp/ocspbot.py mode=0755
  # ocspbot.py is ocspbot/__main__.py from https://github.com/felixfontein/ocspbot/
- name: Install OCSP bash script
  template: src=ocspbot.sh.j2 dest=/var/www/ocsp/ocspbot.sh mode=0755
- name: Install OCSP response utility configurations
  template: src=ocspbot.yaml.j2 dest=/var/www/ocsp/ocspbot-{{ item.key }}.yaml
  with_dict: "{{ certificates }}"
- name: Install OCSP response cronjob
  cron: name="Update OCSP responses" hour=* minute=0 job=/var/www/ocsp/ocspbot.sh state=present

the variable certificates is defined as follows:

certificates:
  nginx:
    domains:
    - example.com
    - example.net
    reload:
    - nginx
    key_owner: root
    key_group: root
    key_mode: "0400"
  mailserver:
    domains:
    - mail.example.com
    reload:
    - dovecot
    - exim
    key_owner: root
    key_group: exim
    key_mode: "0440"

the template for ocspbot.sh:

#!/bin/bash
RC=0
{% for name, data in certificates|dictsort %}

# Renew OCSP responses for {{ name }}
/var/www/ocsp/ocspbot.py /var/www/ocsp/ocspbot-{{ name }}.yaml
RESULT=$?
if [ $RESULT -ge 128 ]; then
    # the python script signals errors with a negative exit code; the shell
    # sees that as a value of 128 or more (e.g. -1 becomes 255)
    RC=1
elif [ $RESULT -gt 0 ]; then
    # at least one ocsp response was renewed: reload the services using it
{%   for service in data.reload %}
    systemctl reload {{ service }}
{%   endfor %}
fi
{% endfor %}

exit $RC

the template for the configuration yaml files:

make_backups: True

minimum_validity: 3d
minimum_validity_percentage: 42.8

ocsp_folder: /var/www/ocsp/responses
output_log: /var/www/ocsp/logs/{{ item.key }}-{year}{month}{day}-{hour}{minute}{second}.log

domains:
{% for domain in item.value.domains|sort %}
  {{ domain }}:
    cert: /var/www/certs/{{ domain }}.pem
    chain: /var/www/certs/{{ domain }}-chain.pem
    rootchain: /var/www/certs/{{ domain }}-rootchain.pem
    ocsp: {{ domain }}.ocsp-resp
{% endfor %}

the certificates are copied with the following ansible tasks:

- name: copy private keys
  copy: src=keys/{{ item.1 }}.key dest=/var/www/keys/{{ item.1 }}.key owner={{ item.0.value.key_owner }} group={{ item.0.value.key_group }} mode={{ item.0.value.key_mode }}
  with_dependent:
  - "certificates"
  - "item.0.value.domains"
  notify: update OCSP responses
- name: copy certificates
  copy: src=keys/{{ item.1 }}{{ item.2 }} dest=/var/www/certs/{{ item.1 }}{{ item.2 }} owner=root group=root mode=0444
  with_dependent:
  - "certificates"
  - "item.0.value.domains"
  - '["-rootchain.pem", "-fullchain.pem", "-chain.pem", ".pem"]'
  notify: update OCSP responses

(here, the dependent loop lookup plugin is used.)

the handler update OCSP responses is defined as follows:

- name: update OCSP responses
  command: /var/www/ocsp/ocspbot.sh
  register: result
  failed_when: result.rc != 0
  notify:
  - reload nginx
  - reload exim
  - reload dovecot

i've been using this setup for some weeks now, and it seems to work fine. so far, i'm not using ocsp must-staple certificates (except for some test subdomains). if everything stays fine for some time, i'll switch to ocsp must-staple certificates.

yesterday, i read that hosttech, my dns hoster and registrar, finally supports dnssec. actually, they have supported it since december 24th, according to an announcement i obviously missed.

anyway. setting it up went smoothly, especially with the instructions from that blog post. only fontein.de was a bit trickier, since i also had to add the public key; otherwise denic didn’t like the record.

now the only thing missing is for cablecom to actually provide a dnssec-capable dns resolver.


three days ago, let’s encrypt started their public beta. for those of you who don’t know: let’s encrypt is a certificate authority issuing free certificates for protecting https connections.

this is awesome!

for one, this allows me to get some “real” certificates (as opposed to my self-signed ones) without paying a large sum of money per year (i’m using quite a few subdomains of fontein.de and two other domains, which adds up to quite a sum even when using cheap resellers of resellers of resellers).

then, their goal is to automate the whole process as much as possible. so instead of a lot of manual work (mostly filling out forms, handling payment of fees, reacting to emails or domain challenge requests, etc.) it should be possible to run one command, maybe even as a cronjob, to get a (renewed) certificate for a domain or a set of domains.

on thursday, when the beta officially started, i tried out the official client. as mentioned already by lots of others, it has a serious downside: it is a huge python program which needs to be run as root. (not necessarily on the webserver, though; but if you run it elsewhere, you cannot automate things anymore.) but there were already alternatives: a static website telling you what to do and doing some calculations in javascript, or a tiny python client. (both are by daniel roesler.)

that’s already much better, but still not what i want, as this is hard to automate when you don’t want to run it on the webserver itself. i prefer something which can run somewhere else and can be integrated into an orchestration tool like ansible. well, so i took daniel roesler’s code (including a python 3 patch by collin anderson) and converted it into a more modular tool, which splits up the process so that, with some more scripting, it can easily be driven from a remote machine. you can find the result on github. i also created an ansible role which makes it simple to generate keys and certificate signing requests and to get complete certificates from let’s encrypt with ansible; that project can also be found on github. i’m using it in production for my personal webserver: as a result you can now look at spielwiese without having to accept my self-signed certificate! maybe others will find this useful too.

this weekend, i spent a bit of time pimping my nginx tls/ssl configuration for https. my goal was to achieve a much better score on the ssl labs ssl server test. well, my top score will never exceed T due to my self-signed certificate, but fortunately the test also shows the top score ignoring trust issues. and there, i finally got an A!

of course, there’s always a downside. since certain older clients are incapable of dealing with modern ciphers and protocols (like tls 1.2), you either have to support cipher/hash/… combinations which aren’t exactly secure, or drop support for these clients. if you want a good score from the ssl server test, you have to drop support for some clients.

in my case (and after doing quite some experiments), i decided to drop support for:

  • android 2.3.7 (and similar): no 256 bit ciphers, and no support of tls 1.1 or higher;
  • internet explorer 6 and 8 under windows xp: not even tls 1.0 (ie 6), or no tls 1.1 or higher (ie 8), and no 256 bit ciphers;
  • all kinds of java clients (java 6u45, 7u25, 8b132): while java 8 finally supports tls 1.2 (the others only go up to tls 1.0), none of them offer 256 bit ciphers.

all other clients tested on the ssl server test have no problem connecting with my config, and all result in 256 bit ciphers with forward secrecy.

the total result is 100% for key exchange and ciphers, and 95% for protocol support (i guess supporting tls 1.0 is the problem, but that’s needed for quite a few clients). you can see the result here. i probably would have gotten 100% for the certificate, too, if it hadn’t been self-signed (by my own ca) but signed by something “trustworthy”.

to achieve this, i used 4096 bit rsa keys and a 4096 bit dh setting. generating the server certificate (with the rsa keys) is pretty standard, but what i haven’t seen very often is the generation of the diffie-hellman key exchange parameters (in fact, i first saw it here):

openssl genpkey -genparam -algorithm DH -out dhparam.pem -pkeyopt dh_paramgen_prime_len:4096

this generates a diffie-hellman setup with a 4096 bit prime. a smaller prime is fine for most scenarios, but if you’re paranoid enough, 4096 bits is a good start :-) note that the prime’s bit length has a direct impact on the server (and client) load when a new tls/ssl connection with forward secrecy is initiated: the longer the prime, the slower this will be. (the handshake is superlinear in the number of bits, and probably closer to quadratic than to the complexity-theoretic optimum of O(n^(1+ɛ)) for every ɛ > 0.) for more modern clients, though, an elliptic curve based setting will be used, which is much more efficient since it uses way smaller finite fields.

anyway, here’s the config:

ssl_session_cache shared:SSL:5m;
ssl_session_timeout  5m;

ssl_dhparam /etc/nginx/dhparam.pem;

ssl_protocols TLSv1.2 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers "-ALL !ADH !aNULL !EXP !EXPORT40 !EXPORT56 !RC4 !3DES !eNULL !NULL !DES !MD5 !LOW ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA384 ECDHE-RSA-AES256-SHA384 DHE-RSA-AES256-SHA256 ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA DHE-RSA-AES256-SHA";

this leads to the following list of ciphers:

prio  ciphersuite                  protocols      pfs_keysize
1     ECDHE-RSA-AES256-GCM-SHA384  TLSv1.2        ECDH,P-256,256bits
2     DHE-RSA-AES256-GCM-SHA384    TLSv1.2        DH,4096bits
3     ECDHE-RSA-AES256-SHA384      TLSv1.2        ECDH,P-256,256bits
4     DHE-RSA-AES256-SHA256        TLSv1.2        DH,4096bits
5     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.2  ECDH,P-256,256bits
6     DHE-RSA-AES256-SHA           TLSv1,TLSv1.2  DH,4096bits

(courtesy of cipherscan.)

i’d like to also use http strict transport security, but that won’t work well if you have a self-signed certificate, thanks to its specifications (see point #2 here). also, ocsp stapling makes no sense with a self-signed certificate and without a proper ca. finally, i’d like to use public key pinning in the future, but that’s rather experimental at the moment.

one thing i’m missing quite badly is proper elliptic curve support. by that i mean good (non-nist) curves, like the ones listed as “safe” on this page, especially the higher security ones (like curve41417, ed448-goldilocks, m-511 and m-521). unfortunately, i’m afraid it will take a long time until we can use them with tls, not only because they first have to get into a standard, but because the standard then has to be implemented by clients, and enough clients must be able to use it. consider for example tls 1.2, which was defined in august 2008. while all current browsers finally support it (that wasn’t the case a couple of years ago, and similarly for tls 1.1, which has been around since april 2006), it took quite some time, and there are still a lot of older browsers out there which don’t support it. just consider the many smartphones produced in the last few years with android 4.3 and older (which includes my fairphone), which only support tls 1.0. or safari 6 included with osx 10.8, openssl 0.9.8, internet explorer mobile on windows phone 8.0, internet explorer up to version 10, and quite a few search engine bots.

note that in my above config, the elliptic curve used for diffie-hellman is p-256, a nist curve. it’s one of these nsa generated curves, and it’s not exactly optimal (search for p-256 here). unfortunately, with current tls, there’s not much you can do about this… too bad.


already two and a half weeks ago, scott vanstone died at the age of 66. scott intensively pushed, commercialized and invested in elliptic curve cryptography from its very beginning. he also co-founded the ecc conference series, which i attended eight times.

rest in peace, scott.


today i was in bad säckingen to give a talk at the kinderuni (children’s university) hochrhein, an eu-funded joint project between the two cities bad säckingen in germany and stein (ag) in switzerland. in the talk, i tried to explain a bit about cryptography to kids aged 8 to 12.

starting with caesar-type ciphers and more general substitution ciphers, i went on to explain how to crack such ciphers using frequency analysis. this included a live demonstration, which was quite fun thanks to all the contributions from the audience. after briefly hinting at how such ciphers can be improved, i quickly presented the advanced encryption standard before continuing with the second part of the presentation: public key cryptography.

i began by explaining the situation: two cats want to communicate / exchange something (like cat food :-) ), while a third cat is watching / able to intercept (eat). after mentioning diffie and hellman, i continued with a more practical example: a simple massey-omura three-pass protocol type exchange using a box and two padlocks. this was another great moment, asking the kids how they thought this could work after presenting the box and the padlocks, and then the sudden murmur of understanding when the second lock was added to the box and the box was sent back.
afterwards i asked the kids how they thought this system could be attacked, and they came up with both the brute-force variant (crack the box open) and the trickier one (a man-in-the-middle attack). great!
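
for the mathematically inclined, the padlock picture translates directly into exponentiation modulo a prime; here is a toy sketch in python (toy numbers only, no encoding or authentication, so nothing you would use for real):

import math
import secrets

def padlock(p):
    # pick a random locking exponent e coprime to p - 1, plus its inverse d
    # (pow with exponent -1 needs python 3.8 or newer)
    while True:
        e = secrets.randbelow(p - 3) + 2
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

p = 2**127 - 1                   # a mersenne prime, fine for a toy example
m = 4242424242                   # the secret ("cat food"), encoded as a number < p

a_lock, a_key = padlock(p)       # the first cat's padlock and key
b_lock, b_key = padlock(p)       # the second cat's padlock and key

box1 = pow(m, a_lock, p)         # cat a locks the box and sends it
box2 = pow(box1, b_lock, p)      # cat b adds its own lock and sends the box back
box3 = pow(box2, a_key, p)       # cat a removes its lock and sends the box again
assert pow(box3, b_key, p) == m  # cat b removes its lock and finds the cat food

just like the padlock version, this sketch has no protection against someone in the middle swapping locks, which is exactly the man-in-the-middle attack the kids found.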

the last slides on factoring-based crypto and elliptic curves were quite hard to comprehend, as i knew beforehand, but at least they now know that there’s more out there for which they have to learn more about mathematics :-)

if you’re interested, you can download the slides here.

last week i was in leiden, attending a workshop on post-quantum cryptography and quantum algorithms at the lorentz center. it was a mix of talks and work in smaller groups, where we discussed certain topics (such as quantum attacks on ideal lattices) in more detail, trying to find ways to use quantum computers to speed up attacks against primitives of post-quantum cryptography. as this is somewhat close to my research (i work on analyzing a quantum algorithm with pawel wocjan, and i work on lattices and lattice reductions), i was very happy to attend this meeting, especially since there were not just some mathematicians, but also a lot of experts on various aspects of quantum computers and quantum algorithms. now i also know a lot more about quantum computing, both from the theoretical side (like having grover’s algorithm explained to me, which is another of the fundamental quantum algorithms next to the period-finding family starting with shor’s algorithm) and the practical side (what the current technology for building quantum computers looks like, how people writing compilers for quantum computers can suffer, and how complicated it can be to turn a “simple” algorithm into a circuit). i think this was one of the most productive workshops i’ve ever attended.
unfortunately, i neither took my camera with me (the little one i had with me last week is somewhat broken: the sd card slot won’t keep the card in anymore, similar to what is described here), nor did i really have time to take any pictures, as i spent most waking hours doing mathematics. i took a few shots with my mobile phone during the excursion/conference dinner on wednesday, which happened to take place on a boat going through the grachten around leiden and the city, and which also happened to feature a very delicious and very spicy asian food buffet. and later, there was a great dessert buffet. one of the best and most original conference dinners i’ve had for quite some time :)

i got another external hard drive today. the main reason is that i want to encrypt my (current) backup harddisk, which requires reformatting the disk. but if i do so, i’m left with nothing but the original data on the laptop, and no backup; in case something goes terribly wrong, i’m screwed. i just created an encrypted partition on the new disk; this is really pretty easy and doesn’t require much command line typing, in particular once everything is set up: linux will ask me for the password as soon as i plug the usb cable in, and automatically mount the disk using that password. that’s how it should be. and so far, it works perfectly.
currently, rsync is mirroring my home directory onto the disk. as soon as it is done, i will copy some stuff over from the other backup disk (like my server’s backups) which i don’t have on the laptop’s harddisk (which is 180 gb smaller than each of the backup disks), and after that, my old backup disk will be reformatted and filled as well.
after that, i will deposit one of the backup drives somewhere outside my apartment: in case something goes wrong (like the house burning down, someone deciding to break in, …), i still have a backup somewhere. and, as it is encrypted, nobody but me can read it. (even if someone breaks in here and steals both laptop and backup, they can’t access the data without my password. and yes, i am aware of xkcd.)