
this year, let's encrypt added two great features:

  1. they enabled the acme v2 protocol, which allows obtaining wildcard certificates.

  2. they improved their certificate transparency support by including signed certificate timestamp (sct) records in the certificates. chrome, for example, will require scts from april 2018 on.

i've already tried out both wildcard certificates and scts, and so far they work flawlessly! i've been using the acme v2 support in the letsencrypt module of ansible 2.5 (with a bugfix), into which i invested quite some work.

four days ago, arch linux switched to openssl 1.1.0. openssl 1.1.0 was originally released at the end of last august, but since it has some breaking api changes, it's only slowly creeping into new linux distributions.

this also means that i can finally test my let's encrypt library, let's encrypt ansible role and ocspbot against openssl 1.1.0. the let's encrypt code worked out of the box (i had already incorporated a change for 1.1.0 some time ago, even without being able to properly test it), but ocspbot needed a bit more work. there's a command line syntax change between 1.0.x and 1.1.0 for specifying http headers in ocsp calls: the old syntax was -header name value, the new one is -header name=value. so i had to add version detection (i.e. parsing the output of openssl version) to use the correct syntax for the installed version. now ocspbot works with both openssl 1.0.x and 1.1.0!
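the detection boils down to a few lines. here's a minimal sketch of the idea in python (the function name and the exact parsing are my illustration, not necessarily ocspbot's actual code):

import subprocess

def ocsp_header_args(name, value):
    # return arguments for "openssl ocsp -header" in the syntax matching
    # the installed openssl version; 'openssl version' prints something
    # like 'OpenSSL 1.1.0g  2 Nov 2017'
    output = subprocess.check_output(['openssl', 'version']).decode('utf-8')
    version = output.split()[1]
    if version.startswith('1.0.') or version.startswith('0.9.'):
        # old syntax: -header name value
        return ['-header', name, value]
    # new syntax (openssl 1.1.0 and later): -header name=value
    return ['-header', '{0}={1}'.format(name, value)]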

using openssl 1.1.0 on my server also allowed me to use x25519, which uses daniel j. bernstein's curve25519 in montgomery form, for session key negotiation (i.e. ephemeral diffie-hellman). using it in nginx is pretty easy:

ssl_ecdh_curve X25519:secp521r1:secp384r1;

this uses x25519 as the default curve/key exchange, with ecdhe over a 521-bit nist curve as the first fallback and a 384-bit nist curve as the second fallback. (btw, note the uppercase x in x25519 — if you use the lowercase variant, nginx won't load the config.) the third curve is the only one supported by almost every browser; only a few support the 521-bit one, and right now only chrome supports x25519.
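you can check what a server actually negotiates with openssl's s_client. a quick sketch (the -curves option is the openssl 1.1.0 spelling; example.com stands in for your server):

# force x25519 and show the negotiated key exchange;
# look for a line like "Server Temp Key: X25519, 253 bits"
echo | openssl s_client -connect example.com:443 -curves X25519 2>/dev/null | grep 'Temp Key'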

assume you want to allow users (or programs) to upload files/data/... to your website without having to write a script/cgi/... which handles the uploading. something very simple, which just stores the files somewhere so you can analyze them later. this is, for example, very useful for creating a reporting endpoint for a content security policy without using specialized software.

if you use nginx as your webserver, there's a simple solution for this. the idea is to use the client_body_in_file_only directive: instead of having nginx buffer the upload and pass it on to a reverse proxy as a regular post/put body, nginx dumps the request body to a file on disk and keeps it there (the file name would be available to a reverse proxy via the $request_body_file variable).

unfortunately, this doesn't work together with a plain return xxx; you need a proxy_pass yyy;, while a plain return would have been my preferred solution. but there's a little trick: you can ask nginx to also listen on another port, say 4000, and simply return a fixed message there. then, for the main listener (on ports 80/443), you combine the client_body_in_file_only directive with proxy_pass http://127.0.0.1:4000. this looks as follows:

server {
    listen 127.0.0.1:4000;

    location / {
        return 200 "Thank you for your report.\n";
    }
}

limit_req_zone $binary_remote_addr zone=peripzone:10m rate=5r/m;

server {
    listen *:80;

    location = /csp-reporting {
        # We just allow POST actions. Add PUT if you want to
        # support PUT as well. (Not needed for CSP reports.)
        limit_except POST {
            deny all;
        }

        # Where to store the files on disk
        client_body_temp_path      /var/www/reports/csp/;
        # Store the file on disk, and don't delete it, no matter
        # what the proxy returns.
        client_body_in_file_only   on;
        # Store at most 64k on disk. That should be sufficient
        # for CSP reports.
        client_body_buffer_size    64K;
        client_max_body_size       64K;
        # Give the client 10 seconds to upload.
        client_body_timeout        10s;

        # Do rate limiting
        limit_req                  zone=peripzone burst=20 nodelay;

        # Now proxy to the small internal server we started
        # above, and don't pass the uploaded file on to it.
        # (proxy_pass_request_body is the documented way to
        # suppress the body; clear Content-Length accordingly.)
        proxy_pass_request_body    off;
        proxy_set_header           Content-Length "";
        proxy_pass                 http://127.0.0.1:4000/;
    }
}

note that i added rate limiting (via the ngx_http_limit_req_module module), allowing on average five uploads per minute per remote ip, with bursts of up to 20 uploads. see the module's documentation for more information on tuning the rate limiting; you can, for example, also add limits per server instead of just one per remote ip.
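the dumped files simply contain the raw request bodies; for csp reports, that's one json document per file. a minimal sketch in python for sifting through them (the directory matches client_body_temp_path above; the csp-report fields are the ones browsers send per the csp specification):

import json
import os

REPORT_DIR = '/var/www/reports/csp/'

# each dumped file is one raw request body; for csp reports, a json
# document with a top-level "csp-report" object
for name in sorted(os.listdir(REPORT_DIR)):
    path = os.path.join(REPORT_DIR, name)
    try:
        with open(path, 'r') as f:
            report = json.load(f).get('csp-report', {})
    except (ValueError, OSError, AttributeError):
        continue  # unreadable or not a json object: skip
    print('{0}: {1} blocked {2}'.format(
        name,
        report.get('document-uri', '?'),
        report.get('blocked-uri', '?')))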

posted in: www

introduction

classically, revocation of certificates was accomplished with certificate revocation lists (crls). the idea was that browsers regularly download crls from the certificate authorities (cas) and check whether certificates they see are on the list. this doesn't scale well, though: nowadays there are many cas trusted by browsers in their default configuration, and crls tend to get huge.

a better solution is the online certificate status protocol (ocsp): a browser, when encountering a new certificate, asks the ca's ocsp server (its url is contained in the certificate) whether the certificate is still valid. this has several downsides as well. first, ocsp servers are not always reliable. if a browser cannot connect to one (or doesn't get a reply), what should it do? deny access to the site? besides that, there's another large downside: privacy. the ocsp server learns which page you are visiting, because the browser tells it by asking whether that page's certificate is valid.

ocsp stapling was invented to improve upon this: the idea is that the webserver itself asks the ocsp server for the status of its certificate, and delivers the answer together with the certificate to the connecting browser. as the ocsp response is signed by the ca, the browser can verify that the response is valid. also, ocsp responses expire much sooner than certificates, so if a certificate is revoked, existing ocsp responses will only stay valid for a couple more days.
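by the way, fetching an ocsp response is something you can also do by hand with the openssl command line tool. a sketch (in openssl 1.1.0 syntax; with 1.0.x the header would be given as -header Host ocsp.example-ca.org, and all file names and the responder url are placeholders):

# ask the ca's ocsp responder for the status of cert.pem, and store
# the der-encoded response in cert.ocsp-resp
openssl ocsp -issuer chain.pem -cert cert.pem \
    -url http://ocsp.example-ca.org/ -header Host=ocsp.example-ca.org \
    -no_nonce -respout cert.ocsp-resp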

this is pretty good already, except that a malicious webserver could simply not send the ocsp response with its certificate. if a browser cannot contact the ocsp server itself, it has no way to know whether the certificate is revoked or not. to overcome this, ocsp must-staple was invented: a flag in the certificate itself which says that the certificate is only valid together with a valid and good ocsp response. so if a browser encounters a certificate with this flag and the webserver isn't stapling an ocsp response, the browser knows that something is fishy.

unfortunately, there are some downsides. the most common webservers for linux, apache and nginx, while having ocsp stapling support, do in some situations send replies without a stapled ocsp response. if the certificate has the ocsp must-staple flag set, those replies result in error pages shown in browsers. and that's something you really want to avoid: visitors of your page thinking there's something bad happening.

fortunately, at least for nginx, you can specify a file containing an ocsp response directly with the ssl_stapling_file directive. unfortunately, you then have to make sure that there is always a good and valid ocsp response in that place, and reload nginx whenever the response is updated. other programs allow specifying an ocsp response in a similar way, such as exim with its tls_ocsp_file directive, and thus have the same problem. to solve this problem, i've started creating ocsp bot:

ocsp bot

ocsp bot is a python script which should be called frequently (as in: once per hour or so) and which checks a set of x.509 certificates to keep up-to-date ocsp responses around. if a current ocsp response will expire soon, or isn't there at all, it tries to get a new one. it only copies a new response to the correct place if the response is valid and good. calling it frequently ensures that in case getting a new response fails, it retries every hour (or so) until a good and valid response has been obtained. user intervention is thus only necessary if the process fails several times in a row.
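the freshness check at the heart of this is simple. ocsp bot itself shells out to the openssl binary, but the idea can be sketched with the python cryptography library (the helper and the three-day threshold mirror the minimum_validity setting from the configuration further down; they are my illustration, not ocsp bot's code):

import datetime

from cryptography.x509 import ocsp

def needs_renewal(response_path, minimum_validity=datetime.timedelta(days=3)):
    # a response that is missing or unparseable must be replaced
    try:
        with open(response_path, 'rb') as f:
            response = ocsp.load_der_ocsp_response(f.read())
    except (OSError, ValueError):
        return True
    # a response that isn't good must be replaced as well
    if response.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
        return True
    # renew when the remaining validity drops below the threshold
    remaining = response.next_update - datetime.datetime.utcnow()
    return remaining < minimum_validity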

ocsp bot signals with its exit code whether responses have been updated, which allows reloading/restarting the corresponding services so they pick up the new responses.
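on the nginx side, pointing at a response file maintained by ocsp bot could look like this (a sketch; the path matches the ocsp_folder and ocsp settings from the configuration below, with example.com as a placeholder):

ssl_stapling      on;
ssl_stapling_file /var/www/ocsp/responses/example.com.ocsp-resp;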

you can install ocsp bot with pip install ocspbot from pypi.

integration with ansible

i'm using ansible to configure my server. to copy certificates and obtain ocsp responses, i'm using a custom role.

the ansible tasks for the role are as follows. ocsp bot is installed in /var/www/ocsp:

- name: Create OCSP log folder
  file: dest=/var/www/ocsp/logs state=directory
- name: Create OCSP response folder
  file: dest=/var/www/ocsp/responses state=directory
- name: Install pyyaml
  pip: name=pyyaml
- name: Install OCSP response utility
  copy: src=ocspbot.py dest=/var/www/ocsp/ocspbot.py mode=0755
  # ocspbot.py is ocspbot/__main__.py from https://github.com/felixfontein/ocspbot/
- name: Install OCSP bash script
  template: src=ocspbot.sh.j2 dest=/var/www/ocsp/ocspbot.sh mode=0755
- name: Install OCSP response utility configurations
  template: src=ocspbot.yaml.j2 dest=/var/www/ocsp/ocspbot-{{ item.key }}.yaml
  with_dict: "{{ certificates }}"
- name: Install OCSP response cronjob
  cron: name="Update OCSP responses" hour=* minute=0 job=/var/www/ocsp/ocspbot.sh state=present

the variable certificates is defined as follows:

certificates:
  nginx:
    domains:
    - example.com
    - example.net
    reload:
    - nginx
    key_owner: root
    key_group: root
    key_mode: "0400"
  mailserver:
    domains:
    - mail.example.com
    reload:
    - dovecot
    - exim
    key_owner: root
    key_group: exim
    key_mode: "0440"

the template for ocspbot.sh:

#!/bin/bash
RC=0
{% for name, data in certificates|dictsort %}

# Renew OCSP responses for {{ name }}
/var/www/ocsp/ocspbot.py /var/www/ocsp/ocspbot-{{ name }}.yaml
RESULT=$?
if [ $RESULT -gt 0 ] && [ $RESULT -lt 128 ]; then
{%   for service in data.reload %}
    systemctl reload {{ service }}
{%   endfor %}
elif [ $RESULT -ne 0 ]; then
    # exit codes are unsigned in the shell: a "negative" exit code
    # from a python script shows up as a value of 128 or above
    RC=1
fi
{% endfor %}

exit $RC

the template for the configuration yaml files:

make_backups: True

minimum_validity: 3d
minimum_validity_percentage: 42.8

ocsp_folder: /var/www/ocsp/responses
output_log: /var/www/ocsp/logs/{{ item.key }}-{year}{month}{day}-{hour}{minute}{second}.log

domains:
{% for domain in item.value.domains|sort %}
  {{ domain }}:
    cert: /var/www/certs/{{ domain }}.pem
    chain: /var/www/certs/{{ domain }}-chain.pem
    rootchain: /var/www/certs/{{ domain }}-rootchain.pem
    ocsp: {{ domain }}.ocsp-resp
{% endfor %}

the certificates are copied with the following ansible tasks:

- name: copy private keys
  copy: src=keys/{{ item.1 }}.key dest=/var/www/keys/{{ item.1 }}.key owner={{ item.0.value.key_owner }} group={{ item.0.value.key_group }} mode={{ item.0.value.key_mode }}
  with_dependent:
  - "certificates"
  - "item.0.value.domains"
  notify: update OCSP responses
- name: copy certificates
  copy: src=keys/{{ item.1 }}{{ item.2 }} dest=/var/www/certs/{{ item.1 }}{{ item.2 }} owner=root group=root mode=0444
  with_dependent:
  - "certificates"
  - "item.0.value.domains"
  - '["-rootchain.pem", "-fullchain.pem", "-chain.pem", ".pem"]'
  notify: update OCSP responses

(here, the dependent loop lookup plugin is used.)

the handler update OCSP responses is defined as follows:

- name: update OCSP responses
  command: /var/www/ocsp/ocspbot.sh
  register: result
  failed_when: result.rc != 0
  notify:
  - reload nginx
  - reload exim
  - reload dovecot

i've been using this setup for some weeks now, and it seems to work fine. so far, i'm not using ocsp must-staple certificates (except for some test subdomains). if everything keeps working fine for some time, i'll switch to ocsp must-staple certificates.

today, i've added two new plugins to nikola: sidebar and static tag cloud. these, together with filetreesubs, a doit-based file tree synchronization and text substitution tool, allow using nikola to create a more dynamic-looking blog with a sidebar showing current information, without the need to rebuild every single page every time something changes (which can take a very long time for a large blog such as spielwiese).

the plugins create html fragments and, for the tag clouds, css files, which have to be included in all generated blog pages. one way to do this would be javascript, but that wouldn't yield a proper static blog as i imagine it: the sidebar and the tag cloud should always be there, and not depend on javascript being enabled. (i myself use noscript, and javascript in my browser is off by default.)

for spielwiese, a small tag cloud is created (one per language) and included in the sidebar. a large tag cloud is created and included in the tag overview page (chde version). also, a small and large place cloud is created and included in the sidebar and place overview page, respectively.

the configuration for the static tag cloud plugin looks as follows:

RENDER_STATIC_TAG_CLOUDS = {
    # Small tag cloud
    'small': {
        'name': 'tcs-{0}',
        'filename': 'tagcloud-{0}.inc',
        'taxonomy_type': 'tag',
        'style_filename': 'assets/css/tagcloud-{0}-small.css',
        'max_number_of_levels': 15,
        'max_tags': 40,
        'minimal_number_of_appearances': 5,
        'colors': ((0.4,0.4,0.4), (1.0,1.0,1.0)),
        'background_colors': ((0.133, 0.133, 0.133), ),
        'border_colors': ((0.2, 0.2, 0.2), ),
        'font_sizes': (6, 20),
        'round_factor': 0.6,
    },
    # Large tag cloud
    'large': {
        'name': 'tcl-{0}',
        'filename': 'tagcloud-{0}-large.inc',
        'taxonomy_type': 'tag',
        'style_filename': 'assets/css/tagcloud-{0}-large.css',
        'max_number_of_levels': 100,
        'minimal_number_of_appearances': 3,
        'colors': ((0.25,0.25,0.25), (1.0,1.0,1.0)),
        'background_colors': ((0.133, 0.133, 0.133), ),
        'border_colors': ((0.2, 0.2, 0.2), ),
        'font_sizes': (8, 35),
        'round_factor': 0.3,
    },
    # Small place cloud
    'places-small': {
        'name': 'pcs-{0}',
        'filename': 'placecloud-{0}.inc',
        'taxonomy_type': 'place',
        'style_filename': 'assets/css/placecloud-{0}-small.css',
        'max_number_of_levels': 15,
        'max_tags': 40,
        'minimal_number_of_appearances': 3,
        'colors': ((0.4,0.4,0.4), (1.0,1.0,1.0)),
        'background_colors': ((0.133, 0.133, 0.133), ),
        'border_colors': ((0.2, 0.2, 0.2), ),
        'font_sizes': (6, 20),
        'round_factor': 0.6,
    },
    # Large place cloud
    'places-large': {
        'name': 'pcl-{0}',
        'filename': 'placecloud-{0}-large.inc',
        'taxonomy_type': 'place',
        'style_filename': 'assets/css/placecloud-{0}-large.css',
        'max_number_of_levels': 100,
        'minimal_number_of_appearances': 2,
        'colors': ((0.25,0.25,0.25), (1.0,1.0,1.0)),
        'background_colors': ((0.133, 0.133, 0.133), ),
        'border_colors': ((0.2, 0.2, 0.2), ),
        'font_sizes': (8, 35),
        'round_factor': 0.3,
    },
}

the generated css file for the large tag cloud can be found here; the generated html fragments aren't uploaded, as filetreesubs doesn't copy them to the output folder.

the configuration for filetreesubs looks as follows:

source: output-spielwiese
destination: final-spielwiese

# Substitutions
substitutes:
  # For all HTML pages: include sidebar
  '.*\.html':
    '<!-- include:sidebar-en -->':
      file: sidebar-en.inc
    '<!-- include:sidebar-chde -->':
      file: sidebar-chde.inc
  # For specific pages, also include tag/place clouds
  'tag/index.html':
    '<!-- include:tagcloud:en:large -->':
      file: tagcloud-en-large.inc
  'place/index.html':
    '<!-- include:placecloud:en:large -->':
      file: placecloud-en-large.inc
  'chde/schlagwort/index.html':
    '<!-- include:tagcloud:chde:large -->':
      file: tagcloud-chde-large.inc
  'chde/ort/index.html':
    '<!-- include:placecloud:chde:large -->':
      file: placecloud-chde-large.inc

# The substitution chains allow the sidebar to include
# the small tag and place clouds.
substitute_chains:
- template: sidebar-en.inc
  substitutes:
    '<!-- include:tagcloud:en -->':
      file: tagcloud-en.inc
    '<!-- include:placecloud:en -->':
      file: placecloud-en.inc
- template: sidebar-chde.inc
  substitutes:
    '<!-- include:tagcloud:chde -->':
      file: tagcloud-chde.inc
    '<!-- include:placecloud:chde -->':
      file: placecloud-chde.inc

# Create index.html files in all folders which don't have one yet.
create_index_filename: index.html
create_index_content: |
  <!DOCTYPE html>
  <html lang="en">
    <head>
      <title>there's nothing to see here.</title>
      <meta name="robots" content="noindex">
      <meta http-equiv="refresh" content="0; url=..">
    </head>
    <body style="background-color:black; color:white;">
      <div style="position:absolute; top:0; left:0; right:0; bottom:0;">
        <div style="width:100%; height:100%; display:table;">
          <div style="display:table-cell; text-align:center; vertical-align:middle;">
            there's nothing to see here. go <a href=".." style="color:#AAA;">here</a> instead.
          </div>
        </div>
      </div>
    </body>
  </html>

# Everything is UTF-8.
encoding: utf-8

# I want to be able to run different things in parallel.
doit_config:
  dep_file: '.doit-spielwiese-subs.db'

this allows me to use html-style comments such as <!-- include:tagcloud:en --> to indicate where the html fragments should be included. i also create index files in folders which would otherwise be empty, such as /photos/ (see here for how it looks). the result is, from my point of view, a much more polished version of the blog than the raw version produced by nikola without postprocessing.

this should give you an idea of how to produce a similar result with nikola, my plugins and filetreesubs.

posted in: www

yesterday, i read that hosttech, my dns hoster and registrar, finally supports dnssec. actually, they have already been supporting it since december 24th, according to an announcement i obviously missed.

anyway, setting it up went smoothly, especially with the instructions from that blog post. only fontein.de was a bit more tricky, since i also had to add the public key; otherwise denic didn’t like the record.

now the only thing missing is for cablecom to actually provide a dnssec-capable dns resolver.

posted in: computer

three days ago, let’s encrypt started their public beta. for those of you who don’t know: let’s encrypt is a certificate authority issuing free certificates for protecting https connections.

this is awesome!

for one, this allows me to get some “real” certificates (as opposed to my self-signed ones) without paying a large sum of money per year (i’m using quite a lot of subdomains of fontein.de and two other domains, which adds up to quite some sum even when using cheap resellers of resellers of resellers).

then, their goal is to automate the whole process as much as possible. so instead of a lot of manual work (mostly filling out forms, handling payment of fees, reacting to emails or domain challenge requests, etc.) it should be possible to run one command, maybe even as a cronjob, to get a (renewed) certificate for a domain or a set of domains.

on thursday, when the beta officially started, i tried out the official client. as mentioned already by lots of others, it has a serious downside: it is a huge python program which needs to be run as root. (it doesn’t necessarily have to run on the webserver, but in that case you cannot automate things anymore.) but there were already alternatives: a static website telling you what to do and doing some computations in javascript, and a tiny python client. (both are by daniel roesler.)

that’s already much better, but still not what i want, as it is hard to automate when you don’t want to run it on the webserver itself. i prefer something which can run somewhere else and which can be integrated into an orchestration tool like ansible. so i took daniel roesler’s code (including a python 3 patch by collin anderson) and converted it into a more modular tool which splits up the process, so that with some more scripting it can easily be driven from a remote machine. you can find the result on github. i also created an ansible role which makes it simple to generate keys and certificate signing requests and to obtain complete certificates from let’s encrypt with ansible; that project can also be found on github. i’m using it in production for my personal webserver: as a result, you can now look at spielwiese without having to accept my self-signed certificate! maybe others will find this useful as well.

this weekend, i spent a bit of time pimping my nginx tls/ssl configuration for https. my goal was to achieve a much better score on the ssl labs ssl server test. well, my top score will never exceed T due to my self-signed certificate, but fortunately the test also shows the top score ignoring trust issues. and there, i finally got an A!

of course, there’s always a downside. since certain older clients are incapable of dealing with modern ciphers and protocols (like tls 1.2), you either have to support cipher/hash/… combinations which aren’t exactly secure, or drop support for these clients. if you want a good score from the ssl server test, you have to drop support for some clients.

in my case (and after doing quite some experiments), i decided to drop support for:

  • android 2.3.7 (and similar): no 256 bit ciphers, and no support for tls 1.1 or higher;
  • internet explorer 6 and 8 under windows xp: not even tls 1.0 (ie 6), no tls 1.1 or higher (ie 8), and no 256 bit ciphers;
  • all kinds of java (java 6u45, 7u25, 8b132): while java 8 finally supports tls 1.2 (the others only go up to tls 1.0), there are no 256 bit ciphers.

all other clients tested on the ssl server test have no problem connecting with my config, and all result in 256 bit ciphers with forward secrecy.

the total result is 100% for key exchange and ciphers, and 95% for protocol support (i guess supporting tls 1.0 is the problem, but that’s needed for quite some clients). you can see the result here. i probably would have gotten 100% for the certificate, too, if it had not been self-signed (by my own ca) but issued by something “trustworthy”.

to achieve this, i used 4096 bit rsa keys and a 4096 bit dh setting. generating the server certificate (with the rsa keys) is pretty standard, but something i haven’t seen very often is the generation of the diffie-hellman key exchange parameters (in fact, i first saw it here):

openssl genpkey -genparam -algorithm DH -out dhparam.pem -pkeyopt dh_paramgen_prime_len:4096

this generates a diffie-hellman setup with a 4096 bit prime. a smaller prime is fine for most scenarios, but if you’re paranoid enough, 4096 bits is a good start :-) note that the prime’s bitlength has a direct impact on the server (and client) load when a new tls/ssl connection with forward secrecy is initiated: the longer the prime, the slower the handshake. (the handshake is superlinear in the number of bits, and probably closer to quadratic than to the complexity-theoretic optimum of O(n^(1+ε)) for every ε > 0.) for more modern clients, though, an elliptic curve based setting will be used, which is much more efficient since it uses much smaller finite fields.

anyway, here’s the config:

ssl_session_cache shared:SSL:5m;
ssl_session_timeout 5m;

ssl_dhparam /etc/nginx/dhparam.pem;

ssl_protocols TLSv1.2 TLSv1;
ssl_prefer_server_ciphers on;
ssl_ciphers "-ALL !ADH !aNULL !EXP !EXPORT40 !EXPORT56 !RC4 !3DES !eNULL !NULL !DES !MD5 !LOW ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES256-SHA384 ECDHE-RSA-AES256-SHA384 DHE-RSA-AES256-SHA256 ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA DHE-RSA-AES256-SHA";

this leads to the following list of ciphers:

prio  ciphersuite                  protocols      pfs_keysize
1     ECDHE-RSA-AES256-GCM-SHA384  TLSv1.2        ECDH,P-256,256bits
2     DHE-RSA-AES256-GCM-SHA384    TLSv1.2        DH,4096bits
3     ECDHE-RSA-AES256-SHA384      TLSv1.2        ECDH,P-256,256bits
4     DHE-RSA-AES256-SHA256        TLSv1.2        DH,4096bits
5     ECDHE-RSA-AES256-SHA         TLSv1,TLSv1.2  ECDH,P-256,256bits
6     DHE-RSA-AES256-SHA           TLSv1,TLSv1.2  DH,4096bits

(courtesy of cipherscan.)

i’d like to also use http strict transport security, but that won’t work well with a self-signed certificate, thanks to its specification (see point #2 here). also, ocsp stapling makes no sense with a self-signed certificate and without a proper ca. finally, i’d like to use public key pinning in the future, but that’s rather experimental at the moment.
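for reference, once a trusted certificate is in place, enabling strict transport security in nginx would be a one-liner (the max-age of one year is a common choice, not a recommendation from this post):

add_header Strict-Transport-Security "max-age=31536000" always;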

one thing i’m missing quite badly is proper elliptic curve support. by that i mean good (non-nist) curves, like the ones listed as “safe” on this page, especially the higher-security ones (like curve41417, ed448-goldilocks, m-511 and m-521). unfortunately, i’m afraid it will take a long time until we can use them with tls: not only do they first have to get into a standard, the standard then has to be implemented by clients, and enough clients must be able to use it. consider for example tls 1.2, which was defined in august 2008. while all current browsers finally support it (that hasn’t been the case a couple of years ago, similar to tls 1.1, which has been around since april 2006), it took quite some time, and there are still a lot of older browsers out there which don’t support it. just consider the many smartphones produced in the last years with android 4.3 and older (which includes my fairphone), which only have tls 1.0 support. or safari 6 included with osx 10.8, openssl 0.9.8, internet explorer mobile on windows phone 8.0, internet explorer up to version 10, and quite some search engine bots.

note that in my above config, the elliptic curve used for diffie-hellman is p-256, a nist curve. it’s one of those nsa-generated curves, and it’s not exactly optimal (search for p-256 here). unfortunately, with current tls, there’s not much you can do about this… too bad.

posted in: computer

from today on, i’m enforcing https for (almost) all my web pages. i’ve added an automatic redirect which sends all http:// pages to their corresponding https:// pages.
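the redirect itself isn’t shown in this post; with nginx, for example, a typical way to do it is a catch-all server block for port 80 (a sketch):

server {
    listen *:80;

    # redirect every plain-http request to the same url over https
    return 301 https://$host$request_uri;
}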

despite the tons of problems ssl/tls has (essentially, everything less than tls 1.2 is unsafe, but only very few browsers actually support tls 1.2, even though it was already standardized in 2008), it is better than using no encryption at all.

and yes, i know that “just” having a self-signed certificate is only partially helpful. but i don’t have a better solution at the moment, as i don’t want to dump tons of money into cas which i don’t really trust anyway. (maybe i’ll change my mind eventually. but not right now.) so for the moment, you have to accept my self-signed certificate, whose sha-1 fingerprint is 69:02:33:1D:F7:E3:9C:DA:D2:7D:9E:1D:4A:C6:40:99:A3:F8:B2:58, and whose md5 fingerprint is E5:DA:7D:4E:11:34:20:BD:7C:9E:3B:CD:E1:C9:6A:1B. you can compare them in firefox, for example, by clicking the padlock, then “more information…” and “view certificate”; in chromium/chrome, click the padlock and then “certificate information”.
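you can also check the fingerprints from the command line instead of the browser. a sketch:

# fetch the server certificate and print its sha-1 fingerprint
echo | openssl s_client -connect fontein.de:443 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha1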

posted in: computer

as you might have noticed, musikwiese has its categories displayed as a tree. i did this using the wp-dtree plugin. this is pretty nice, and i thought that i might also be able to use it for spielwiese’s archives list; after all, that list contains around 40 entries. so it would be nice to have a tree with the years as top-level nodes, which can be expanded to get a list of months. unfortunately, wp-dtree doesn’t do this. so i started programming myself, creating a small plugin which outputs the code wp-dtree should create to display such an archive. and it seems to work fine! i also included a noscript fallback for people with javascript disabled; in that case, the “classical” archives will be displayed.
if you are interested in my plugin, ask me and i will send it to you, or maybe i’ll also upload it somewhere here.