Eclectic Dreams

A Web Design and Development Blog

Now with HTTPS

December 8th, 2015

So, if you’ve been paying attention over the last few years, you’ll have noticed more of the web going encrypted. This is a good thing. It keeps your data more secure and stops proxies and malicious wifi providers from eavesdropping on your traffic or injecting ads into your content.

Of course, for those without money to burn on expensive certificates, there was always one blocker to going https: the cost. Even the cheaper certificates cost about three times as much as the domain name itself. Plus there were the notoriously headachey setup steps for getting a secure certificate working on your site.

All that changed last year, when Let’s Encrypt announced their service: free certificates and a simple client you could use to set them up. Pretty much the ideal solution, if they could pull it off, and with board members from the likes of Cisco, the E.F.F. and Mozilla, they had the clout to do it. It had been in closed beta since the summer, and at the start of December it went into public beta.

So I decided to give it a whirl. I’ve always left SSL config to somebody else before, so this should be “fun”.

Getting started

First off, you need ssh/console access to your server and the ability to install software. I have a CentOS server at Digital Ocean (who I recommend, by the way) and can go in and switch to root to install stuff.

The instructions on the Let’s Encrypt docs are pretty thorough. You’ll probably need to install some dependencies with yum (or apt-get or whatever). You might need to do:

sudo yum install gcc libffi-devel python-devel openssl-devel

Running ./letsencrypt-auto as root should sort these for you, but I found that my server’s memory and CPU were a little low for some of the compilation steps, particularly the python cryptography package. So I waited for a lull before installing that one manually with:

pip install cryptography

Also, my system had an older python install that grumbled about a few things and required the --debug flag to run the client.

Installing the certs

Although Let’s Encrypt supports sorting out the server setup for some platforms and web servers, my combo of CentOS and nginx wasn’t covered, so I needed to create a cert in the client and install it manually. I needed my web root directory and domain; the command looked like this:

./letsencrypt-auto certonly --webroot -w <webroot> -d <domain> --debug

This popped up a query for some info (email and so on), then quickly sorted the certs and told me where it put them. Simple.
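For reference, the client drops everything under /etc/letsencrypt/live/<domain>/ as symlinks to the latest versions. Assuming the standard client defaults, the layout looks like this:

```shell
ls /etc/letsencrypt/live/<domain>/
# cert.pem      - the certificate on its own
# chain.pem     - the intermediate CA certificate(s)
# fullchain.pem - cert.pem plus chain.pem; this is what nginx wants
# privkey.pem   - the private key; keep this one private
```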

Setting up nginx was a case of adding an appropriate virtual server and pointing it at the cert/key combo:

server {
    listen 443;
    server_name <domain>;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
}

This required a restart of nginx. I went to https://<domain> and things seemed to work.

Post install massaging

So you’ve got your secure cert. What next? Well, I decided to check the connection against the SSL test tool provided by Qualys, to be sure it was secure enough and up to scratch.

Oh dear, only grade C. Seems there’s some more work to do post-install.

Fortunately the report gives some advice on what to fix. For me, the problem was out-of-date protocols and ciphers still being available.

I set the protocols in the nginx config with:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

then the ciphers with:

ssl_ciphers <cipher list>;
ssl_prefer_server_ciphers on;

That last bit came from the detail over here, which also recommended setting up a “strong DH group” by running:

openssl dhparam -out dhparams.pem 2048

and then pointing my server at the file with an update to nginx config:

ssl_dhparam /etc/nginx/dhparams.pem;
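Pulled together, the TLS-hardening part of the server block ends up looking something like the fragment below. The cipher list here is just the example from the nginx documentation, not necessarily the one I used:

```nginx
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
```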

All that done, my server went from a C rating to an A, and avoids lots of known exploits of older SSL technologies. After checking it all worked, I set up an http to https redirect in the old http server config:

    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }

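As an aside, the same redirect can be done with a dedicated plain-http server block, which is the approach the nginx docs tend to recommend over if blocks. A sketch, with <domain> as before:

```nginx
server {
    listen 80;
    server_name <domain>;
    return 301 https://$server_name$request_uri;
}
```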
And that’s it. If you can view this page, you can see it’s working… You can see it right?


The web will always be a moving target

June 10th, 2015

The future is already here — it’s just not very evenly distributed. – William Gibson

The web moves fast. Faster every year, what with evergreen browsers across the board. It’s certainly a far cry from the bad old days, when we went five years between Internet Explorer updates. It would be convenient to think that because people’s browsers are regularly updating, the web itself has settled into a reliable state.

Oh yes, a quick check of your web stats may show that IE8 is the new IE6, and even that is on its way out. We’re nearly at a stage where there’s a baseline of CSS 3 available, which for those of us who remember trying to get CSS working at all in Netscape 4 is a huge shift.

But the only constant is change. Yesterday’s cutting edge is today’s common baseline. The web moves on with new browser APIs, new CSS and new HTML elements. HTML 5 becomes HTML 5.1, CSS gets its CSS 4 selectors and ECMAScript reaches version 7.

This stuff doesn’t arrive all at once though.

It arrives in dribs and drabs, with different browser development teams focusing on differing priorities. Chrome and Firefox have WebRTC already, but it’s still under development for IE, and who knows when it’ll hit Safari on mobile. Want to use it for a project? Go and check the support situation, and you’ll be hit by the most common conundrum in web development:

How do I make this work where I need it to?

This isn’t new. This has been going on since there were multiple browsers. From the days of trying to make DHTML work with IE and Netscape’s different layer models to the days of having Promises in some versions of Android on mobile, but not in Safari on iOS 7. The future is like the past, only there’s more of it.

Which brings me to the point. The web is a continually moving target. It probably changed in the time it took me to write this. If you work with web stuff you need to embrace this fact. It will be the only constant in your career. When I’m old and grey and building hypermedia virtual experiences in HTML 10, it’ll be no different, except for maybe some silver space-age jumpsuit and a dodgy prostate.

On the web, progressive enhancement is, and will always be, the methodology of choice. It makes your site robust to the shifting sands of the web front end. You don’t control your audience’s choice of browser, operating system, connection speed or device, nor their ability to interact with technology or understanding thereof. You don’t control the flow of new features to those browsers, or the priorities of their developers and organisations. You don’t get to decide whether a feature will be implemented well, buggily or partially.

All you can do is pick a good baseline, and enhance for those who have the shiny.

You do get to work on the most globally available, unpredictable, diversely interacted with communication platform in the world. Enjoy that.

Fun times with Appcache

August 20th, 2014

When we started work on our responsive web app for managing your library service, Soprano, we had planned offline support since day one. However, we waited to get the basic product launched before sorting out the offline side, as even two years ago it was known to be a difficult beast. We’d actually abandoned a previously attempted offline version of our catalogue due to the complexities of retrofitting appcache onto a nice RESTful site. So we allocated a majority of my time at the tail end of ’13 to the offline-ification.

Offline webapps are… let’s be honest, a great idea marred by a particularly bizarre implementation, poor documentation and more gotchas than I can count. If you’re looking for a solid intro to doing it right, see the great article series from the Financial Times. If you want to learn about some of the gotchas, most have been nicely documented at AppcacheFacts and A List Apart. Hopefully near-future tech like Service Workers will make the process less of a pain, but they are a while off…

This blog post is mostly a grab bag of stuff I haven’t seen documented elsewhere, and may help others beating the technology with a stick until it works.
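For context, the cache manifest itself is just a plain-text file listing what to cache, what must always hit the network, and what to fall back to. A minimal one looks something like this (filenames illustrative):

```text
CACHE MANIFEST
# v1 2014-08-20

CACHE:
/css/app.css
/js/app.js

NETWORK:
*

FALLBACK:
/ /offline.html
```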

Let’s start with the fun of SSL. Imagine you want to build and test your app locally over a connection as secure as the real-world one, so you create a self-signed certificate. Well, for the love of all that is good, make sure you properly install that certificate. Recent versions of Chrome and Firefox refuse to store an appcache from an invalid SSL cert. This includes certs that have expired, aren’t trusted, have mismatched domains, or basically anything that might trigger security warnings. And no, you can’t just click that nice Proceed Anyway button on the SSL warning screen; that won’t work. In Firefox it will die silently and not give you any debug info as to why the appcache download failed. Chrome and Safari will give you a little more info in the dev console. You need to properly install the certificate: on a Mac, installing it to your keychain covers Chrome and Safari; Firefox wants to use its own store, hidden away in the nest of preferences panes.
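For the record, generating the throwaway self-signed cert is the easy bit; it’s a one-liner (filenames and the CN are illustrative):

```shell
# Create a self-signed certificate and key, valid for a year,
# with no passphrase (-nodes), for a local test domain.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt \
  -days 365 -subj "/CN=localhost"
```

As above, though, the hard part isn’t generating it but getting each browser to actually trust it.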

Next, be careful of putting expires headers on, well, anything. You know how all the articles on cache manifests say you should never, ever cache the main manifest file, or the whole offline version gets permanently cached? Well, it turns out you have to be a bit careful with Firefox: if you add expires headers for right now or just in the past, the cache instantly expires and won’t run the full initial update/download. But be careful with Chrome and Safari too, as they do need expires headers or the manifest may get stuck. Yeah, I swallowed my pride and did some browser sniffing for that. Oh, and make sure none of the files referenced in your manifest have past or very short expiries, as Firefox respects those and downloading the manifest can fail because of those files.

If you are using appcache, the best approach is to have a separate webpage referencing the manifest and wrap it in an iframe. The main advantage of this is that it ringfences the cache. Normally, when you add an appcache directly to a page, every (matching) GET request you make via AJAX is added to the cache. Leave a one-page app open for a while and watch those GETs rack up and fill the cache. This also means appcache has a built-in DDOS mode: when you expire a manifest, every file in it gets re-downloaded right then. Using an iframe gives you more control over what gets cached.
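The iframe trick sketched out: only the stub page carries the manifest attribute, and the main app embeds it (filenames illustrative):

```html
<!-- cache-stub.html: the only page that points at the manifest -->
<!DOCTYPE html>
<html manifest="/offline.appcache">
  <head><title>cache stub</title></head>
  <body></body>
</html>

<!-- in the main app page, a hidden iframe pulls in the appcache -->
<iframe src="/cache-stub.html" style="display:none"></iframe>
```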

That said, if you want IE10 on Win 7 to work with it, iframes might not be the way to go. That particular combo seems to have more trouble actually detecting going offline. IE11 seems to handle it fine.

One fun thing I haven’t seen much mention of is the concept of foreign files in appcache. At one point in development we accidentally ended up with two manifest files in different iframes embedded on the same page (due to a mis-built bit of JS). This causes all kinds of fun as you try to work out which version of a file you have between the two appcaches, and which cache is the one being used for the page you are viewing. So if you see a file flagged as foreign in the Chrome debug tools, that’s likely what’s happening.

A note on dev tooling. Chrome and Safari tell you loads about what’s going on with an appcache: what’s downloaded, where it failed, and so on. Their dev tools also let you know if something’s origin is the cache rather than a real request. The url chrome://appcache-internals/ makes flushing bad or stale caches easy. You’ll need to do this lots, since once you add an appcache, every http error will go to the fallback cache page; that’s fun when debugging issues outside the front-end…

Firefox gives the most unusable debug tool for appcache, hidden somewhere off in the depths of a sub-dev-tools command line. Seriously, the Firefox tool’s appcache validate gives really obtuse errors or just fails. It does allow you to quickly flush an appcache, which can be handy, but that’s about all it’s good for. I’m seriously hoping this gets improved before this stuff gets superseded by the new shiny service worker caches.

Anyway, enough negativity; here’s the positive: we have a live system using appcache for offline functionality, which is pretty nifty for a responsive back-office application often used where connectivity is unreliable. Lots of dev headache for something really useful to people doing a job on the floor.

Who says enterprise software can’t do HTML5?