Eclectic Dreams

A Web Design and Development Blog

Follow People Not Brands

December 31st, 2017

I have a rule on social media. Well, I say rule, more of a guideline or a guiding principle. I’ve occasionally mentioned it in passing to folks at meet ups, and through my 10 years on assorted platforms I’ve found it invaluable. It’s this:

Follow people not brands.

Occasionally some social media wanker guru will claim that Twitter or Facebook is for ENGAGING THE BRANDS TM, but really I much prefer people. People make the world go round. Even the best corporate accounts have that stilted, half-human, slightly precarious uncanny valley feel that makes them difficult to parse, filter and give tuppence about. Largely because they are ultimately run by somebody who is heavily restraining themselves in order to adhere to a corporate hymn sheet and not get the company in trouble. Even the nicer ones have that slightly sociopathic grimace of somebody pretending really hard to be something they’re not.

So anyway, if I find a project or company I like I tend to follow somebody involved who works on the part of it I think is cool. The main bonus is that they tend to talk in a more natural way around a subject, wander off on interesting tangents and lead you off to other places. This is cool (possibly the coolest bit about social media). They also act as a great filter, because if the project/company/organisation does do something genuinely interesting, they’ll likely retweet it anyway – so you still find out the best bits of the corporate account without being subjected to on-brand waffle.

Oh sure, everybody on Twitter is putting on some kind of mask, how they want to be seen as much as how they are. That’s another post entirely. But the point here is that real people are way better at filtering the corporate crud out of the web than any algorithm.

Now with HTTPS

December 8th, 2015

So, if you’ve been paying attention over the last few years, you’ll have noticed more of the web going encrypted. This is a good thing. It keeps your data more secure and stops proxies and malicious wifi providers eavesdropping or injecting ads into your content.

Of course, for those who don’t have money to burn on expensive certificates, there was always one big blocker to going https: the cost. Even the cheaper certificates to secure your site cost about three times as much as the domain name itself, and then there were the notoriously headachey setup steps for getting a secure certificate working on your site.

All that changed last year, when Let’s Encrypt announced their service: free certificates and a simple client you could use to set them up. Pretty much the ideal solution if they could pull it off, and with board members from the likes of Cisco, the EFF and Mozilla, they looked like they could. It’s been in beta since the summer, and at the start of December it went into public beta.

So I decided to give it a whirl. I’ve always left SSL config to somebody else before, so this should be “fun”.

Getting started

First off you need ssh/console access to your server and the ability to install software. I have a CentOS server at DigitalOcean (who I recommend, by the way) and can go in and switch to root to install stuff.
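In practice that just means something like the below (user and host being placeholders for your own):

ssh <user>@<your-server>
sudo -i    # switch to root for the install bits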

The instructions on the Let’s Encrypt docs are pretty thorough. You’ll probably need to install some dependencies with yum (or apt-get or whatever). You might need to do:

sudo yum install gcc libffi-devel python-devel openssl-devel

Running ./letsencrypt-auto as root should sort these for you, but I found that my server’s memory and CPU were a little low for some of the compiling steps, particularly the Python cryptography package it uses. So I waited for a lull before installing that one manually with:

pip install cryptography

Also, my system had an older Python install that grumbled about a few things and required using the --debug flag to run the client.

Installing the certs

Although Let’s Encrypt supports sorting out the server setup automatically for some platforms and web servers, my combo of CentOS and nginx wasn’t covered, so I needed to create a cert with the client and install it manually. All it needed was my web root directory and domain; the command looked like this:

./letsencrypt-auto certonly --webroot -w <webroot> -d <domain> --debug

This popped up a query for some info (email and so on), then quickly sorted the certs and told me where it put them. Simple.
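For reference, the client drops everything in a predictable spot under /etc/letsencrypt/live/, so a quick listing shows the files you’ll be pointing the web server at:

ls /etc/letsencrypt/live/<domain>/
cert.pem  chain.pem  fullchain.pem  privkey.pem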

Setting up nginx was a case of adding an appropriate virtual server and pointing it at the cert/key combo:

server {
    listen 443;
    server_name <domain>;
    ssl on;
    # the fullchain/privkey files are the ones letsencrypt-auto reported creating
    ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
}
This required a restart of nginx. I went to https://<domain> and things seemed to work.
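The restart itself is the usual dance; which of these you need depends on whether your CentOS box is running systemd:

sudo systemctl restart nginx    # CentOS 7 and later
sudo service nginx restart      # older CentOS releases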

Post install massaging

So you’ve got your secure cert. What next? Well, I decided to check the connection against the SSL Server Test tool provided by Qualys (ssllabs.com/ssltest) to be sure it was secure enough and up to scratch.

Oh dear, only grade C. Seems there’s some more work to do post-install.

Fortunately the report gives some advice on what to fix. For me it was out-of-date protocols and ciphers still being available, plus an older form of key exchange (the default Diffie-Hellman parameters).

I set the protocols in the nginx config with:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

then sorted the ciphers, telling the server to prefer its own ordering, with:

ssl_prefer_server_ciphers on;

That last bit came from the detail over here, which also recommended setting up a “strong DH group” by running:

openssl dhparam -out dhparams.pem 2048

and then pointing my server at the file with an update to nginx config:

ssl_dhparam /etc/nginx/dhparams.pem;
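Pulling those pieces together, the https server block ended up looking roughly like this (domain and file paths being placeholders for your own):

server {
    listen 443;
    server_name <domain>;
    ssl on;
    ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;
}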

All that done and my server goes from a C rating to an A, and avoids lots of known exploits of older SSL technologies. After checking it all worked, I set up an http to https redirect in the old http server config:

if ($scheme = http) {
    return 301 https://$server_name$request_uri;
}
And that’s it. If you can view this page, you can see it’s working… You can see it right?
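If you’d rather check from the command line than from a browser, something like either of these does the job (the domain being whichever one you’ve just set up):

curl -I https://<domain>                  # headers back with no certificate warnings is a good sign
openssl s_client -connect <domain>:443 -servername <domain> < /dev/null   # prints the certificate chain in use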


The web will always be a moving target

June 10th, 2015

The future is already here — it’s just not very evenly distributed. – William Gibson

The web moves fast. Faster every year, what with evergreen browsers across the board. It’s certainly a far cry from the bad old days, when we went 5 years between Internet Explorer updates. It would be convenient to think that because we live in a world where people’s browsers are regularly updating, we also live in a world where the web is in a reliable state.

Oh yes, a quick check of your web stats may show that IE8 is the new IE6, and even that is on its way out. We’re nearly at a stage where there’s a baseline of CSS 3 available, which for those of us who remember trying to get CSS working at all in Netscape 4 is a huge shift.

But the only constant is change. Yesterday’s cutting edge is today’s common baseline. The web moves on with new browser APIs, new CSS and new HTML elements. HTML 5 becomes HTML 5.1, CSS gets its CSS 4 selectors and ECMAScript reaches version 7.

This stuff doesn’t arrive all at once though.

It arrives in dribs and drabs, with different browser development teams focusing on differing priorities. Chrome and Firefox have WebRTC already, but it’s still under development for IE, and who knows when it’ll hit Safari on mobile. Want to use it for a project? Go and check the support tables and you’ll be hit by the most common conundrum in web development:

How do I make this work where I need it to?

This isn’t new. This has been going on since there were multiple browsers. From the days of trying to make DHTML work with IE and Netscape’s different layer models to the days of having Promises in some versions of Android on mobile, but not in Safari on iOS 7. The future is like the past, only there’s more of it.

Which brings me to the point. The web is a continually moving target. It probably changed in the time it took me to write this. If you work with web stuff you need to embrace this fact. It will be the only constant in your career. When I’m old and grey and building hypermedia virtual experiences in HTML 10, it’ll be no different, except for maybe some silver space-age jumpsuit and a dodgy prostate.

On the web, progressive enhancement is, and will always be, the methodology of choice. It makes your site robust to the shifting sands of the web front end. You don’t control your audience’s choice of browser, operating system, connection speed, device, ability to interact with technology or understanding thereof. You don’t control the flow of new features to those browsers, or the priorities of their developers and organisations. You don’t get to decide if a feature will be implemented well, buggily or partially.

All you can do is pick a good baseline, and enhance for those who have the shiny.

You do get to work on the most globally available, unpredictable, diversely interacted with communication platform in the world. Enjoy that.