When you see the "Your connection is not private" message in your browser, it's generally because the website presents a certificate your browser does not trust: it may have expired, it may be signed by a Certificate Authority (CA) unknown to or not trusted by your browser, or it may be self-signed.
Today, I ran into an issue with a self-signed certificate I use on a testing server, on a sub-domain that employs HTTP Strict Transport Security (HSTS): I was locked out. In both Firefox and Chromium, there was no way to add an exception, no way for me to manually verify the certificate, accept the risk, etc. etc.
Computer Says No
According to RFC 6797, section 12.1, browsers must not allow users to bypass certificate errors when connecting to a Known HSTS Host.
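To make the lockout concrete, here is a rough sketch, in Python, of how a browser turns a Strict-Transport-Security response header into a cached policy (parse_hsts is a hypothetical helper for illustration, not any real browser's API):

```python
# Sketch: how a browser might record an HSTS policy from a response header.
# parse_hsts is a hypothetical helper, not a real browser API.

def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header into a policy dict."""
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# Once this policy is cached, RFC 6797 section 12.1 says the user agent
# must hard-fail on any certificate error for the host (and, with
# includeSubDomains, for every sub-domain) until max-age expires --
# no clickable override.
policy = parse_hsts("max-age=31536000; includeSubDomains")
print(policy)  # {'max_age': 31536000, 'include_subdomains': True}
```

Note the includeSubDomains directive: it's exactly what turned my whole testing sub-domain into a no-exception zone.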
I'm all for warnings, and I'm very glad that Let's Encrypt exists (it's the only way for me to use third-party signed certificates), but I think I have a serious problem with this RFC. My computer, even running Free Software, is starting to give me attitude.
"Computer Says No", or the "Computer says no attitude", is the popular name given to an attitude in customer service in which the default response is to check with information stored or generated electronically and then make decisions based on that, apparently without using common sense, and showing a level of unhelpfulness whereby more could be done to reach a mutually satisfactory outcome, but is not.
Yes, it's possible to get a free certificate, but do I want to do that every single time? This brings me to a much more serious problem that we've been neglecting.
Privacy and Autonomy
Should we need permission to use secure HTTP? That's where we are heading.
Do we have to let the world know about our internal sub-domains (at least until we get wildcard certificates...)? Can my browser let me test things using fictional domains? And what about CRLs? Okay, we now have OCSP stapling (website admins, please enable this), but the server still has to query the CA regularly.
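For what it's worth, enabling OCSP stapling in nginx takes only a couple of directives. A sketch, with placeholder paths and addresses you would adapt to your own setup:

```nginx
# Serve a cached, CA-signed OCSP response with the TLS handshake,
# so each visitor's browser doesn't have to contact the CA itself.
ssl_stapling on;
ssl_stapling_verify on;

# Chain of the issuing CA, used to verify the stapled response
# (placeholder path).
ssl_trusted_certificate /etc/ssl/certs/ca-chain.pem;

# DNS resolver nginx uses to reach the OCSP responder (placeholder).
resolver 192.0.2.53;
```

This only moves the privacy leak from every visitor to the server, of course: someone still phones the CA on a regular basis.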
And if you're tempted to try to avoid encryption altogether (not safe, but sometimes you know what you're doing), hold your horses, because HTTP/2 might end up encrypted-only: in practice, browsers only speak it over TLS.
The web is moving closer to an all-encrypted environment, which is a very good thing. However, it also means that RFCs and common practice now entrench the current broken CA situation more than ever.
The bottom line is: We need permission from a Certificate Authority to publish on the world wide web. And that, that is a very bad thing.
What if we use..... DNS?
We could authenticate encryption certificates via DNS. I've rambled about it before (October 2010), and there's even an RFC draft with a very similar idea (March 2011). Sadly, it seems the idea did not stick (if it had, it would have killed the CA industry).
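That draft idea, which became DANE (RFC 6698), boils down to publishing the certificate, or a hash of it, in a DNSSEC-signed zone. A sketch of such a TLSA record, with a made-up hash value:

```
; _port._protocol.hostname          usage selector matching-type cert-data
; Usage 3 (DANE-EE) pins the server's own certificate -- even a
; self-signed one -- with no CA signature required.
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef )
```

With a record like this, my self-signed testing certificate would be just as verifiable as any CA-signed one, HSTS or not.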
Instead, we're continuing with the old way of doing things, adding more layers of complication to the most broken part of the standardised Internet. The irony is that someone implemented the concept of using DNS to verify certificates, but only to verify Certificate Authorities!!!
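That someone is the CAA record (RFC 6844): DNS is used merely to whitelist which authorities may issue certificates for a domain, while the certificates themselves remain CA-signed. A sketch:

```
; CAA: only the named CA may issue certificates for example.com.
; DNS vouches for the authority, not for the certificate.
example.com. IN CAA 0 issue "letsencrypt.org"
```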
The Open Web is Without Certificate Authorities
Some, like Netcraft and Sslstore, have raised concerns about phishing and fraud sites using certificates; they seem to believe it's the job of CAs to police the web. Sslstore even asked Let's Encrypt to block any sub-domain containing "paypal"...
Let's Encrypt has expressed concerns about this newly given role. The EFF are the good guys; regardless, they are now in a position of authority. They also check domains against the Google Safe Browsing API before issuing certificates...
In short, to run a website with encrypted access we must be allowed to by a Certificate Authority (and Google). This needs to end.