So I guess we are at that point where I’m going to have to turn on HTTPS for all the other ‘shared’ services I have. Old links should still work; just expect to be redirected.
Old machines, however… yeah I guess you guys just got cut out.
I just modified the crazy regex hell I have to patch up AltaVista / UTZOO, so at least that ought to be working. The source archive seems operational as well. When everything is working, HAProxy is freaking awesome.
> Really?
Yes, really.
This was announced in 2016 and will only get more aggressive, eventually ending in a full red warning by default (see https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html, which links to https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure).
I’m honestly not too upset about this, because John Average will be safer from people injecting exploit JavaScript on some public Wi-Fi.
It sucks for those of us using ancient stuff, even though the majority of services are impossible for older platforms to really deal with. I guess we still have Gopher.
Really, this is what MITM attacks (or adding crypto coprocessors to old machines) are for.
Downgrade the connection using a proxy inside your own network, then it’s fine.
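Something along these lines, as a rough Go sketch (made-up names and port, no hop-by-hop header handling, no caching, just the idea): the old machine uses it as a plain-HTTP proxy, and the proxy refetches everything over HTTPS before handing it back in the clear on your own network.

```go
// stripproxy.go - hypothetical sketch of a "downgrading" proxy for old browsers.
// The browser speaks plain HTTP to this proxy; the proxy refetches the same
// URL over HTTPS and returns the response in cleartext on the local network.
package main

import (
	"io"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Browsers configured with a proxy send the full URL in the request line.
	u := *r.URL
	u.Scheme = "https" // upgrade the outgoing leg regardless of what was asked for
	if u.Host == "" {
		u.Host = r.Host
	}

	req, err := http.NewRequest(r.Method, u.String(), r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	req.Header = r.Header.Clone()

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// Copy the upstream response back to the old browser as-is.
	for k, vv := range resp.Header {
		for _, v := range vv {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```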
I’m working on an update for Web Rendering Proxy that allows obsolete browsers to work with this HTTPS nonsense.
And something to deploy to containers…?
Yeah, I want to package WRP into a container, but for now there is no point, as it simply doesn’t work for most websites due to HTTPS. Unfortunately, implementing a proxy for this is non-trivial, as it requires the CONNECT method. I’m thinking perhaps I will revert WRP back to a web server rather than a proxy.
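For the record, the CONNECT part itself is just “dial host:port, answer 200, copy bytes both ways”; the painful part, as far as I understand it, is that everything inside the tunnel is encrypted, so a rendering proxy can’t rewrite any of it without MITMing the TLS with its own CA. A bare-bones Go sketch of only the tunnel (assumed names, no timeouts, no edge cases):

```go
// connect.go - bare-bones sketch of proxy CONNECT handling: the client asks
// for a raw tunnel to host:port and the proxy just copies bytes both ways.
package main

import (
	"io"
	"log"
	"net"
	"net/http"
)

func handleConnect(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodConnect {
		http.Error(w, "only CONNECT is handled here", http.StatusMethodNotAllowed)
		return
	}

	// Dial the requested host:port (e.g. "example.com:443").
	upstream, err := net.Dial("tcp", r.Host)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}

	// Take over the client's TCP connection from the HTTP server.
	hj, ok := w.(http.Hijacker)
	if !ok {
		upstream.Close()
		http.Error(w, "hijacking not supported", http.StatusInternalServerError)
		return
	}
	client, _, err := hj.Hijack()
	if err != nil {
		upstream.Close()
		return
	}

	// Tell the client the tunnel is up, then shovel bytes in both directions.
	client.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))
	go func() { io.Copy(upstream, client); upstream.Close() }()
	io.Copy(client, upstream)
	client.Close()
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handleConnect)))
}
```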
Yet they are doing nothing about so-called “security” software that purposely MITMs connections to inspect HTTPS traffic and completely breaks certificates. The end user has no clue where the host’s certificates are coming from in that case, and one can forget about EV certs working in that configuration.
To me that is the biggest surprise: that they don’t have their own CAs, along with other fingerprints, to at least know when .google.com is being tampered with.
They do – that’s how the DigiNotar breach was detected years ago. However, locally installed CAs override the detection, so as not to (completely) break the internet in US enterprises that MITM their own employees.
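For anyone curious, the pinning idea boils down to hashing the server’s public key and comparing it against a value you already trust; a toy Go sketch is below (placeholder pin value, no real trust logic). The reason a locally installed corporate root “wins” is that browsers deliberately skip the pin check for chains ending in a user-added CA.

```go
// pincheck.go - toy sketch of public-key pinning: connect, hash the server's
// SPKI, and compare against a known-good pin. The pin here is a placeholder,
// not Google's real one.
package main

import (
	"crypto/sha256"
	"crypto/tls"
	"encoding/base64"
	"fmt"
	"log"
)

const expectedPin = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=" // placeholder value

func main() {
	conn, err := tls.Dial("tcp", "www.google.com:443", &tls.Config{})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Hash the leaf certificate's SubjectPublicKeyInfo (HPKP-style pin-sha256).
	leaf := conn.ConnectionState().PeerCertificates[0]
	sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
	pin := base64.StdEncoding.EncodeToString(sum[:])

	fmt.Println("pin-sha256:", pin)
	if pin != expectedPin {
		fmt.Println("pin mismatch: something between you and the site is rewriting certificates")
	}
}
```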
Ye olde leader, Shitting Bull at its best (=google://)
(Ezekiel 25:17)
“Not explaining a drastic change is always less suspicious.”
(Silent Bob)
“Since 2018 AD, thou can trust in ze internets”
(Julius Caesar)
#shame #notmyssl #bringbacktcp80 #shameonaboutblank #aboutprank #aboutcohnfig #skynet #digitalapocalypse #hal9000 #nofate
Could you make the redirect from http:// to https:// conditional on some browser headers, e.g. only do it for modern browsers or ones known to complain about http://, and perhaps web crawlers? You could go the other way and not do the redirect for Lynx, Netscape, Internet Explorer 6 and earlier, etc. but that list seems like it would be longer.
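Something like this, as a very rough Go sketch (the User-Agent list is made up and nowhere near complete, purely to show the shape of it):

```go
// uaredirect.go - hypothetical sketch of a User-Agent-conditional HTTP->HTTPS
// redirect: suspected old browsers keep plain HTTP, everything else gets a 301.
package main

import (
	"log"
	"net/http"
	"strings"
)

// Illustrative only: a real deployment would need a much longer (and uglier) list.
var legacyMarkers = []string{"Lynx", "Mosaic", "MSIE 2", "MSIE 3", "MSIE 4", "MSIE 5", "MSIE 6"}

func isLegacy(ua string) bool {
	for _, m := range legacyMarkers {
		if strings.Contains(ua, m) {
			return true
		}
	}
	return false
}

func handler(w http.ResponseWriter, r *http.Request) {
	if isLegacy(r.UserAgent()) {
		// Placeholder: serve the plain-HTTP content here instead of redirecting.
		w.Write([]byte("plain HTTP response for the old-timers\n"))
		return
	}
	http.Redirect(w, r, "https://"+r.Host+r.URL.RequestURI(), http.StatusMovedPermanently)
}

func main() {
	log.Fatal(http.ListenAndServe(":80", http.HandlerFunc(handler)))
}
```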
The problem with that approach is that a man in the middle could spoof an older browser to prevent the HTTPS upgrade, even for modern browsers.
I think the real solution is putting stripping proxies between your old computers and the modern web, if you want to keep them on it.
I’ve been thinking of a reverse proxy into stuff, although that doesn’t help me on a fresh install of NT 4.0 with Internet Explorer 2.
The BBS is another option; I put it on VMware to see how it’ll perform. I suppose FTP may be a good fallback, to at least get a browser.
I really want a proxy that can connect over HTTPS… much like how stunnel can bridge that crypto gap for regular TCP.
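The stunnel model itself is tiny if you only need one destination at a time; here is a hypothetical Go sketch (placeholder addresses): plain TCP on the local side, TLS on the outgoing side. For HTTP specifically you’d still want something HTTP-aware, since a raw bridge like this only covers a single upstream host.

```go
// tcpbridge.go - hypothetical sketch of a stunnel-style bridge: listen on a
// plain TCP port and forward every connection to one upstream host over TLS,
// so the old client never has to speak the crypto itself.
package main

import (
	"crypto/tls"
	"io"
	"log"
	"net"
)

const (
	listenAddr = ":8080"           // plain side, for the old machine
	upstream   = "example.com:443" // TLS side (placeholder host)
)

func main() {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			// TLS on the outgoing leg only; the local leg stays cleartext.
			server, err := tls.Dial("tcp", upstream, &tls.Config{})
			if err != nil {
				log.Print(err)
				return
			}
			defer server.Close()
			go io.Copy(server, c)
			io.Copy(c, server)
		}(client)
	}
}
```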
Don’t browsers that would complain attempt to upgrade http links to https on the client end anyway? You shouldn’t have to do anything server-side to redirect; the browser is supposed to do it for you.