Google Sites, Onion Routing, Hidden Services, Privacy, Security, OPSEC
Google Sites - Took a positive turn. There's now a "Review changes and publish" step, where you can easily see what has been changed and updated on the site before publishing it. This is great progress. Let's see if the publish step performs any better than it did earlier, because publishing large sites used to be quite slow. I'm also eager to see if they've improved the HTTP headers. Earlier it seemed like Google's web publishing team was highly incompetent: no Last-Modified, ETag or other basic caching headers being provided per page. Well, it seems that the headers still suck hard, which was kind of totally expected. Also, when reviewing long pages, highlighting the actual changes would be great. But this is still a lot better than no review at all. - Well, publishing is faster now. Good job, at least something has been improved. Maybe I'll write a script which rips the site from Google Sites and republishes it with the improvements Google can't manage. Maybe some day I'll get so annoyed by Google that I'll move the content to a self-hosted CMS, or start using some nice static site generator like Hugo @ Wikipedia, which would be perfect for a site like this. I'm already hosting my own RSS feed on my own server, because RSS seems to be too hard for Google as well.
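Checking whether the published pages finally carry those headers only takes a HEAD request. A minimal sketch, assuming Python with the requests library and a placeholder URL instead of a real published page:

```python
import requests

# Placeholder URL - substitute an actual published Google Sites page.
URL = "https://sites.google.com/view/example-site/home"

# HEAD is enough; only the response headers matter here, not the body.
resp = requests.head(URL, allow_redirects=True, timeout=10)

# Basic caching / conditional request headers expected per page.
for header in ("Last-Modified", "ETag", "Cache-Control", "Expires"):
    print(f"{header}: {resp.headers.get(header, '<missing>')}")
```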
Tested Cloudflare's Onion Routing aka Tor Exit Enclave with Single Hop mode with my tests site. Seemed to work, and there are some extra headers added by CF like the alt-svc header which includes information about the onion address. Yet Cloudflare seemed to easily serve 403 to technical probes not generated by browser(s).
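Verifying that Onion Routing is enabled mostly means pulling the advertised .onion alternative out of the Alt-Svc header. A rough sketch, assuming Python with requests, a hypothetical test site URL, and the Alt-Svc value being roughly in the h2="<name>.onion:443" form:

```python
import re
import requests

# Hypothetical Cloudflare-fronted test site.
URL = "https://example.com/"

resp = requests.get(URL, timeout=10)
alt_svc = resp.headers.get("Alt-Svc", "")

# Look for an advertised onion alternative, e.g. h2="<base32>.onion:443"; ma=...
match = re.search(r'h2="([a-z2-7]+\.onion):(\d+)"', alt_svc)
if match:
    print("Onion Routing enabled, alternative service:", match.group(1))
else:
    print("No .onion alternative advertised, Alt-Svc was:", alt_svc or "<missing>")
```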
Another interesting article - Finding The Real Origin IPs Hiding Behind Cloudflare or Tor @ Secjuice. Some comments:
1.1. Obviously if using CF as a front, all other traffic not originating from CF addresses is blocked. Secondly, you can operate sites hiding behind the CF front on IPv6 addresses, so scanning the IPv4 address space won't help.
1.2. Obviously .onion sites don't need HTTPS certificates (or SSL or TLS, whichever term you prefer), because Tor is already securing the traffic.
2. Obviously in this specific case you'll change the IP address if it's required. Sure, MX records are visible, but those are usually handled by 3rd party services. And if not, then the administrator hopefully knows what they're doing, because they're already running their own email subsystem.
3. Sure, this is exactly why all direct access is being blocked.
4. Sure, this is a real risk. That's why the systems should be configured so that the server knows nothing about its "external presence": it's configured for localhost and served via Tor. Like in my tests above, the address was localhost:8080; the web server doesn't even have an external address. All routing is done locally via the Tor gateway running on separate hardware and local port forwarding (see the sketch after this list). Keeping the systems clean and dedicated is very important, so even if those are hacked, there's nothing to be found about the administration.
5. Sure, content should be clean and separated as well. Yet using Google Analytics or reCAPTCHA with a private / secret service sounds like a really bad idea to begin with. So simply no.
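The hiding part boils down to two things: the web server listens on loopback only, and the hidden service on the Tor gateway forwards the virtual port to that local address. A minimal sketch with assumed paths and ports (8080 matches the test setup above, everything else is illustrative, and in my setup the gateway is separate hardware so the target would be the locally forwarded port rather than 127.0.0.1 on the same box):

```
# torrc on the Tor gateway host
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServiceVersion 3
# Forward virtual port 80 of the onion address to the locally
# reachable web server port; no public IP is involved anywhere.
HiddenServicePort 80 127.0.0.1:8080
```

On the web server side the only listen address is 127.0.0.1:8080, so even a full compromise of that box reveals no public-facing address to correlate with the onion site.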
Outlook support responded to my messages. It'll be interesting to see if they can resolve anything. My experience is that in most cases, most help desks are totally toothless. As assumed, practically nothing happened.
There's no way to make systems secure if the users do not understand the key concepts and mechanisms of security.
Curve had a duplicate charge event. Ha. It just happens. This is one of the reasons why transactions should have an ID, so something like this won't happen. Yet their post really sucks. As a coder I really hate requirements like that: duplicate transactions, posted as new transactions. Well, if the transactions are new, how do you know it's a duplicate transaction? I encounter requirements like this from clients all the time, and every time I have to tell them: how about you decide what you actually want. I can process transactions without any problems. But I can't filter "duplicate transactions"; there's no way of doing that if there's no unique transaction identifier. Sure, I can try to GUESS whether a transaction is a duplicate or not, but that's a really sucky approach. Yet people still keep asking silly things like this all the time.
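The difference is easy to show in code. With a unique transaction ID, duplicate filtering is exact; without one you can only guess from fields like card, amount and timestamp, which both misses real duplicates and drops legitimate repeated charges. A toy sketch, all names and fields made up for illustration:

```python
from datetime import timedelta

seen_ids = set()

def process_exact(tx):
    """Reliable: the sender supplies a unique transaction ID."""
    if tx["id"] in seen_ids:
        return "duplicate, skipped"
    seen_ids.add(tx["id"])
    return "processed"

recent = []  # previously seen transactions, for the guessing approach

def process_guess(tx, window=timedelta(minutes=5)):
    """Unreliable: guess duplicates from card + amount within a time window."""
    for old in recent:
        if (old["card"] == tx["card"] and old["amount"] == tx["amount"]
                and abs(old["time"] - tx["time"]) < window):
            return "looks like a duplicate (possibly wrongly)"
    recent.append(tx)
    return "processed (might still be a duplicate)"
```

Which is exactly why the only sane requirement is a unique identifier generated at the source; anything else is heuristics and guesswork.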
Since I got started on this topic of silly requests and bad logic again: I had several panic calls during the weekend due to "a system malfunction". Nope, sorry. It's not a malfunction, it's working just as requested and it's a feature. It's not my fault that the logic your team specified literally sucks, and what you're now experiencing as "a malfunction" is the totally expected outcome. How about deciding what you want and reading the freaking documentation before whining?
Something different? - Naval Strike Missile (NSM) and Exocet. Both links @ Wikipedia.
2021-01-03