H2, Electron, multihash, rootkit, JSON, HTML5 data, Covert Comms, Reddit Place
Post date: Mar 4, 2018 8:15:38 AM
- A Comprehensive Guide To HTTP/2 Server Push - Awesome, I've got my own guesses on this topic. Let's see if I agree with the article and actually learn something new from it. - All the lovely old anti-patterns, like inlining junk, splitting large images into many small ones, etc. - Pushing way too many assets. It's good that they brought that up, because I assumed that when developers hear about push, they'll start pushing everything. Here's how Cloudflare's HTTP/2 (H2) push works; a small sketch of the mechanism below.
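A minimal sketch of the trigger mechanism, as far as I understand it: Cloudflare's edge turns `Link: rel=preload` headers on the origin response into H2 pushes. The Flask app here is just a hypothetical origin for illustration.

```python
# Hypothetical origin server: the edge reads the Link: rel=preload header
# from this response and pushes the referenced asset alongside the HTML.
from flask import Flask, Response

app = Flask(__name__)

@app.route("/")
def index():
    resp = Response("<html>...</html>", mimetype="text/html")
    # Hint to the edge: push this stylesheet together with the page.
    resp.headers["Link"] = "</static/style.css>; rel=preload; as=style"
    return resp

if __name__ == "__main__":
    app.run()
```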
- Electron is flash for the desktop - Hahah! Been there, done that. I've been wondering about the CPU & RAM requirements of Electron applications for a long time. I said it's utter bleep bleep, and some guys said they hadn't noticed it. Well, even if you don't notice it, it still seriously sucks. I totally agree with this post about bloatware. I also really, deeply hate programs which consume a lot of CPU while idling. Why, why and why? - I've also written about that concept of testing software in real environments, instead of only super-high-end test environments, which is of course ridiculous. - It was also a nice summary of how there are just so many programmers out there who don't have any idea what they're doing.
- Wondered if some remote rootkit tools still require rebooting? Because some devices have been rebooting at times that arouse suspicion. I've written about this earlier, but yep, there's a certain kind of pattern emerging, and I've got extensive logs to be analyzed. Usually it leads to a device reboot and an IP address change. I've been collecting all possible data from these events for possible further analysis: disk images & a few random RAM dumps.
- Optimized some multihash decoding code so that, as long as there's only a single supported hash type, the useless hash type identifier isn't stored. Always storing the prefix is bad, because it might reduce hash indexing performance: every hash would start with the same two-byte prefix. Now only the base58 / ASCII encoded version contains the two-byte prefix, and the actual binary hash is stored as-is, a plain binary blob. Of course, when the binary blob is mapped back to the base58 / multihash format, the prefix is added back before encoding. b'\x12\x20' = SHA-256 @ 32 bytes = 256 bits. A sketch of the scheme below.
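A minimal sketch of the idea, assuming the `base58` package is available (function names are mine, not from the actual code):

```python
# Store raw SHA-256 digests without the constant multihash prefix, and
# re-attach b'\x12\x20' only when encoding to the base58 text form.
import hashlib
import base58

MH_SHA256_PREFIX = b'\x12\x20'  # 0x12 = SHA-256, 0x20 = 32-byte digest

def to_storage(multihash_b58: str) -> bytes:
    """Decode a base58 multihash and drop the constant two-byte prefix."""
    raw = base58.b58decode(multihash_b58)
    assert raw[:2] == MH_SHA256_PREFIX, "only SHA-256 multihashes supported"
    return raw[2:]  # plain 32-byte binary blob for storage/indexing

def to_multihash(digest: bytes) -> str:
    """Re-attach the prefix and encode back to the base58 multihash form."""
    return base58.b58encode(MH_SHA256_PREFIX + digest).decode('ascii')

digest = hashlib.sha256(b'hello world').digest()
assert to_storage(to_multihash(digest)) == digest
```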
- Some people say that JSON is so nice. But actually it's hard. Good old binary protocols are just fine, especially when dealing with extremely low-resource embedded systems. If the protocol is clearly specified, it doesn't make much difference whether the data is dumped as JSON or as a sane proprietary binary format. But it seems that many 'new age developers' aren't that good with binary data at all, even though it's very easy to deal with and allows much more compact storage of data. They often get confused by several flags being stored in a single byte, or a few bytes, using a bit array. But that's just very basic computing stuff. Let's store 2k (kilo = 2000, not kibi = 2048) boolean values in JSON as {'value1': True, 'value2': True, ...}, instead of using a bit array of 2k bits = 250 bytes. JSON expands data just like XML, and the overhead is quite impressive; a quick comparison below.
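A quick back-of-the-envelope comparison (the key names are just illustrative):

```python
# Pack 2000 booleans into a 250-byte bit array vs. dumping them as JSON.
import json

values = [True] * 2000

# JSON: one key/value pair per boolean, as in {'value1': True, ...}.
as_json = json.dumps({f"value{i}": v for i, v in enumerate(values)})

# Bit array: one bit per boolean, 2000 bits = 250 bytes.
bits = bytearray(250)
for i, v in enumerate(values):
    if v:
        bits[i // 8] |= 1 << (i % 8)

print(len(as_json))  # tens of kilobytes of JSON text
print(len(bits))     # 250 bytes
```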
- Studied using HTML5 data attributes. Let's see if I encounter a project where I need those.
- Tested in a VM a Freemail-like covert communication application, which used IPFS and GnuPG under the hood. It felt pretty simple and good. Yet the project hasn't been publicly announced yet, so no more about that. But I really liked the concept.
- A nice post on how Reddit Place was built. Yet there's nothing surprising or especially cool in there; it's all just basic considerations when designing / building something. Nice and obvious basic optimizations. These are just the kind of optimizations I've been talking about with data synchronization and related issues. Luckily they had existing infrastructure for the WebSockets, because that's the part I've personally been very worried about. I might have used the CDN to batch updates; I use batching very often. So instead of live streaming over a WebSocket, periodic requests to fetch update batches from the CDN would have worked. This is also beneficial because the same data batch can be delivered to multiple clients, even from the CDN cache, if no real-time data transmission is strictly required. A rough sketch below.
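A rough sketch of the batching idea (the endpoint and field names are hypothetical; this assumes the `requests` package):

```python
# Poll batched updates from a CDN-cached endpoint instead of streaming
# each update over a WebSocket; one cached batch can serve many clients.
import time
import requests

BATCH_URL = "https://cdn.example.com/place/updates"  # hypothetical endpoint

def apply_update(update: dict) -> None:
    # Placeholder: a real client would paint the pixel on the canvas.
    print(update)

last_seen = 0
while True:
    # 'since' asks only for updates newer than the last batch; the CDN
    # caches each batch briefly, so concurrent clients share one response.
    resp = requests.get(BATCH_URL, params={"since": last_seen}, timeout=10)
    batch = resp.json()
    for update in batch["updates"]:
        apply_update(update)
    last_seen = batch["timestamp"]
    time.sleep(2)  # polling interval trades latency for cacheability
```

The point of the interval is exactly the cache benefit: with a 2-second batch window, the CDN answers almost all clients from cache, and the origin only renders one batch per window.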