Matrix, IPv6, Sites, FS, Bugs
If the Matrix home server is unreachable, media files (images, attachments and anything other than text messages) aren't spooled by the Element (Desktop & Android) clients to be sent later. That's a clear lapse in offline functionality.
One wonderful thing with IPv6 is that systems on the LAN always have a working IP address. No more problems with DHCP changing the addresses all the time.
Read a long post about how a 14 kB web page is faster than larger web pages, and it made many assumptions, like a 10-packet initial cwnd. Well, I've set my defaults larger; I've also configured the system to save TCP metrics and disabled slow start where possible, for example with hosts that have cached metrics, or an existing TCP session that has just been idling for a while. These things improve performance a lot in many cases, because there's no need to slow start. You can go straight into action with a large rwin. And if you don't believe that, it's a good idea to tcpdump some traffic and see that some hosts on fast networks with high latency use an initial cwnd of 40 - 256 and so on by default. This is especially visible with hosts serving content to users on other continents, where latency is in the range of a round trip around the globe at light speed.
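On Linux, the tuning described above boils down to a couple of sysctls and a route option. A sketch of the idea (the gateway, interface and the cwnd value 40 are illustrative assumptions, not recommendations; knob availability varies by kernel version):

```shell
# Keep per-destination TCP metrics (cwnd, ssthresh, RTT) cached between
# connections. 0 = save metrics; 1 would disable the cache.
sysctl -w net.ipv4.tcp_no_metrics_save=0

# Don't fall back to slow start after a connection has been idle.
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# Raise the initial congestion / receive windows for a route
# (kernel default initcwnd is 10).
ip route change default via 192.0.2.1 dev eth0 initcwnd 40 initrwnd 40
```

With cached metrics plus no-slow-start-after-idle, a warm connection resumes near its previous window instead of ramping up from scratch.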
Again, Google's Sites incompetent developers write such bleepingly bad code. I just found out with Sites, again, that it sends update requests in such a flood that it triggers flood protection and bans your IP for a while. What kind of bleeping bleep bleeped dev writes such code? How about detecting the situation where changes are so frequent that they would trigger flood protection, and then batching those so that updates are sent every 5, 15 or 60 seconds? Too hard for Google devs, clearly. But implementing something like this is, of course, fusion-rocket-tech-level computation, so it goes wayyyyy over their heads. Ref: "Can't save your changes. Please copy your recent edits, then revert your changes."
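The batching I'm asking for is a few lines of code. A minimal sketch of the idea (all names here are hypothetical, this has nothing to do with Sites' actual API):

```python
import time


class BatchedSaver:
    """Coalesce frequent edits into at most one save request per interval."""

    def __init__(self, send, min_interval=5.0):
        self.send = send                  # function that performs the actual save
        self.min_interval = min_interval  # seconds between server requests
        self.pending = []                 # edits waiting to be flushed
        self.last_sent = 0.0

    def edit(self, change):
        self.pending.append(change)
        # Only hit the server if enough time has passed since the last save.
        if time.monotonic() - self.last_sent >= self.min_interval:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(self.pending)       # one request carrying all queued edits
            self.pending = []
            self.last_sent = time.monotonic()
```

Edits arriving inside the interval just pile up in `pending`; the next flush sends them all in one request, so the server sees a steady trickle instead of a flood.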
With modern drives and file systems, data separation into partitions usually doesn't matter so much. But I encountered a situation where Linux was on a DM-SMR (@ Wikipedia) drive: it all was okayish to bad, until there was a distribution upgrade. After that it got really bad, because now all the small files were scattered around the whole drive, which was almost full, with some holes, before the upgrade. Similarly, when old files were deleted, free space opened up scattered around the drive, causing first free space fragmentation and then data fragmentation. Ugh! I have to confess, so much fail! No, running any normal defrag won't fix this issue, because it's much deeper. The only sane way would be to have the OS and all of the data on separate partitions, which also means that the data would be separated into different shingles on the media. Now those are mixed up and, well, it is what it is. Maybe I'll move all data off the drive, repartition it and then move the data back.
For some reason this reminds me about some code project with so-called quality devs. First their program crashed due to a networking error. Ok, how about you guys fix that? And after that fix, if the network was completely lost, the program made about 1 750 000 requests per second (which all failed) and logged every one of them, until disk space ran out. Again, good question: are they really so effing stupid, or are they intentionally trying to troll everyone? - Haha, hey, I found out how to implement the request in the stupidest possible way, let's see how pissed off they are after this. - Nice attitude, guys! - And when the users come back because systems crashed with out of disk space, they're laughing insanely in the coffee room. We did it, we got 'em. Ha, what other systems were using the same disk, did we manage to bring those down too? - Oh yeah! - Yes, of course there are ways to deal with this, like limiting log size, using alternate logging destinations, per-application disk space quotas and so on. But that code still doesn't make any sense.
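The sane version of that retry loop is trivial. A sketch with exponential backoff and a cap (the function names and parameter values are arbitrary illustrations, not their code):

```python
import time


def retry_with_backoff(request, max_attempts=8, base_delay=0.5, max_delay=60.0):
    """Retry a failing call with exponentially growing sleeps, instead of
    hammering the network (and the log) millions of times per second."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return request()
        except OSError as exc:
            if attempt == max_attempts:
                raise                    # give up, let the caller decide
            # One log line per attempt, not one per failed request.
            print(f"attempt {attempt} failed ({exc}), retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
```

With these defaults the loop makes at most eight requests over a couple of minutes and then raises, instead of spinning forever and filling the disk.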
Something different? Aluminum-sulfur salt battery - I hope this is going to provide reasonably priced battery capacity for energy storage. It seems to require slightly elevated temperature to work, but that's not a problem with big units, which can be cheaply (compared to capacity) insulated.
2023-10-15