NTFS compression, SQL Backup, TLS certs, Privacy
Curious case with NTFS (@ Wikipedia) compression. It seems that pre-allocated files on a compressed volume take up much more space than expected. It seems likely that the pre-allocated space isn't actually being reused by the data written into the compressed file. Therefore writing 100 gigs with NTFS compression and pre-allocation can, depending on compression ratio, take up to 200 gigs of disk space. - Duh, ok, the solution clearly is to disable the compression, and then you'll only need 1x the disk space, which is efficiently pre-allocated. Of course the expectation would have been that if the file isn't pre-allocated, it's written directly to disk as compressed, using 50% or so, and if it is pre-allocated, it would still use 50% and when the file handle is closed the pre-allocation is freed, or the extra space is released from the allocation as usual with compressed files. - Had some discussion and wrote the following clarification: It's only temporary; after the file is closed and chkdsk is run, it all returns to normal. I had noticed this earlier, but I didn't really realize how bad it is. It's especially evident in cases where you have a partition created specifically for a single large file, like an SQL Server database dump. But it's confusing that if you're writing a 100 GB file to a 150 GB partition, WITH COMPRESSION... You'll run out of disk space while dumping.
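A quick way to see the effect is to compare the logical file size against the actually allocated size. A minimal sketch, assuming Windows, Python 3 and ctypes; the path is just a hypothetical placeholder.

```python
# Minimal sketch (assumptions: Windows, Python 3, file on an NTFS volume).
# GetCompressedFileSizeW reports the real allocated size of a compressed or
# sparse file, which can be compared against the logical size from os.stat().
# Error handling (low == 0xFFFFFFFF plus GetLastError) is omitted for brevity.
import ctypes
import os

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong

def on_disk_size(path):
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    return (high.value << 32) + low

path = r"D:\dumps\backup.bak"  # hypothetical file on a compressed volume
print("logical size:", os.stat(path).st_size)
print("on-disk size:", on_disk_size(path))
```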
It was actually so annoying that I tested it, and it seems that the SQL Server backup dump does something strange. If I create a large compressed sparse file, yes, it consumes space as expected. But if I write highly compressible data into the sparse space, the reserved disk space doesn't grow. Quite an interesting question: what does SQL Server do differently from my program, which just creates a large compressed sparse file and then writes highly compressible data into it? Because that wasn't helpful, let's try the same with random, incompressible data to see if it changes anything. Now a 1073741824 byte file consumes 2134900736 bytes of free disk space. There it is, confirmed. Keep in mind that using NTFS compression can temporarily double the disk space requirements when sparse files are used. Boom! I didn't know this, did you? Somehow this feels a bit crazy. Can anyone tell me what the design goal or reason is here? - Yes, I made sure: if I disable compression, everything works exactly as expected, and it doesn't matter whether I write null or random data. A 1 GB file is a 1 GB file and needs 1 GB to be stored. Just to clarify that.
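For reference, the test program was roughly along these lines. A minimal sketch, assuming Windows, Python 3, an NTFS volume on D: and the stock fsutil and compact tools; the path and size are placeholders, and marking the sparse flag may require elevated rights.

```python
# Minimal sketch: pre-allocate a file, mark it sparse and compressed, write
# incompressible data into it, and measure how much free space disappears.
# Swap os.urandom(chunk) for b"\x00" * chunk to test the compressible case.
import ctypes
import os
import subprocess

PATH = r"D:\test\bigfile.bin"   # hypothetical path on an NTFS volume
SIZE = 1 * 1024**3              # 1 GiB, as in the test above

def free_bytes(drive):
    # GetDiskFreeSpaceExW returns the free bytes available to the caller.
    free = ctypes.c_ulonglong(0)
    ctypes.windll.kernel32.GetDiskFreeSpaceExW(drive, ctypes.byref(free), None, None)
    return free.value

before = free_bytes("D:\\")

# Pre-allocate, then set the sparse and compressed attributes with stock tools.
with open(PATH, "wb") as f:
    f.truncate(SIZE)
subprocess.run(["fsutil", "sparse", "setflag", PATH], check=True)
subprocess.run(["compact", "/c", PATH], check=True)

# Overwrite the pre-allocated range with incompressible data.
chunk = 1024**2
with open(PATH, "r+b") as f:
    for _ in range(SIZE // chunk):
        f.write(os.urandom(chunk))

after = free_bytes("D:\\")
print(f"file size: {SIZE}, free space consumed: {before - after}")
```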
So frustrated about the TLS certificate (@ Wikipedia) sh... show. Sometimes I feel it's quite silly and a totally insane show, wasting everyone's resources and not bringing any real additional value to the table. I wish there were some saner methods that could be used, but it seems that the previous alternatives have all been removed with TLSv1.3. Also, it's just utterly stupid that certificates need to be renewed all the time; it makes trusting a certificate fingerprint impossible. I wonder why everyone isn't rotating their SSL and OpenPGP keys every three months or so, if rotation is so essential to security. Someone asked how so? Well... Mostly pointless, weakens security, potentially exposes closed private systems to the internet. Or gives systems excess rights which hugely undermine security, and the closed systems then get listed in certificate transparency... Optionally adding a lot of complexity, making systems brittle, reducing availability: a centralized certificate renewal system which then relays the certificates to the destination systems. In theory making the private keys available to parties that don't need them. Someone said that I should create a new CA. No, I don't want to do that; it also weakens security, because there are no proper CA access control measures. - After a long discussion, it was made clear to me that officially security-certified parties are considered so much more trustworthy than, for example, a self-signed key handed over directly by the party itself. Boom. I don't get it. But it seems that I'm just lost with this. If I generate two keys, for system a and system b, which communicate with each other, it's insecure. But if I expose both systems to the internet and get an officially certified security certificate for both servers, then it magically becomes secure. Does not compute. - Also, trusting the certificate directly would be much wiser, because now if there's a DNS hijack... well, it's all lost. But with a directly trusted certificate it would be meaningless.
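As a concrete illustration of what trusting the certificate directly means in practice, here's a minimal sketch of fingerprint pinning in Python. The host name and the pinned hash are hypothetical, and the assumption is that the fingerprint was exchanged out of band directly with the peer; with this approach the CA chain and DNS name play no role in the trust decision.

```python
# Minimal sketch (assumption: the peer's certificate SHA-256 fingerprint was
# obtained out of band). The connection is trusted only if the presented
# certificate matches the pin, so a DNS hijack pointing the name at another
# server fails the check regardless of what CA signed its certificate.
import hashlib
import socket
import ssl

HOST, PORT = "example.internal", 443               # hypothetical internal service
PINNED_SHA256 = "<sha256 hex of the peer's cert>"  # placeholder, exchanged directly

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False   # name and CA chain are irrelevant here
ctx.verify_mode = ssl.CERT_NONE  # trust comes from the pin, not from a CA

with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der).hexdigest()
        if fingerprint != PINNED_SHA256:
            raise ssl.SSLError("certificate fingerprint mismatch, aborting")
        # ... proceed with the application protocol over `tls`
```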
In many privacy-oriented messengers, platforms and trust relation models, it's explicitly stated that you should NOT trust a central authority claiming to be the only official truth. If you can, you should create direct trust between peers; it's much better than indirect trust via a directory. But maybe I'm just seriously misguided.
2023-04-02