Verifiable Credentials, Digital Identity, Self-sovereign identity
Verifiable Credentials (@ Wikipedia). It's basically a formalized and standardized version of the "sign this token" concept I've been talking about for decades. You (the Holder) get your keys signed by a Government (Issuer) key, which you can then use to sign a token, which is in turn trusted by the third party (Verifier). The big question is why this isn't being done already. kw: Authentication, Authorization, Identity Management, Cryptographic Signatures and Key Management.
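A minimal sketch of that Issuer / Holder / Verifier triangle, assuming Python's cryptography package and Ed25519 keys; key distribution, revocation and the actual VC data model (JSON-LD, DIDs etc.) are all omitted here:

  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey, Ed25519PublicKey)
  from cryptography.hazmat.primitives import serialization

  def raw(public_key):
      # Raw 32-byte Ed25519 public key.
      return public_key.public_bytes(
          serialization.Encoding.Raw, serialization.PublicFormat.Raw)

  # Issuer (e.g. government) certifies the Holder's public key.
  issuer_key = Ed25519PrivateKey.generate()
  holder_key = Ed25519PrivateKey.generate()
  credential = raw(holder_key.public_key())   # "this key belongs to citizen X"
  issuer_sig = issuer_key.sign(credential)

  # Holder signs a fresh challenge token supplied by the Verifier.
  challenge = b'verifier-nonce-20230723'
  holder_sig = holder_key.sign(challenge)

  # Verifier trusts only the Issuer's public key; verify() raises on failure.
  issuer_key.public_key().verify(issuer_sig, credential)  # credential is genuine
  Ed25519PublicKey.from_public_bytes(credential).verify(
      holder_sig, challenge)                              # holder controls the key
  print('verified')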
Finnish Digital Identity (@ vm.fi - In Finnish). - Finally something I can really like, at least as a general concept. It remains to be seen how they can still ruin it and make it practically unusable, but let's hope for the best. It's designed to work both online and offline. I'm still wondering if it supports over-the-phone authentication. I've been wondering about the lack of this kind of project for around twenty years. Anyway, it would be a dream if they had a competent team that could implement the extremely challenging features like these: service-specific strong pseudonymous authentication tokens (see the sketch below); remote identity verification in situations where session authentication is challenging, for example when talking over the phone; and basic public key / document / message signing features. But I'm probably demanding way too much from officials, because implementing features like this would require someone who's actually able to use an OpenPGP client, haha. kw: Digitaalinen Henkilöllisyystodistus
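One way such service-specific pseudonyms could work, as a hedged sketch: the identity provider derives a stable per-service identifier with an HMAC, so two services can't correlate the same citizen. The secret key, citizen ID and service names below are invented for illustration only:

  import hmac
  import hashlib

  IDP_SECRET = b'example-only-secret-key'  # held only by the identity provider (made-up value)

  def pseudonym(citizen_id: str, service_id: str) -> str:
      # Stable within one service, unlinkable across services without the key.
      msg = f'{service_id}:{citizen_id}'.encode()
      return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()

  print(pseudonym('FI-010101-123X', 'bank.example'))  # differs from...
  print(pseudonym('FI-010101-123X', 'shop.example'))  # ...this one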
Self-sovereign identity (SSI) (@ Wikipedia) and Decentralized identifier (@ Wikipedia) - kw: European Self-Sovereign Identity Framework (ESSIF)
... Never ending authentication, password, identity, key management, encryption, signature discussion. Yawning while laughing ...
I used to use a RAM disk for a lot of media files, but the constant growth of media file sizes has made that unfeasible in sane terms. There's no point in using a RAM disk / tmpfs if it leads to swapping. Also, copying to a RAM disk usually causes temporary extra memory consumption, because the read needed to populate the RAM disk is itself cached and uses RAM. So on a desktop system which doesn't have much disk I/O anyway and has plenty of RAM in utilization terms, it's best to simply copy the files to /dev/null: the read leaves the files directly accessible from the system page cache, instead of needlessly replicating the data onto a RAM disk. Works like a charm. I've done that before, but now I made it a single-click option in the menu. Cache from media disk, and done.
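The same cache warming in a few lines, as a hedged sketch equivalent to cat file > /dev/null (file paths are taken from the command line; nothing here is the actual menu integration):

  import sys

  def warm_cache(path, chunk=1 << 20):
      # Reading and discarding the data leaves it in the kernel page cache.
      with open(path, 'rb') as f:
          while f.read(chunk):
              pass

  for path in sys.argv[1:]:
      warm_cache(path)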
Someone said that processing stuff in parallel will make things faster. Well, here is the total CPU time consumed by the same set of tasks run sequentially vs in parallel, on a system with 4 cores / 8 threads:

  Threads  CPU time (s)
        1          3.51
        2          3.68
        4          4.33
        8          8.10

It's obvious that running the tasks sequentially consumes much less CPU time than running them in parallel. The tasks were completely independent but required heavy access to RAM; no locks were used. Nothing new here, I just wanted to confirm something I already knew and assumed. But to be honest, the results weren't as bad as they were back on the Q6600 platform, where the memory bus was completely swamped by a single highly memory-active thread. And of course it was totally expected that hyper-threading (4 -> 8 threads) is what it is; the gain was marginal.
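A hedged sketch of how this kind of measurement could be reproduced. It is Unix-only, since it relies on os.times() accounting CPU time of reaped child processes; the task and the sizes are made up, and absolute numbers will of course differ:

  import os
  import multiprocessing as mp

  def task(n):
      # Independent, memory-heavy work: build and sum a large list.
      return sum(list(range(n)))

  def child_cpu():
      # user + system CPU time of all reaped child processes so far.
      t = os.times()
      return t.children_user + t.children_system

  def run(workers, jobs=8, n=5_000_000):
      before = child_cpu()
      pool = mp.Pool(workers)
      pool.map(task, [n] * jobs)
      pool.close()
      pool.join()  # reap workers so their CPU time gets accounted
      return child_cpu() - before

  if __name__ == '__main__':
      for workers in (1, 2, 4, 8):
          print(workers, 'workers:', round(run(workers), 2), 's CPU')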
It seems that the envs.net Pleroma robots.txt is so restrictive ( https://pleroma.envs.net/robots.txt ) that I configured my own lazy mirroring of the Atom feed here: https://s.sami-lehtinen.net/rss/pleroma.atom (@ s.sami-lehtinen.net). - Job done, now the Atom feed can be indexed, even if the original posts can't. The feed is only updated daily, to keep it light, but it's better than nothing. kw: robotstxt, envs, Atom, feed, RSS
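The lazy mirroring itself fits in a few lines, as a hedged sketch; the upstream feed URL and the cache path below are placeholders, not the real configuration:

  import os
  import time
  import urllib.request

  UPSTREAM = 'https://pleroma.envs.net/users/example/feed.atom'  # hypothetical upstream URL
  CACHE = 'pleroma.atom'      # file served to crawlers
  MAX_AGE = 24 * 3600         # refresh at most daily, to keep it light

  def mirror():
      try:
          age = time.time() - os.path.getmtime(CACHE)
      except OSError:
          age = MAX_AGE       # no cached copy yet, force a fetch
      if age >= MAX_AGE:
          with urllib.request.urlopen(UPSTREAM, timeout=30) as r:
              data = r.read()
          with open(CACHE, 'wb') as f:
              f.write(data)
      return CACHE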
Something different? X-ray crystallography (@ Wikipedia) and 5G NR Dynamic Spectrum Sharing (@ Wikipedia).
2023-07-23