GnuPG, Heisenbug, TFFS, UUID, Security, Log-structured storage
Post date: Jul 8, 2018 3:46:42 AM
- Advanced intro to GnuPG - Info: RFC7748 recommends using Curve25519 and Curve448. Also check out RFC4880bis, the latest OpenPGP Message Format draft.
- Btw. that comment regarding SSH, that nobody checks keys, is not true. Of course keys are checked when it's about something important. Afaik, relying on valid TLS certificates is worse than actually checking the keys. That's why we're using plenty of self-signed certificates with fingerprint verification - see the sketch below. Of course you can also use a valid certificate and still do fingerprint verification separately.
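A minimal Python sketch of that kind of fingerprint pinning, assuming a SHA-256 pin of the server's DER-encoded certificate; the host name and pin value here are placeholders, not real ones:

```python
import hashlib
import socket
import ssl

# Placeholder host and pinned fingerprint - replace with real values.
HOST = "example.com"
PORT = 443
PINNED_SHA256 = "0" * 64

def cert_fingerprint(host: str, port: int) -> str:
    """Fetch the server certificate (no CA validation) and return its SHA-256 hex digest."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we rely on the pin, not on a CA
    ctx.verify_mode = ssl.CERT_NONE  # works for self-signed certificates too
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

if cert_fingerprint(HOST, PORT) != PINNED_SHA256:
    raise SystemExit("Certificate fingerprint mismatch - do not trust this connection")
```

The same check also works alongside normal CA validation; pinning and a valid certificate are not mutually exclusive.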
- Heisenbug. Have you ever encountered a really elusive bug? Check this out.
- The new Finnish surveillance law is progressing. It's interesting what kind of data sets they'll be collecting. Why collect it? Well, there's no reason for the US to ask for email information from visa applicants unless they already have the information and access to it. So that pretty much proves that they do have the data and access to it.
- Security best practices? Always carry all confidential files on a USB stick in your shirt pocket. Encryption? What encryption? Come on, don't be so ridiculous. It's always nice and interesting to notice the differences between reality and the illusion show running in the security theater.
- Tuxera Flash File System (TFFS) - Checked it out quickly. It's like all other flash file systems (?). They don't provide nearly enough technical information to make any difference. They claim it's designed for UFS, eMMC, MMC, SSD and SD. Support for HD video, lifetime extension via reduced write amplification / erase cycles / wear leveling. Built-in check & repair + no data loss. And of course high performance is mentioned several times. Anything can be 'high performance' as long as no details are provided. POSIX file system compatible. More details (PDF), but that's also an extremely disappointing document technically. Anyone can make those claims in a document. As we know, there are a few common techniques used for flash storage, and those might be combined to produce 'optimal' results: different allocation for small random and large contiguous data, etc.
- Had a long discussion with colleagues about UUID as primary key. I'm opposing that, because it's simply inefficient in many use cases, especially if it's stored as a string. A UUID is just a blob of bits with a certain structure, which can have that all-so-familiar presentation form - see the sketch below.
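A short Python illustration of that point: the 36-character text form and the 16 raw bytes describe exactly the same 128-bit value, so storing the text form in a key column more than doubles the key (and index) size for no gain.

```python
import uuid

u = uuid.uuid4()

# The familiar presentation form is just one rendering of the same 128 bits.
print(str(u))        # e.g. 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
print(len(str(u)))   # 36 characters as text
print(len(u.bytes))  # 16 bytes as a raw binary value

# Round-trip: both forms reconstruct the identical UUID.
assert uuid.UUID(bytes=u.bytes) == uuid.UUID(str(u))
```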
- Security as usual. All doors open or unlocked and no one's watching. Does it matter how elite your VPN tech or in-transit encryption is, if anyone can walk in and physically pick up all the systems with the data? Or simply copy data directly from the systems without leaving any warning signs? That's the way to go. Of course we can safely assume that nobody would do it; it would still be a crime. But where's the line of criminal negligence?
- Log-structured storage - Nice summary, nothing new. I think the post is so short that it seriously confuses and mixes and matches things. SQL vs not-SQL doesn't have anything to do with segmenting, nor does replicated vs not replicated, etc. That's always a problem: if you go and summarize or simplify things, it's almost guaranteed to be wrong (not just inaccurate) on some level.
- Log-structured storage is one of the simplest structures of all. That's why I'm very often using it, especially for data which is stored and very highly likely won't ever be accessed again. That's why I used log-structured compressed block storage. It's a compromise between block storage and log storage: there's an index file which tells which data block hash is in which storage block, and the compressed storage blocks are stored separately (see the sketch below). This is for data which is likely to be accessed "some times", like once a month or so. If there's data which is even less likely to be needed, I also often skip the indexing part, so it's just a compressed stream of data, like a compressed log file. Getting data out of it is very much like grepping for the required message(s) and just taking the last one of those. Log-structured storage can be optimized if required, but there's no point in doing it if it's not required. Combining logs and blocks also allows garbage collection and so on to be implemented if required. Yet in my case I just start a new log every N megabytes, or N time units, and the old logs are deleted when not required anymore. So the log exists for debugging purposes, and because all data in the log expires at the same time, there's no need to compact or garbage collect it, which usually means saving data partially, or compacting data when the log is cycled, copying non-expired data, and so on. That can be really inefficient: if there's a lot of non-expired data, write performance will suffer greatly.
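A minimal Python sketch of the indexed variant described above. The class name, file layout and JSON index format are made up for illustration, and real code would also need crash safety and expiry handling:

```python
import hashlib
import json
import pickle
import zlib
from pathlib import Path

class CompressedBlockLog:
    """Log-structured compressed block storage: an index file records which data
    block hash lives in which storage block; the storage blocks are written compressed."""

    def __init__(self, root, block_limit=1 << 20):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.index_path = self.root / "index.json"   # data block hash -> storage block number
        self.index = json.loads(self.index_path.read_text()) if self.index_path.exists() else {}
        self.block_limit = block_limit
        self.buffer = []                             # data blocks waiting to be flushed
        self.buffer_size = 0
        self.block_no = max(map(int, self.index.values()), default=-1) + 1

    def append(self, data: bytes) -> str:
        """Append one data block and return its hash, which is the lookup key."""
        digest = hashlib.sha256(data).hexdigest()
        self.buffer.append(data)
        self.buffer_size += len(data)
        self.index[digest] = self.block_no
        if self.buffer_size >= self.block_limit:
            self.flush()
        return digest

    def flush(self):
        """Compress buffered data blocks into one storage block and persist the index."""
        if not self.buffer:
            return
        path = self.root / ("block%06d.z" % self.block_no)
        path.write_bytes(zlib.compress(pickle.dumps(self.buffer)))
        self.index_path.write_text(json.dumps(self.index))
        self.buffer, self.buffer_size = [], 0
        self.block_no += 1

    def get(self, digest: str):
        """Find the storage block for a hash, decompress it, and scan for the matching data."""
        block_no = self.index.get(digest)
        if block_no is None:
            return None
        if block_no == self.block_no:                # still only in the write buffer
            blobs = self.buffer
        else:
            raw = (self.root / ("block%06d.z" % block_no)).read_bytes()
            blobs = pickle.loads(zlib.decompress(raw))
        for blob in blobs:
            if hashlib.sha256(blob).hexdigest() == digest:
                return blob
        return None
```

Expiry in this scheme is just deleting whole block files and dropping their hashes from the index; as long as everything in a block expires together, no compaction or garbage collection is needed.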
- Something different? - Gerald R. Ford-class aircraft carrier.