Keybase FS, IPX, Zero Knowledge Proofs, Security System Sensors & Psychology, CPU Load
Post date: Mar 20, 2016 6:10:23 AM
- Checked out Keybase Filesystem. Pretty neat stuff. Nothing new technically, but the way it integrates encryption with identity management is new.
- Quickly reminded myself about IPX when talking with friends about legacy networking stuff.
- Nice post: Top 10 Python idioms I wish I'd learned earlier.
- Zero Knowledge Proofs: An illustrated primer - Yet another awesome post from my link backlog.
- Reread Wikipedia articles: Computational creativity, Automated reasoning, Decision support system, Evolutionary computation, Cognitive Network and Security Information and Event Management (SIEM).
- Security vs Surveillance by Bruce Schneier - This is an interesting topic. I can't guess exactly what's coming; it remains to be seen. Attitudes in Europe have also been changing lately, for multiple reasons.
- Played a little with friends with ultrasonic, radar and infrared motion detectors, with different combinations of them, and with what kinds of measures can be used for detection evasion. Also, which measures could be used to trigger false alarms remotely, without entering the monitored perimeter, for target desensitization. Play, study or experimentation? All the same stuff. Basically it means knowing exactly how things actually work in different kinds of situations, not just reading the usually bad documentation. It also means knowing the timings and signals so well that you notice when there's an anomaly; there's a small sketch of that timing idea at the end of this post. Most sensors do not send back raw sensor data, which means that most of the information that would be useful for later analysis isn't available. Which is sad or great, depending on which side of the monitoring you're on. Having full sensor data could easily reveal that the system is being manipulated by some external energy source triggering it. Of course, the most advanced devices could also be protected against these kinds of attacks. Many of the attacks can also be used to blind the sensor, so that once it triggers, it keeps triggering. Depending on how interested the security staff is, they might even leave the whole system disabled, because they can't disable individual sensors or get it to work due to the invisible remote triggering. Unfortunately I've seen that happen too. Nobody bothers to troubleshoot a malfunctioning system in the middle of the night, especially if it happens repeatedly and the staff debugging it during the day can't find anything wrong with it. How unsurprising is that? Ha. Do they realize that they're being played? Most likely not. Do they realize that if the security system isn't working, they should get extra staff on hand and do continuous patrols? Most likely not. Or maybe they do, but do they actually do it? Nope.
- I guess you've noticed that some posts are seriously out of order on my blog, and that some stuff has been delayed for months or years if it has some 'timely meaning'. I might write something up and keep it in store when it happens, but it's ok to publish it only months or years later, when it's already general and published information.
- Caught up on a few issues of The Economist. Great stuff, over and over again!
- Something different: more stealth fighters, Mitsubishi X-2 Shinshin, KAI KF-X, Boeing F-15SE Silent Eagle, JL-2. Defense systems: Terminal High Altitude Area Defense (THAAD)
- Julia Evans' post about CPU Load Averages - Well, I don't completely agree. These are very complex topics, and whenever you write anything that isn't a book about the topic, it's probably more or less wrong. That's the problem with simplifications, even though everyone naturally loves them, because getting to the root of complex matters is very time consuming and requires superb care; otherwise it's just an estimate and more or less wrong. This is the issue I've been bringing up with almost every ICT-related article. Measuring things like memory or resource consumption is an inherently complex matter. Take a CPU with 8 hardware threads: when you run 4 threads and the load reads "50%" across all threads, doubling the workload doesn't actually bring you to a true 100% level. The load indicator probably goes near 100%, but the amount of work the CPU is being asked to do in that time is already way over what it can actually complete, because the remaining 4 threads are nowhere near as efficient at executing tasks as the first 4. And so on: memory bus contention, shared caches, etc. There are multiple reasons why simple kitchen math just won't do it. In some cases adding more tasks can also drastically drop performance, like with HDDs. Reading one file gets you 100 MB/s; if I add a second reader, should I get 100 MB/s, 200 MB/s or 50 MB/s? In reality you could be getting something like 30 MB/s in some cases, which naturally means that adding more parallelism just made the situation and performance much worse. There's a small sketch of the CPU part at the end of this post.
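A quick back-of-envelope sketch of the CPU load point above: on a 4-core / 8-thread CPU, a load reading of "50%" is not half of the machine's real capacity, because the SMT sibling threads add far less than another full core's worth of throughput. The 1.3x SMT scaling factor below is an assumed, illustrative number, not a measured one.

    PHYSICAL_CORES = 4
    SMT_SCALING = 1.3  # assumption: one core running two hardware threads does
                       # roughly 1.3x the work of a single busy thread, not 2.0x

    # Throughput in "one fully busy thread" units.
    throughput_4_threads = PHYSICAL_CORES * 1.0          # one thread per core, load reads ~50%
    throughput_8_threads = PHYSICAL_CORES * SMT_SCALING  # both siblings busy, load reads ~100%

    print(f"4 busy threads: {throughput_4_threads:.1f} units of work")
    print(f"8 busy threads: {throughput_8_threads:.1f} units of work")

    # Doubling the offered work from the "50%" point would need 8.0 units,
    # but this machine tops out around 5.2 units, so the load figure hides
    # how little real headroom is left.
    headroom = throughput_8_threads / throughput_4_threads - 1
    print(f"Real headroom above the '50%' reading: about {headroom:.0%}")

The HDD example behaves the same way for a different reason: two parallel sequential readers don't split 100 MB/s into 2 x 50 MB/s, because the constant seeking between the files can push both readers well below that.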
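And a minimal sketch of the "know the timings" idea from the sensor bullet: flag a motion sensor whose trigger intervals suddenly fall far outside its learned baseline. The sensor type, the baseline numbers and the z-score threshold are all made up purely for illustration, not taken from any real system.

    from statistics import mean, stdev

    def is_anomalous(intervals, new_interval, z_threshold=3.0):
        """Return True if the new trigger interval sits far outside the baseline."""
        if len(intervals) < 5:
            return False  # not enough history to judge
        mu, sigma = mean(intervals), stdev(intervals)
        if sigma == 0:
            return new_interval != mu
        return abs(new_interval - mu) / sigma > z_threshold

    # Baseline: a corridor PIR sensor that normally triggers every 40-60 minutes.
    baseline_minutes = [45, 52, 48, 58, 41, 50, 47]

    print(is_anomalous(baseline_minutes, 49))  # False - fits the normal pattern
    print(is_anomalous(baseline_minutes, 2))   # True  - sudden rapid re-triggering

The same baseline comparison would work on signal strength or pulse patterns if a sensor exposed them, but since most sensors only report trigger events, the event timing is usually all there is to analyze.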