Digitalization, Rescue Mode, Data Erasure, BOM, Async, DDoS CRE, Data Studio
Post date: Feb 12, 2017 7:00:53 AM
- Digitalization is here: we can reduce the number of different systems they're running, provide raw data for their data lake approach, and integrate with most of the other key systems quite easily, as well as improve and simplify their management, supplier and order processes in general. Improved control and visibility will probably also lead to large cost savings; without proper tracking, a surprising amount of stock loss is just common in retail. But as I've written for a decade, all this is totally normal and nothing new.
- Watched a documentary called Building Artificial Human. Very interesting aspects of robotics, AI, etc. But we're still very far from real androids.
- Wow, some servers at OVH needed to be booted in Rescue Mode. Interestingly, booting the rescue mode Linux took several hours. That's just crazy; they probably had some kind of serious platform-related issue there. Booting to rescue mode was also clearly a mistake, because the real problem wasn't with the server but with the platform.
- Gave a lecture about proper and practical data erasure and security procedures, and provided written documentation which can be followed to ensure that confidential data is properly and practically erased, without going to ultimate paranoid tinfoil-hat lengths. A sketch of one such step follows below.
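A minimal sketch of one practical step from that kind of procedure, not the actual lecture material: a single-pass overwrite before deletion. The function name is mine, and the usual caveat applies: on SSDs and journaling or copy-on-write filesystems an in-place overwrite isn't guaranteed to hit the old blocks, which is exactly where full-disk encryption plus key destruction becomes the practical baseline.

```python
import os

def overwrite_and_delete(path: str) -> None:
    # Single pass of random data over the file's current contents.
    # Fine as a sketch for small files; chunk the writes for big ones.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))
        f.flush()
        os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```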
- One article says that software development volume is going to double in just three years. That's quite a growth rate for a business sector. I agree: almost every project contains more and more integration, automation, etc. Mobile apps, web shops, CRM, ERP, BI and so on.
- Reminded myself about the Unicode byte order mark (BOM), because there's one project where I need it, even if I usually don't use it. A quick refresher below.
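For reference, Python handles the UTF-8 BOM via the standard utf-8-sig codec; the file name and contents here are just examples.

```python
# "utf-8-sig" writes the UTF-8 BOM (EF BB BF) on write
# and strips it transparently on read.
with open("export.csv", "w", encoding="utf-8-sig") as f:
    f.write("id;name\n1;Päivi\n")  # BOM helps e.g. Excel detect UTF-8

with open("export.csv", "r", encoding="utf-8-sig") as f:
    print(f.read())  # BOM is consumed, content starts with "id;name"

# The raw bytes confirm the BOM prefix:
with open("export.csv", "rb") as f:
    assert f.read(3) == b"\xef\xbb\xbf"
```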
- Studied a few more existing e-Receipt APIs. Sorry, can't name those projects. But based on earlier experiences, I can say there's nothing new. They've got rocket-science-like credit card tokenization using hashing, wow. A sketch of what that amounts to is below.
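For illustration only, not any vendor's actual code: hashing-based tokenization boils down to a keyed hash over the card number, roughly like this. The key handling is hypothetical; real systems keep the key in an HSM or vault.

```python
import hmac
import hashlib

SECRET_KEY = b"stored-somewhere-safe"  # hypothetical placeholder

def tokenize_pan(pan: str) -> str:
    # Same PAN + same key -> same token, so receipts can be matched
    # to a card without storing the card number itself.
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

print(tokenize_pan("4111111111111111"))  # well-known test PAN
```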
- Another interesting article about Python's post-async/await world. So much blah blah. By the way, as far as I've seen, none of the articles properly integrate multiprocessing with this stuff. I've seen so many programs suffer from the GIL, and async IO won't help: your stuff becomes unusable when the GIL hits you, and that's it. If this were something worthwhile, the standard implementation would trivially integrate proper multiprocessing (see the sketch after this entry). I've seen so many projects fail with this pattern, and it's usually quite hard to fix. The buffering issues mentioned in the article are fun; been there, done that. Many one-to-many data piping systems can be easily crashed just by sending way too much data.
- I liked this article: many important business-as-usual failure examples. Unique IDs in logs should be obvious.
- Once again, you can add something cool, create a big mess and have horrible problems. Or you can write extremely boring code, which works reliably and delivers. Yep, not cool or exciting. But I really do love boring when it comes to programming, project management and sales.
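A minimal sketch of the integration those articles skip, using only the standard library: push CPU-bound work into a process pool from async code, so the GIL-bound part runs in separate processes while the event loop stays responsive. The cpu_heavy function and its inputs are placeholders.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # CPU-bound work; run inline in a coroutine it would block
    # the event loop (and hit the GIL).
    return sum(i * i for i in range(n))

async def main() -> None:
    loop = asyncio.get_event_loop()
    with ProcessPoolExecutor() as pool:
        # run_in_executor bridges async IO and real multiprocessing:
        # worker processes burn CPU, the loop keeps serving IO.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_heavy, n)
              for n in (10**6, 2 * 10**6, 3 * 10**6))
        )
    print(results)

if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())
```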
- How to avoid a self-inflicted DDoS Attack - CRE life lessons - Interestingly, there's nothing new in the article. I've implemented all the mentioned tricks (backing off, jitter, priority, queue length limits) in my integrations and software implementations for a long time, because all of them are totally obvious. A quick sketch below.
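A quick sketch of two of those obvious tricks in plain Python: capped exponential backoff with full jitter, and a bounded queue for shedding load at the edge. The function names, the IOError failure mode and the limits are illustrative, not from the article.

```python
import random
import time
import queue

def call_with_backoff(func, max_retries=5, base=0.5, cap=30.0):
    for attempt in range(max_retries):
        try:
            return func()
        except IOError:
            if attempt == max_retries - 1:
                raise
            # Full jitter: sleep a random amount up to the capped
            # exponential delay, so clients don't retry in lockstep.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# Queue length limit: reject new work when the backend is saturated,
# instead of letting an unbounded backlog make everything slow.
work_queue = queue.Queue(maxsize=100)

def submit(job):
    try:
        work_queue.put_nowait(job)
    except queue.Full:
        raise RuntimeError("backend saturated, shedding load")
```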
- Started to study Google Data Studio. Sigh: first I needed to use a proxy to gain full access, because "Google Data Studio isn't available in your country". What an insult; isn't the Interwebs global? Anyway, this is very basic, but that's what data visualization often is. I still much prefer Tableau over this, though of course that's on a totally different level. Basic visualizations and reports are quite easy to do with Google's Data Studio, even easier than with OpenOffice Calc or MS Excel, and those two are seriously capacity-limited anyway. Also wrote an internal memo about this; I'll be doing some more testing a bit later. Just for fun, wrote a Python script which takes data from the current database and uploads new data / changes to a Firebase database for analytics and visualizations. This layer also contains a transformation / filter step, so it's possible to select what's uploaded, as well as consolidate data if required. A rough sketch below.
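A rough sketch of what that upload layer looks like, not the actual script: the SQLite source, table, field names and the Firebase URL are all hypothetical placeholders, and the upload goes through Firebase's Realtime Database REST API via requests.

```python
import sqlite3
import requests

FIREBASE_URL = "https://example-project.firebaseio.com/orders.json"  # placeholder

def transform(row):
    # Filter / consolidate: drop what shouldn't leave the building,
    # keep only what the analytics layer actually needs.
    order_id, total, status = row
    if status == "internal":
        return None  # filtered out entirely
    return str(order_id), {"total": total, "status": status}

def upload_changes(db_path="local.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT order_id, total, status FROM orders WHERE synced = 0"
    )
    payload = {}
    for row in rows:
        out = transform(row)
        if out:
            key, data = out
            payload[key] = data
    if payload:
        # PATCH merges the given keys instead of replacing the node.
        requests.patch(FIREBASE_URL, json=payload).raise_for_status()
    conn.close()
```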