Outside-Invest Website and Blog

Following up on my last article, New Youtube-Channel Outside-Invest, it is clear that such a channel also needs a website. And since video descriptions are pretty terse, it makes sense to have a website and blog where more details and links can be shared.

So, welcome to the new Outside-Invest website https://www.outside-invest.de, with the same visual look and feel as the channel. The website includes an about page, more details on the motivation and channel intro, and a list of social media channels besides Youtube: Pinterest and Twitter at the moment. There is also space for references to other blogs and channels that I read or watch. I think that is important, because I want to cite or refer to other channels for more detailed videos that I do not provide myself. We are not in the self-marketing business, after all.

A bit of disclosure about the material used there: the nice free background image is from https://www.pexels.com/. The story of why the Youtube channel and this website were started can be found, of course, on the website at https://www.outside-invest.de/about/.

The Problem with AI and ML today

I have to admit that I'm not an expert in AI or machine learning (ML), but I think I understand it well enough at a certain high level. After all, I have done some work with big data and Hadoop and have already been reading quite a bit on AI and ML. From the start I had this nagging feeling that the current state of AI, even with deep learning, is not really intelligent. Yes, it seems to work to a certain level; you can see this in the current progress in automated driving, or in IIoT use cases like visual inspection and material checks that are based on AI models and deep learning.

But what always struck me is that the system delivering all this great functionality is really dumb; it has no idea what it has learned. Nobody can look at the "mental" model inside the AI model and explain why it can detect an object or recognise a pattern. It just works on pure data. That is exactly the point: AI today detects something interesting based solely on the input data it has been trained with.

A couple of weeks ago I stumbled over a book on Amazon and bought it. This week I started reading "The Book of Why" by Judea Pearl and Dana Mackenzie. The book is about the theory of causal relations and the need for causation in artificial intelligence.

Already the first chapter struck me like a lightning bolt. Pearl explains exactly what I always felt: current AI sits on level 1 of the ladder of causation. Level 1 means that learning is based on associations the algorithm finds in the data. The mechanism behind this is, in the end, statistics and probability, that's all.
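Just to make the level-1 idea concrete with a little toy example of my own (not from the book): a purely statistical learner only sees that two variables move together, it cannot tell that a hidden common cause is driving both of them.

    import random

    # Hidden common cause (temperature) drives both observed variables.
    random.seed(0)
    temperature = [random.uniform(10, 35) for _ in range(1000)]
    ice_cream_sales = [2.0 * t + random.gauss(0, 5) for t in temperature]
    sunburns = [0.5 * t + random.gauss(0, 3) for t in temperature]

    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
        sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
        return cov / (sx * sy)

    # Strong association, but ice cream does not cause sunburns.
    print(correlation(ice_cream_sales, sunburns))

A level-1 system stops at the printed correlation; it has no way to ask what would happen to sunburns if we banned ice cream.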

Associations are detected in the data because the AI model has been trained on some similar pattern, and when it sees that pattern again it can recognise it. But the pattern needs to be at least similar to something already learned, which is why good training data, and tons of it, is so important. If the data contains a completely new pattern that the algorithm hasn't learned yet, it cannot detect it. That is why the intelligence of such an algorithm is on the level of an animal, while any three-year-old child is more intelligent.

And worst of all, the model doesn't really know what it has learned; the representation is just weights in, for example, a neural network. There is no knowledge representation as such.

For quite some time there has been one topic that is always in my focus: semantic web technology, the way knowledge can be represented in a knowledge graph, and how to work with that in real-world systems. Earlier in the space of IT management, now in the area of IIoT.

And here is the point that hit me like a lamp post. On the one hand there is classical AI technology with its ability to automatically learn and detect patterns. On the other hand there is semantic technology with its semantic data models and query mechanisms on top of formal, machine-readable knowledge representations.

And the difference to level 2 of Pearl's ladder of causation is exactly that: one has a causal model, not just data. The causal model is represented as a directed graph of causal relations with numerical factors on the edges.

Now that sounds very familiar to me: it is easy to represent as a semantic graph in RDF or OWL! Causal relations become relations in a semantic model, arguably one of the most important relation types there.
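To sketch how that could look (a minimal illustration with Python and rdflib; the causal: namespace, class and property names are just made up for this example, not an existing ontology):

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    CAUSAL = Namespace("http://example.org/causal#")  # made-up namespace

    g = Graph()
    g.bind("causal", CAUSAL)

    # Model the causal edge as its own resource so that the numerical
    # factor of the edge can be attached to it.
    rel = CAUSAL["rel1"]
    g.add((rel, RDF.type, CAUSAL.CausalRelation))
    g.add((rel, CAUSAL.cause, CAUSAL.Smoking))
    g.add((rel, CAUSAL.effect, CAUSAL.TarDeposits))
    g.add((rel, CAUSAL.strength, Literal(0.8, datatype=XSD.double)))

    # Query the causal graph with SPARQL
    query = """
        SELECT ?cause ?effect ?strength WHERE {
            ?rel a causal:CausalRelation ;
                 causal:cause ?cause ;
                 causal:effect ?effect ;
                 causal:strength ?strength .
        }
    """
    for row in g.query(query, initNs={"causal": CAUSAL}):
        print(row.cause, "->", row.effect, row.strength)

The smoking example is borrowed from the book; the point is only that a causal edge with a weight maps naturally onto a semantic graph and stays queryable.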

Technically, of course, there are a couple of practical questions about how an AI/ML model can work with a semantic graph model. Probably one would need to transform the knowledge graph into an ANN first. It would be interesting to speak with an AI expert about this.

I would even go so far as to say that it would be a benefit to represent learned associations in such a model as well. Knowledge comes in different types in the end: there is factual knowledge, rules, causal relations, associations and other relations that are not causal. If we represent all of these in a semantic model, we come closer to how we picture the human brain. Because as human beings we record these relations as well, we are aware of them, and we can search and access them, just like a knowledge graph!

Maybe this is, in the end, the way we can bring computers to at least level two of the ladder of causation, and do so for our applications in IIoT as well.

 

New Youtube-Channel Outside-Invest

All the things I have ever done professionally have been super interesting, be it IT management, IIoT, IT security or innovation management. But even when I worked in my own "startup" until 2015, I was effectively not working for myself but for a company, even if it was mine. In my own company I was responsible for research and development and all kinds of technical functions, but not for finance; that was the only area I kept my nose and fingers out of. I thought others would be better suited for that area and I should stick to my business, which is technology.

Rich Dad Poor Dad
That was until last year, when I got the book "Rich Dad Poor Dad" by Robert Kiyosaki as a gift. Sure, one might think this is yet another book by an American author that repeats one simple message over and over to fill 200 pages. While to a certain extent that is true, of course, this book literally changed my life!

For the first time I became aware of the rat race I am in as an employee of a company, be it my own or a renowned big one as today. In the end one works the full 40 hours and much more (at least before I started paying attention to work-life balance) and looks forward to the next vacation or weekend, just as everyone else does, right?

A good job, a decent house: this is how so many employees spend their lives, without minding their own business and without seriously trying to get out of that rat race. Reading this book I realised how much I fit that picture and that it is time to change something fundamentally. That doesn't necessarily mean not working at a good company, as I do today. But it does mean looking into my financial situation and how I can improve it, in order to at least work less at some point, even if being financially free seems very far away at the moment.

So I started to look seriously into financial education for myself, and as always I approached the learning with a lot of energy and dedication, just like other topics such as security before. For one year I was thoroughly learning about finance, investing and real estate using books, Youtube videos and blogs.

Finally I was knowledgeable enough to notice topics missing on Youtube in the income investor community, and after a short time I was making investments that are absolutely not common among those investors. So the next phase begins: I can now publish content about finance and investment that brings value to the community, instead of just consuming information.

That is why in September 2020 I started my own Youtube channel called "outside-invest", a new channel for income investors. A new adventure and an opportunity to learn how to create videos that provide value beyond company-internal videos on IT security. The introduction video gives details about the motivation and what can be expected from the channel. The plan is of course to regularly provide new videos, though not weekly, on investment topics and systematic summaries of financial concepts. I do not intend to duplicate the content of other channels that provide deep analysis of single stocks or funds; I leave this to the experts in the finance industry. What I think I can provide is, first, summarising and systematically presenting certain concepts as I learn them, and second, transferring knowledge from areas outside of finance into this area and thus contributing to a broader understanding, such as in the first video about risk management.

Note that the videos are currently in German, but the slides shown are at least partially in English.

Please feel free to watch the videos, and don't forget that even a new channel can be subscribed to; you know that means the world to me!

See you there on Youtube, on the outside-invest channel for income investors!

Books that shape the thinking

Since last year, I’ve read plenty of books on the topic of innovation management, startups and business models. Among them classics like Eric Ries’ lean startup or the innovation dilemma from Clayton Christensen. Good basics, no doubt, but the books that really impressed me to the level that I can say they are pieces of literature that change the way one thinks are the following:

The book by Alberto Savoia (ex-Google "innovation agitator") is THE book about pretotyping. Pretotyping is the concept of validating that one is building the right "it" (e.g. product or service) before thinking about and spending a lot of time, money and effort on how to build the product (building it right). All too often we start by building a prototype of a product and then try to verify it in the market. But even building a prototype is sometimes quite an effort, especially in bigger corporations. Pretotyping tries to validate product-market fit before building a prototype, with minimal cost (metric: dollars to data, $TD), minimal time (hours to data, HTD) and minimal effort (distance to data, DTD). Alberto presents some tools that are obvious once you've heard them, like the market engagement hypothesis (MEH), the XYZ hypothesis, hypozooming and the importance of collecting your own data (YODA). A must-read, valuable for every innovator from the first to the last page! If only I had read this book before delving into the world of startups.

The small cross-industry innovation book is more a collection of pointers to ideas and lots of examples of how to translate concepts that are established in one domain or industry into a seemingly completely unrelated other industry. This is something I have already done several times in my life, so this book was a late confirmation that this type of innovation is really a valid and relevant one and not just a dumb tactical technique.
Especially in industries like manufacturing, which seem years behind other areas of IT, this is a very interesting source of innovation. I do think, though, that it requires an extremely open mind and is not easy if one is deeply involved in a domain. Being an expert, say in manufacturing or logistics, probably makes it very hard to recognise that there are shortcomings in areas you take for granted that have long been solved in other domains.

Finally the best: Simon Wardley with his Wardley Maps. Not yet a book, only a series of online blog articles, but probably the most significant contribution to thinking about and visualising strategy that I've ever seen. He thinks so differently, yet sharp as a knife, that you have to stay alert every minute when reading the articles in order not to miss one of the important points. I would rate this work as one of the most epochal and inspiring that I've ever read.


And guess what, I’ve started to use wardley maps immedately to map out the innovation landscape or solutions at work in order to understand them. But I have to say, it is somewhat difficult to create maps that other understand without explanation. One automatically creates the maps based on the own way of thinking, which might not be how others look at such a “landscape”. So it is an extremely valuable basis for having a conversation or explanation but can’t just be forwarded without adding words by mail. 

So, Simon Wardley’s articles and videos (youtube) are an absolute MUST READ for someone that is trying to find tools for detecting opportunities and evaluate innovations.

*-Paper Emergency in Germany

Today I did the reality check on panic buying in Germany, after already last week no toilet paper or kitchen paper was to be found in the one supermarket (dm) where we normally buy hygiene products. Fortunately toilet paper is not yet in short supply at our place, but we are out of kitchen paper. So off to war: find kitchen paper in Weil der Stadt!

Unfortunately it seems that, even a week after the panic buying started (or am I naive and it began earlier?), people are using paper as a bread substitute. There is no other explanation for the fact that even today absolutely no toilet paper or kitchen paper was to be found in 4 out of 5 supermarkets I drove to (dm, Edeka, Norma, Lidl). And that despite higher prices and limits on the number of paper rolls that can be bought per person. Finally, at Netto I came across a fresh pallet of both kinds of paper and could at least cover my immediate needs.

But I still wonder how this comes about. I actually always keep a stock of paper in the house, i.e. every few months I buy 3-4 bulk packs of toilet paper and kitchen paper. Now I know that others normally only buy what they immediately need. That means a large share of customers must have switched within a short time to a model where they buy 3-5 bulk packs, but every week? At some point all households around Weil der Stadt have to be saturated with paper rolls and the trend has to flatten out, right?

The problem with panic buying is of course that when suddenly no paper is available for your immediate needs, you buy 3-5 packs as a precaution instead of the usual single one, because you never know what next week will look like. Suddenly everyone becomes a hoarder, and of course supply shortages follow.

Perhaps there are also, especially in the older generation, experiences with hoarding patterns from the war, whereas for us as the post-war generation this is a completely new experience.

Because of the risk of infection it is of course also not sensible to have to run through 4 sold-out supermarkets in order to buy something that is otherwise sold off for cents. Better to buy more and then stay home longer. But that only works for those who, from experience(?), understood this quickly. I suspect that Germany, in contrast to other countries, is rather a place where optimising for oneself at the expense of others is a widespread problem, and where solidarity and common sense are not in great shape. Italy and other countries will once again show that to us great Germans, who like to look down on others from the high horse of a booming economy.

In that spirit: bon appétit, enjoy your toilet paper!

BarCamp Stuttgart 2019 (#bcs12)

This weekend, Saturday 14th and Sunday 15th of September, the 12th annual BarCamp Stuttgart took place. As usual the event communication happens via Twitter, see https://twitter.com/bcstuttgart, hashtag #bcs12. I had paused for 5-6 years since last attending BCS. While the topic focus has shifted a bit since then (it became more open and non-technical, IMHO), it was a really interesting and enjoyable event again. I had the impression that this event is a place where some of the most motivated and engaged people from Stuttgart meet once a year.

As usual, the topics at the open BarCamp were very diverse, and there were many new inputs and things to learn. The most technical session was probably the one on the Python scripting language, which answered some of the questions I had from my fight with Python 2 versus 3 and virtualenv.
The most physical and practical session was eye yoga (Augen-Yoga).

A bit worrying is that the number of participants declined from last year's 250; the Hospitalhof could easily have hosted more. So is the BarCamp format going out of vogue? That would be a pity, as the organisation team did a great job again, and on the contrary, the BarCamp format could potentially be used as a hack on a company's culture. No other event format is so open, free and grassroots-democratic; it could be an alternative or addition to corporate management updates or Q&A sessions.

So join again next year, when BarCamp takes place again in Stuttgart or anywhere else!

OWASP SecurityRAT

SecurityRAT is an OWASP open-source project (on GitHub); the RAT stands for Requirement Automation Tool. Currently version 1 (1.7.8 as of the time of writing) is the production version, but a version 2 with a new architecture is in the making. SecurityRAT is based on JHipster, a Java rapid application development framework. Version 1 is a classic monolithic application; version 2 will be based on JHipster microservices.

SecurityRAT is used to create a set of security requirements for a supporting asset, the main part of a security concept. In the end it is nothing more than a replacement for MS Excel: a way to get a filtered list of requirements, with status and details provided by the development team on how each requirement is implemented in the asset. The status and details are the important part, because having a list of requirements is fine, but without knowing whether and how they have been implemented, it doesn't help a lot. Spending time on the details is important; they should contain additional information on the implementation, such as a link to the Jira or TFS issue, a link to a wiki page with implementation details, or proof of implementation.

The cool thing is that one can define one's own fields and values, as long as they fit into the general idea of SecurityRAT. The central concept is a list of requirement skeletons, classified by categories and tags, that have columns and belong to a project type. The requirements in the database are skeletons or templates, from which requirement instances for a given supporting asset are created at runtime. Those requirement instances are never actually stored in the database; they exist only in memory on the client (browser) side. As soon as one has understood this, understanding how the tool works is a done deal. You pour a pre-classified list of requirement templates into a database and instantiate them for a supporting asset at runtime, with the additional benefit of filtering the list down to the relevant ones using category questions, filters and tag selection.
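Just to illustrate the skeleton-versus-instance idea, a rough sketch in Python (this is not SecurityRAT's actual data model; class names, fields and requirement texts are simplified for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class RequirementSkeleton:
        """Template stored in the database (illustrative, not the real schema)."""
        short_name: str
        description: str
        category: str                      # e.g. "Authentication"
        tags: list = field(default_factory=list)
        project_type: str = "Web Application"

    @dataclass
    class RequirementInstance:
        """Created per supporting asset at runtime, lives only on the client side."""
        skeleton: RequirementSkeleton
        asset: str
        status: str = "open"               # optional column, e.g. open / implemented
        comment: str = ""                  # e.g. link to the Jira/TFS issue or wiki page

    # A tiny "catalog" and its instantiation for one supporting asset
    catalog = [
        RequirementSkeleton("REQ-PW-LENGTH", "Passwords have a sufficient minimum length",
                            "Authentication", tags=["ASVS", "L1"]),
        RequirementSkeleton("REQ-ACCESS-CTRL", "Access control is enforced server-side",
                            "Access Control", tags=["ASVS", "L1", "L2"]),
    ]

    relevant = [s for s in catalog if "L1" in s.tags]                  # tag selection
    instances = [RequirementInstance(s, asset="my-web-service") for s in relevant]

The real tool of course adds categories-as-questions, optional columns and the Excel/YAML export on top; the sketch only shows why the instances never need to live in the database.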

We use SecurityRAT as an expert tool. This means that not all developers work with it all day; only selected security lead experts do. SecurityRAT spits out an Excel document with the requirement instances together with the up-to-date status and comment (optional columns). This is what other people work with: you put the Excel into the wiki for documentation, generate sub-tasks in Jira, etc. SecurityRAT can also directly create Jira stories and sync them bi-directionally, which would be really cool. Unfortunately this doesn't work at my place; don't ask why, it's a sad big-enterprise problem.

But working with Excel, or better CSV files, has some advantages, too. You can easily convert it to wiki markup or generate Jira task language from it with a little script. I use Groovy for that, but that's a matter of personal taste.
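Just to show the idea, a minimal sketch of such a conversion script (here in Python rather than Groovy; the column names of the export are assumptions and need to be adapted):

    import csv

    # Convert an exported requirements CSV into a Jira wiki-markup table.
    # Assumed columns: "Requirement", "Status", "Comment".
    def csv_to_jira_markup(path):
        lines = ["||Requirement||Status||Comment||"]
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                lines.append("|{}|{}|{}|".format(
                    row["Requirement"], row["Status"], row.get("Comment", "")))
        return "\n".join(lines)

    print(csv_to_jira_markup("requirements.csv"))

The resulting markup can be pasted directly into a Jira comment or a wiki page.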

SecurityRAT comes out of the box with an SQL dump of the OWASP ASVS (Application Security Verification Standard) requirements catalog. In the meantime, at work we also have version 4.1 and many other catalogs that we pour into SecurityRAT instances. Version 1 more or less requires one RAT instance per catalog, although you can of course put multiple catalogs with the same structure into one big instance, e.g. the CIS Benchmarks. That ends up in a big list of instances, for example:

  • OWASP ASVS
  • IEC 62443
  • DIN SPEC 27072
  • Corporate security requirement lists
  • CIS Benchmarks (you need to be a member to get the XLS files)
  • Own catalogs e.g. for RabbitMQ

Apart from actually filling in security requirements, SecurityRAT can also be misused quite well for other tasks. Things I use it for include, among others:

  • Vulnerability assessments according to the OWASP Testing Guide (OTG). Excellent: you set the status to passed/failed, fill in the findings and get a nice Excel.
  • Security maturity assessments according to e.g. BSIMM or OWASP SAMM: answer the questions, pre-filtered by the level to be achieved, and get a nice Excel.
  • Threat modelling using STRIDE; that stretches the idea a bit, but it works when you have a list of threat skeletons instead of requirements.

Using SecurityRAT for status tracking with the OTG or ASVS is a good example where it makes sense to put multiple testing or requirement guides into one instance, e.g. for web services, mobile and IoT. Then create a category or project types for these (one supporting asset can be either a web service, a mobile app or an IoT device). This way the user implicitly selects the type of asset in the initial "question", i.e. the collection instance list.

In the end you could do a lot of this with Excel, but IMHO SecurityRAT gives you the following advantages:

  • It’s not a document that rots somewhere on a share but a server with a nice web-based interface
  • The definable collection categories allow you to pre-filter the requirements at the beginning using customisable questions. Yes, you could filter in Excel, but you don't have this very usable two-step process that helps in practice. Using tags you can also filter after the requirements list has been generated.
  • You can save the working results to a YAML file, load them again and continue, e.g. by adding custom requirements. So there is only one place.

Being a server-and-database solution, filling an instance with data can be done via the UI, but you will quickly drop this idea for larger catalogs, even though the batch operations are really handy. Just use (again Groovy or other) scripts to convert a CSV source into SQL statements, or insert directly into the DB, which is a bit more work. Unfortunately the entities in SecurityRAT do not have surrogate keys, so your script needs to manage the uniqueness of the database IDs itself, which is sometimes, well, a mess.
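A rough sketch of such an import script in Python (table and column names are only assumptions, the real SecurityRAT schema has to be checked; the starting ID is managed by hand):

    import csv

    # Generate INSERT statements for a requirement catalog from a CSV source.
    # Table and column names are illustrative; check the real SecurityRAT schema.
    def csv_to_sql(path, start_id=1000):
        statements = []
        next_id = start_id            # must not collide with IDs already in the DB
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                short = row["ShortName"].replace("'", "''")
                desc = row["Description"].replace("'", "''")
                stmt = ("INSERT INTO requirement_skeleton (id, short_name, description) "
                        "VALUES ({}, '{}', '{}');").format(next_id, short, desc)
                statements.append(stmt)
                next_id += 1
        return statements

    print("\n".join(csv_to_sql("catalog.csv")))

The start_id is exactly the part you have to manage yourself, so that newly generated rows do not collide with what is already in the instance.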

The downside of the tool is the lack of calculations and of colours for status fields that you have in a spreadsheet. For risk assessments, for example, it would be cool to automatically calculate a risk factor from likelihood and impact, but that is not possible.
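Such a calculation is at least easy to bolt onto the exported CSV as a post-processing step; a minimal sketch in Python (the column names and the simple likelihood-times-impact scoring are assumptions):

    import csv

    # Add a computed "Risk" column (likelihood x impact) to an exported CSV.
    with open("assessment.csv", newline="", encoding="utf-8") as src, \
         open("assessment_with_risk.csv", "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["Risk"])
        writer.writeheader()
        for row in reader:
            row["Risk"] = int(row["Likelihood"]) * int(row["Impact"])
            writer.writerow(row)

It is a workaround, of course, not a replacement for having the calculation in the tool itself.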

By the way, SecurityRAT runs with Docker out of the box; using MySQL, or for licensing reasons MariaDB, is no problem. Problems will manifest themselves as endlessly long Spring Java exceptions that require a bit of digging into. We run everything in docker-compose, back up with mysqldump via docker exec, and have a nice landing page for the different instances.

Overall SecurityRAT, thumbs up!

The Practice of Network Security Monitoring

The second book by Richard Bejtlich within a short time: I have now finished "The Practice of Network Security Monitoring". This one is a bit newer, from 2014, though not totally up to date. The practical part of the book is based on the Security Onion (SO) distribution. Unfortunately a lot has happened with SO in the meantime: the book is still based on ELSA as part of SO, which has since been replaced by the Elastic stack. So the installation part could be skipped, also because I have already performed an SO installation at home several times.

Just as in "The Tao of Network Security Monitoring", a lot of space is dedicated to various, by now well-known sniffing tools such as Wireshark, Bro, Argus, Sguil, Squert, Snorby etc. Nevertheless, in the last third these tools are applied to various real-life scenarios, such as binary extraction with Bro and detecting server- and client-side intrusions, which were especially helpful.

Security Onion is definitely the first choice for a real NSM setup, with Sguil as the real-time NSM console. For a home NSM, a more historical, Elastic-stack-based setup will probably be more useful, as I will not constantly monitor an NSM console all day long :-). The problem is that SO is a big system, unfortunately a bit too heavy for the old laptop I can currently dedicate to the NSM server part. Therefore I switched to SELKS, also an NSM distribution, from Stamus Networks, likewise based on the Elastic stack but a bit more lightweight. ELSA, based on syslog-ng, doesn't fit well anymore when you would like to use Filebeat/Packetbeat as logfile shippers.

BSides Stuttgart 2019

This post is a bit delayed: on the weekend of the 25th and 26th of May, the first BSides Stuttgart took place at the Wizemann location. I was lucky to have been there, because while monitoring the site in the months and weeks before, there was no programme published and no way to buy tickets, and when I looked again two weeks before, it was already sold out. As this was, as you can see, a Bosch-organised event, I still managed to get listed as a guest, thanks to a dear colleague from Bosch CC.

Security BSides conferences were originally a way to give a platform to those whose presentations had been rejected by large conferences like DEF CON or Black Hat, but in the meantime it has become a grass-roots DIY conference format worldwide. And the content is not second class in any way, on the contrary, as this event demonstrated!

BSides Stuttgart, the first of its kind in Stuttgart, took place in 2019 in the Wizemann co-working space, a former industrial facility. Same place as a previous Bosch digitalisation hackathon, just smaller. Great atmosphere, and well prepared by the CC security people from Bosch.

Co-organised by the ASRG (Automotive Security Research Group) and hosted in Stuttgart, the event was pretty automotive-oriented overall. BUT there was also a general track with interesting presentations on cyber security in general. As you can imagine, this was the track I mostly followed.

Many colleagues from Bosch PSIRT and CERT as well as from other (automotive) Bosch business units attended the conference, together with people from other companies such as Daimler.

These are the sessions I attended on day 1 that are noteworthy:

  • How does ASCII and Unicode affect our Security
    A very interesting presentation on how Unicode and Punycode tricks can be used for DNS squatting and for opening up vulnerabilities such as buffer overflows
  • Elastic Stack for Security Monitoring in a Nutshell
    Workshop on using ELK and Beats to build a SIEM more powerful than commercial products
  • OpSec++ the FastTrack
    Security testing using OSSTMM methodology
  • Cyber Threat Intelligence for Enterprise IT and Products
    A presentation by @Wagner Thomas Daniel (Bosch PSIRT) on a concept for product CTI
  • Weaponizing Layer 8
    How to stop treating users as the proverbial dumbest assumable user (DAU) and instead involve them in building a security culture in the organisation.
  • Introduction to Osquery
    A very interesting workshop on osquery, a service that exposes system information such as processes, filesystems, etc. via an SQLite-compatible SQL interface. It also works with Docker (as a companion to Sysdig?) and spits out logs.

On the second day, the sunny Sunday, I listened to the following presentations:

  • What to log? so many events, so little time
    About a tool from a speaker at Microsoft to catalogue and filter the many events the Windows OS produces, with a mapping to MITRE ATT&CK techniques. An interesting approach, using Sigma to generate SIEM queries for the relevant events.
  • Security Onion
    Workshop on Security Onion, a Linux distribution specifically for security monitoring, forensics and incident response, just like Kali is for pentesting.
    It included some real-life examples of how an attack could be detected and handled based on network logs, using the various tools bundled in the distribution.
  • NoSQL Means no Security?
    Insights into the security posture and evolution of MongoDB, Redis and Elasticsearch. This could potentially give us some ideas for hardening our own NoSQL databases.
  • Scale your Auditing Events
    Again from Elastic, but about the Linux auditd subsystem and how to process its audit events with Auditbeat and the Elastic stack for security monitoring.

Slides have been published on the bsidesstuttgart GitLab site or are posted on the bsidesstuttgart Twitter account.

I’ve learned so many new tools, and new information especially in the areas of network security and security monitoring for getting OpSec started.

What’s pretty sure is that BSides Stuttgart will continue next year, maybe growing and giving also you a chance to grep a seat. I’s cool that we finally have a cheap and open security conference right here in Stuttgart, thanks to the organizers from Bosch for the great event! See you there next year, mark your calendar already for May 14 -16 2020!