The Tao of Network Security Monitoring

Wow, that was a thick book: The Tao of Network Security Monitoring: Beyond Intrusion Detection by the guru of NSM, Richard Bejtlich. This book is considered the bible of NSM. It dates from 2004 and is thus a bit out of date; in particular, it is filled with tons and tons of tools, and some of them no longer exist or their development stopped years ago. But the intention of the book is not to serve as a tool reference but to show which kinds of tools are available and what they can be used for. So the brain needs to translate the samples to the tools we have available today. And in each category we still have enough candidates anyway.

The storyline of the book basically follows the different types of network security monitoring data that one can capture, along with the tools that provide them (a small capture sketch follows the list):

  • Full content data (packet captures, e.g. from tcpdump or Wireshark)
  • Packet header data
  • Session or flow data (e.g. from Argus or flow-tools)
  • Alert data (e.g. from Bro)
  • Statistical data
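
To make this concrete, here is a minimal sketch (the interface name and file paths are my own assumptions, not from the book) of how full content and session data could be collected with two of the tools mentioned above:

# Full content data: capture complete packets on the monitoring interface
tcpdump -i eth0 -s 0 -w /nsm/full_content.pcap

# Session/flow data: let Argus summarize the same interface into flow records ...
argus -i eth0 -w /nsm/session.argus
# ... and read the collected flow records back with the ra client
ra -r /nsm/session.argus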

Bejtlich explains the use of these types of data and the corresponding tools using real-life samples of attacks. This is cool, although following the packet dumps without in-depth protocol knowledge of IP, UDP, TCP, DNS, etc. is really a bit hard. Luckily he explains each one after the printed dump, so one can be a bit lazy. That is probably not a good idea, as one misses some of the learning, but fixing that would probably require a second round of reading.

But the real learning from this book is understanding what a well-configured NSM system, and especially stored session data, can really give you to detect all kinds of attacks, if you just watch closely enough. The interesting question for me still to answer is how to transfer this knowledge to cloud-based NSM: there we have some packet capture abilities, but how to make use of all the rest of the tools in such an environment is left as an exercise.

Summary: definitely worth a read, although it could use an update once in a while.

The IoT Hacker's Handbook

Somewhere, I don't know where, I became aware of the book “The IoT Hacker's Handbook” by Aditya Gupta. Well, I bought it and read it. That wasn't a long job, as the font size is a bit larger than normal. There are two reasons to do that: either you want to spare older readers their glasses (me?), or there is not much content but you still want to make it look like an in-depth book on the topic.

It was indeed a bit different than expected. Not bad, but different, which also tells you something. I'm a software guy; looking into hardware-near topics like BLE sniffing is interesting but not my home turf, so to say. But this book really started with hardware hacking after an introductory chapter on penetration testing IoT devices: UART communication, JTAG debugging. Then it moved slowly up towards software, via firmware hacking, mobile apps (Android) and software-defined radio (SDR), to Zigbee and BLE sniffing and packet replay. It didn't get higher than this. That's OK, as these were topics I hadn't touched so far, except for BLE sniffing. Especially the SDR part was quite interesting and encouraged me to maybe dig a bit deeper into the topic. Understanding the communication of garage door openers and the like sounds interesting overall.

Don't get me wrong: for consumer IoT devices this is all important stuff to understand, test and hack. But IoT is a bit more than hardware, firmware and communication, at least in my mind. IoT lives from software, and not just hardware-near software; that is what brings the value and the new business models for IoT. Sure, the book touched on mobile apps as an important part of an IoT solution, but it is the cloud connectivity and the software stack on the IoT device that I find the interesting part, and that was not covered beyond Zigbee and BLE. So not bad and helpful, but surprising regarding the direction of what IoT pentesting should look like, and maybe telling something about how IoT is still regarded today.

To be fair, the book did dig into some use cases of what you can do once you have access to a device and can manipulate it at will, which wasn't really difficult with the examples provided by the author: weather stations, door openers, garage openers, the usual smart light bulbs and beacons. I still learned a lot about tools and techniques for these low-end IoT devices and how easy it is to break them with just a little knowledge of some tools and a reading of the specifications. And unfortunately you can transfer this experience to more complex “IoT” devices like PLCs in IIoT or gateways; only the specifications are a bit thicker and more complex. But the door is equally wide open for white hats as well as black hats.

HTTP Health Check for Docker

I just published a little tool called htcheck (the docker-health-go project) on GitHub: https://github.com/pklotz/docker-health-go. This being my first Go program, be kind to me should there be better ways to implement it.

It is intended as a very simple HTTP health check client for use in Docker health checks. So what is the motivation for this program?

When using Docker directly or via docker-compose, you can and should define a health check, so that Docker knows that the process it is running is doing well. There are a couple of libraries that provide HTTP health endpoints for Go, such as https://github.com/docker/go-healthcheck, and Java offers a corresponding framework with Spring Boot Actuator.

But on the client side, you still need to use curl or the outdated wget to perform the check. If you have ever checked which dependencies curl, and thus libcurl4, brings with it, you might wonder whether this is worth the ballast just to do a simple HTTP GET with an exit code. libcurl pulls OpenLDAP libraries and whatnot into the image. So this decent little project provides a special-purpose HTTP client to use for health checks in Docker or elsewhere, instead of throwing a general-purpose HTTP client at the job.
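
If you want to check this for yourself, a quick way (a sketch, assuming a Debian-based image) is to ask apt what installing curl would actually pull in:

# List the direct dependencies of curl and of libcurl4
apt-cache depends curl
apt-cache depends libcurl4

# Or simulate the installation to see everything that would end up in the image
apt-get install --dry-run curl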

It supports making an HTTP GET request to a URL, reading a JSON document back and checking for a value in it using a jq-like path expression.

Simple sample usage in a Dockerfile:

COPY ./htcheck /usr/bin/

HEALTHCHECK --interval=5m --timeout=3s CMD htcheck -u http://localhost/ || exit 1

Sample usage for a Spring Boot Actuator health endpoint, which normally serves a JSON document of the form:

{
    "status" : "UP"
}

So using the JSON path feature, we can compare against an expected value:

COPY ./htcheck /usr/bin/

HEALTHCHECK --interval=5m --timeout=3s CMD htcheck -u http://localhost/health -p .status -v UP || exit 1

Licenses are checked and documented in the README. Thanks to the dependency projects https://github.com/savaki/jq and https://github.com/spf13/pflag that were made use of. Probably some features are still missing, but as a first shot it should serve. S'il vous plaît!

Dualcomm 10/100/1000Base-T Gigabit Ethernet Network TAP

As I only have an unmanaged switch (a Netgear FS116) at home, I don't have a SPAN port to do network sniffing on the home LAN. In the course of building up an NSM (network security monitoring) setup for my home network, I needed a way to tap the wired LAN. Therefore I looked at network taps, which tend to be extremely expensive for home use. Finally I found a recommendation and bought a Dualcomm 10/100/1000Base-T Gigabit Ethernet Network TAP. It's not cheap, but it is better quality than a throwing-star tap and offers full-duplex passive sniffing of network traffic for an affordable price.

Setup is absolutely seamless, as there is no setup: just put the tap between the home router and the switch in order to capture all traffic coming from outside, connect a LAN cable to the sniffing Ethernet interface, and that's it. The little box is powered by USB, so I just plugged the USB cable into a monitor's USB port and that worked fine.

Currently there are two options: either I use my RPi 3B's LAN interface as the sniffing interface and the RPi's WLAN as the management interface, or I attach the tap to an old laptop that I use as a monitoring collection station with the SELKS distribution on it. I use SELKS instead of Security Onion (SO) because the laptop is just too old and SO freezes on this hardware. SELKS also has the ELK stack and Suricata installed and runs decently. Not optimal performance, but for testing it works. Here, too, the WLAN is the management interface, as the laptop only has one wired LAN interface. The sniffing interfaces are not managed and don't get an IP address, as they are input only.
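
For reference, preparing such an unmanaged sniffing interface boils down to bringing it up without assigning an address (a minimal sketch; the interface name eth0 is an assumption for the RPi's or laptop's wired port):

# Bring the capture interface up without an IP address
sudo ip link set eth0 up
# Enable promiscuous mode so all tapped frames reach the sniffer
sudo ip link set eth0 promisc on
# Sanity check: grab a few packets from the tapped link
sudo tcpdump -i eth0 -c 10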

Long-term it could be interesting to replace the unmanaged switch with a managed switch, so that the tap can be moved to any other place and the SPAN port of the managed switch can be used for e.g. the RPi. With the new RPi 4 Model B one gets true gigabit LAN, which should be able to handle all the traffic the switch provides without any problems in such a home setup.

The packet-foo blog contains probably the best article series on network packet capture and analysis, including network taps, that you can find.

Trying to build packetbeat for Raspberry PI (arm64)

After my previous article on building filebeat for the Raspberry Pi 3 B+ (arm64), I now wanted to get a binary for packetbeat, the second most interesting module of Elastic Beats. I tried the same approach of cross-compiling with GOARCH=arm64, but it fails, while a straight compile for amd64 works. It fails with a message that all Go files are excluded due to build constraints. The issue is that there is native C code involved, so you probably cannot cross-compile this beat. I searched posts and tried all options for two hours; it does not work.

I tried again on the Pi directly. The build runs, but if you do a “go install” it eventually runs out of memory (“cannot allocate memory”). The problem is that the Pi 3 has only 1 GB of memory, and that does not seem to be enough. I tried all kinds of tricks, like setting GOMAXPROCS=1 and GOGC=70, but nyet. The problem also seems to be related to the C build using gcc. You need to install “libpcap-dev” via “apt-get install” for the “pcap.h” header file, otherwise you get a compile error earlier. When using “go build -v -x” directly you get the “cannot allocate memory” message from gcc; when using “make” the build gets killed instead. Nevertheless it is strange, as there are reports from people who compile Kubernetes on an arm64 RPi 3B like mine. Probably K8s does not contain native C parts like the libpcap binding in packetbeat. So I finally gave up, because …
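
For the record, the failed attempts boiled down to roughly these commands, run from the packetbeat source directory (a sketch of what I tried, not a working recipe):

# Cross-compile attempt on the amd64 machine: fails with
# "build constraints exclude all Go files", presumably because of the cgo/libpcap code
GOOS=linux GOARCH=arm64 go build

# Native build attempt on the Pi 3 itself
sudo apt-get install libpcap-dev       # provides pcap.h, avoids the earlier compile error
GOMAXPROCS=1 GOGC=70 go build -v -x    # still ends in "cannot allocate memory" from gcc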

But … there is good news ahead! A few days ago, the new Raspberry Pi 4 Model B was released! With up to 4 GB of memory, the build will hopefully work. It also now has true gigabit LAN, which is not a bad idea for network sniffing either, when attaching it to a real network tap. So that is a clear buying plan for July!

ModSecurity Handbook

ModSecurity (from SpiderLabs) is probably the best-known open-source web application firewall (WAF). It was originally (and still is) a module for the Apache web server, but in the meantime it is also available as a module for Nginx (nginx-modsecurity) and IIS, and in other integrations. I came into contact with ModSecurity in the context of Nginx.

The second important project in conjunction with ModSecurity is the OWASP Core Rule Set (CRS), a set of ModSecurity rules for a WAF. You meet these two in many unexpected places: the Azure Application Gateway, for example, is based on Nginx with the CRS, and the Kubernetes Nginx ingress controller ships with the CRS and the ModSecurity WAF module included.
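
A quick way to see the pair in action is to send an obviously malicious request to an endpoint that has ModSecurity and the CRS in front of it (a sketch, assuming a local Nginx with the WAF enabled; the URL and parameter are made up):

# A benign request should pass through to the backend
curl -i "http://localhost/"

# A request with a blatant SQL injection pattern should typically be rejected (403) by the CRS
curl -i "http://localhost/?id=1%20OR%201=1"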

This is why I recently bought the book “ModSecurity Handbook” from Feisty Duck, by the authors Christian Folini and Ivan Ristić (see https://www.feistyduck.com/books/modsecurity-handbook/).

This is really “THE” book on ModSecurity from its authors, the bible so to say, and it goes into the depths of writing rules yourself. It is not an explanation of the CRS; for that there are no books, you have to read the rules in the GitHub repository. This book does prepare you to do that, something that looks daunting at first if you have ever looked at the CRS ruleset without preparation.

That is probably one of the most important learnings one gets from the book, because I don't know yet whether I will write my own ModSecurity rulesets. The second interesting insight was the set of use cases that you can cover with a WAF. The book has a long chapter on this topic and delves into detailed implementation ideas for use cases such as:

  • IP address tracking and blacklisting
  • Session tracking, blocking, forced renegotiation and restricting session lifetime as well as detecting session hijacking (a well-known attack technique)
  • Brute force attack detection
  • Denial of service (DoS) detection
  • Periodic security testing and alerting
  • User tracking
  • Whitelisting of application operations
  • File inspection
  • Dynamic patching of application vulnerabilities and exploits

The idea that Nginx logs are an important source of security and audit logs for a SIEM is certainly not surprising. But being able to actively detect certain vulnerabilities at runtime and constantly report them as security alerts is interesting. Think about missing security headers or wrongly configured Content Security Policies (CSP) in HTTP. Instead of detecting them during vulnerability assessments or penetration tests, such inspection can happen during operations and thus cover 100% of all operations.

Also, having a tool to quickly mitigate a vulnerability before the development team can come up with a fix and a new release for a backend sounds interesting. You can even inject content into a response, e.g. JavaScript. I just have some doubts about the complexity of introducing new rules and such mitigations quickly in the environments where ModSecurity and the CRS are realistically found these days, such as a Kubernetes ingress or a cloud WAF. An Azure Application Gateway, for example, does not expose the full functionality of ModSecurity directly but hides most of the jewels behind its own configuration portal.

At the end, the book contains an extensive reference part with explanations of all the directives, variables, operators and actions of the ModSecurity rule language. This way the book serves well when you actually need to develop rules in practice, beyond what the Internet provides as reference resources.

Creating your own rulesets is quite complex in my opinion, but it is a tool in your defensive toolset. At least using the CRS with a WAF out of the box, with just slight tuning such as disabling rules that produce false positives or unneeded rulesets, should be possible. That is anyhow the only thing that environments such as the Azure Application Gateway allow you to do; going beyond that needs a good reason. The disadvantage is that such configuration is decoupled from the protected service or application, and if we can fix a vulnerability there quickly, that is certainly the preferable option before we turn to custom ModSecurity rules. In times of continuous deployment that should be fast enough to avoid dynamic patching. For old applications where there is no team maintaining security anymore, or that have half-year release cycles, it is still a valid option in the security control portfolio.

I'm curious whether I will actually use it some time or not.

Yours, Peter

Ramen (ラーメン)

I had heard the term “ramen” before, and about the hype around ramen shops all over the world, but never paid attention to it. Then, a couple of weeks ago, when dining out with my girlfriend at Karl's kitchen in Breuninger Stuttgart, they offered ramen soup. Always open to sumptuous experiments, I tried it … and was positively surprised. It tasted delicious.

What followed, that's my style, was thorough research about ramen: where does it come from and how do you prepare it yourself? I quickly found some basic recipes for the Japanese fast food and set out to get practical.

First of all, there are three steps to making a ramen soup:

  1. Base broth
  2. Spice broth
  3. Soup with toppings

When you watch recipes on YouTube, you see that, for the sake of efficiency, the distinction between base and spice broth is often ignored and only one broth is prepared, according to local habits. Here we want to stick to the original as closely as we can.

Base Broth

First, note that ramen is eaten all over Japan, a country that stretches from the subtropical south of Okinawa up to the frozen north of Hokkaido. So it is natural that the basic style differs by region and is adapted to what's available there. This is why one distinguishes different ramen types:

  • Shōyu ramen (醤油, “soy sauce”) with soy sauce
  • Shio ramen (塩, “salt”) based on fish and seafood
  • Miso ramen (味噌) based on fermented soy beans (miso paste)
  • Karē ramen (カレー, “curry”) with curry
  • Tonkotsu ramen (豚骨, “pork bone”) based on pork meat and bones

You can find quite authentic recipes on Lecker (tonkotsu ramen) and Chefkoch. For my first attempt I did indeed follow the basic recipe with pork meat and bones, as that's what I could get. Getting pork bones is actually rare, as they do not keep very well and you normally use beef bones rather than pork bones for cooking sauces or broth. So it was a lucky coincidence.

So I used:

  • Pork bones with meat
  • Mixed vegetables for soup (mirepoix)
  • Garlic
  • Chicken wings
  • Kombu seaweed – hard to get because it is obviously expensive

For detailed preparation see the recipes; I had to substitute wakame for the kombu. You cook it for two hours. The broth can be frozen; take care not to fill the bottles or other containers too full, so that they don't burst.

Spice Broth

The spice broth is quickly made from:

  • Soy sauce
  • Bonito flakes (dried, thinly sliced bonito, a kind of tuna) – you can buy them in the local Asian shop
  • The meat from the base broth

The spice broth does not have to be cooked long in advance; it can be made when needed, as it does not take much time. The spice broth is added to the base broth to create the soup's broth.

Soup and Toppings

Now you can start creating the soup itself. With ramen it's like pizza: you can add what you like, if you are not following some traditional recipe. Here are some ideas:

  • Sautéed mushrooms are always good: shiitake, enoki or other Asian mushrooms are a must-have
  • Spring onions, I like them sautéed as well
  • Pak choi, again briefly sautéed
  • Sprouts, sautéed
  • Meat or fish from the broth, or shrimp
  • Roasted vegetables like corn or thinly sliced carrots
  • Boiled eggs
  • Pumpkin
  • Sesame paste

and of course noodles, either ramen noodles or other Asian noodles like soba or udon. Ramen noodles are made of wheat, soba of buckwheat. Arrange everything neatly, with the egg halves on top, and the ramen soup is ready. It's not really fast to prepare, with all the toppings to roast and the hours of broth cooking, but if you prepare well you can re-use the frozen broth, and then it's not too much effort.

Delicious and good for a whole meal, enjoy!

And next time another variant …

Peter

Produkte digital-first denken

Barbara Hoisl is a freelance business and strategy consultant and a long-time friend from the old days when I worked at Hewlett-Packard (HP OpenView Software, a division that no longer exists in that form). Barbara is a, well, visionary expert in software product management, startup financing and the software business. I had the positive experience of having Barbara as my boss for a short time back at HP.

Last year Barbara actually wrote a book of her own, “Produkte digital-first denken” (“thinking products digital-first”), in German. I count myself lucky to be among those who received a (free) copy of her new book at the beginning of the year. So I wanted to report here on how the book turned out and what I learned from it.

At first you wonder: does anyone still need to write a German book on the topic of digitalization? But I have noticed at work more than once that one quickly forgets that I worked for an American company for years and that using English as the everyday working language has become second nature to me, while for many people who do not come from the software industry it still rather poses a problem. And her book is clearly aimed at German medium-sized companies, where German is still the language of the trade. Until a few years ago that was also the case at my employer (Bosch).

That is already one of the interesting points about why this book fills a gap in the portfolio of books on digitalization: it is really written for the people who have to carry out digitalization and the introduction of software products, IoT and IIoT in order to become fit for the future. And Barbara has hit the writing style appropriate for this target group in a fascinating way. On the one hand there are the many English expressions that are completely natural for us software people but are hollow phrases for the target audience. Behind these “phrases”, however, lie essential concepts of the software world, the very concepts that have made today's big IT players (GAFA = Google, Apple, Facebook, Amazon) successful and that, if the established manufacturing companies in Germany do not adapt, will endanger their business in the future as well. That is, if they do not take digitalization and the introduction of software products seriously.

And that is exactly what Barbara explains in understandable words: phrases like “software is eating the world”, the “winner takes all” effect in platform business models, “think big, start small” and the “sell the future” strategy. The interesting thing is that even I, who have already dealt intensively with software platform business models and take all these principles of the software world as given and obvious, can still learn something here. You become even more aware of the differences between the successful German manufacturing companies and the (mostly American) IT companies, and you recognize the need for action to re-create products digitally.

Bosch is such a company, with hundreds of manufacturing plants and incredible knowledge about production and logistics, and a company that is clearly on its way to becoming a software company. My business unit, “Bosch Connected Industry”, is right at the front of this. But I have also noticed, at trade fairs and in conversations, that this by no means applies to the bulk of the smaller medium-sized companies, especially in Baden-Württemberg. Yet many of today's world market leaders in hundreds of technical niche markets are located here. And it is exactly these companies that the knowledge about what the buzzwords really mean has to reach. Barbara's book is unique in hopefully being able to achieve exactly that.

What fascinated me was becoming aware again, through the presentation in the book, of how important it is to acquire (to learn?) the right way of thinking (a “digital mindset”): to understand how the big new innovative IT players think compared to the traditional, established but slow companies. Along the way Barbara explains many models, such as the product lifecycle, Moore's Law and exponential growth, the three horizons of innovation, the innovator's dilemma, the Ten Types of Innovation and the 6D model. The last two, for example, were new even to me, and I immediately got hold of the corresponding literature.

The nice thing about her book is that she always explains the abstract models with practical examples from B2B and B2C markets. Bosch Software Innovations (my first business unit at Bosch) appears in it as well, by the way (sic!). My favorite example is Tesla, where there was still something for me to learn, too.

Finally, at the end of the book she gives some recommendations on how to organize the transition to a company that thinks “digital-first”. Not that every company would approach it that way, and you can see the problems this creates during implementation in your own part of the company. All in all an enriching book that I can warmly recommend to everyone who is, or should be, active in the IIoT space, and that means all traditional manufacturing companies.

Many interesting insights while reading!

Peter

Bulletproof SSL and TLS

As I'm currently involved with a lot of OpenSSL automation at work, I bought the book “Bulletproof SSL and TLS” by Ivan Ristić. See the book's site at https://www.feistyduck.com/books/bulletproof-ssl-and-tls/. Attention: it looks like only the 2014 edition is available on Amazon, while Ivan's blog (https://blog.ivanristic.com/2017/07/announcing-bulletproof-ssl-and-tls-2017-revision.html), which I found out about only after the purchase of course, mentions a 2017 revision. Even though the edition I read was a bit dated, I learned a lot, despite having worked with OpenSSL before.

There is a difference between being able to generate and sign some certificates and knowing the history, the vulnerabilities and the mechanisms of the protocol itself. This book is definitely the “bible” of TLS, from the founder of (Qualys) SSL Labs with its famous SSL Server Test tool (by the way, also available as a standalone tool: ssllabs-scan on GitHub). So there is quite some expertise and mastery behind this book.

What can one learn from the book? Well, first a thorough foundation and the insight, or maybe reminder, that TLS is not just encryption but also certificate-based authentication, and it provides integrity and session management. So it is a bundle of security functionality that can be used not only for HTTPS but also for any other protocol that you can run over TCP. There are many articles about TLS port forwarding, but with the book I have finally understood the differences.

By the way, there is also a GitHub repository for the book that contains, among other resources, configuration files for setting up your own root CA for self-signed certificates. That is a task that I'm involved in right now, so this is very handy for verifying my configuration taken from other sources on the web. Clearly, for public customer- or browser-facing endpoints one will always have to use purchased certificates from a public CA. But in the innards of a system, behind a reverse proxy, or from the application backend to an infrastructure service such as RabbitMQ or a database, well-configured self-signed certificates serve their purpose well. And you save money and have full control over expiration times and whatnot.
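
As a minimal sketch of what such a setup boils down to (file names, subjects and validity periods are my own assumptions, not the book's configuration), creating your own root CA and signing an internal service certificate with plain openssl looks roughly like this:

# Create the root CA key and a self-signed root certificate
openssl genrsa -out root-ca.key 4096
openssl req -x509 -new -key root-ca.key -sha256 -days 3650 -subj "/CN=Internal Root CA" -out root-ca.crt

# Create a key and a certificate signing request for an internal service
openssl genrsa -out rabbitmq.key 2048
openssl req -new -key rabbitmq.key -subj "/CN=rabbitmq.internal" -out rabbitmq.csr

# Sign the service certificate with the root CA
openssl x509 -req -in rabbitmq.csr -CA root-ca.crt -CAkey root-ca.key -CAcreateserial -sha256 -days 365 -out rabbitmq.crt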

Especially interesting for me were the details on OCSP and OCSP stapling and all the other initiatives that exist. Certainly a topic one would like to explore at work to get an additional grain of security into cloud-hosted services in particular. Another concept that was covered was the different ways of pinning and what pinning really means. It is not such an esoteric concept that nobody uses it, anyhow.

What I found especially helpful was, beyond some openssl command-line examples, an in-depth chapter on configuring Nginx with TLS, something that I happen to be doing at work right now, too. What a coincidence. It complements the Nginx TLS documentation well, which is more reference than tutorial, especially on securing the downstream connection to backend services in a reverse proxy scenario.
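
To round this off, such endpoints can be checked directly from the openssl command line instead of a browser (a small sketch; host names, ports and the CA file are assumptions for illustration):

# Inspect the certificate chain and negotiated protocol of the reverse proxy
openssl s_client -connect proxy.internal:443 -servername proxy.internal < /dev/null

# Check the downstream backend directly, trusting our own root CA
openssl s_client -connect rabbitmq.internal:5671 -CAfile root-ca.crt < /dev/null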

Well, this is a thick book and it took a while to get through, but it was worth it, and I now feel much better prepared for practical work with TLS, openssl and juggling certificates.

Yours, Peter

Books for Learning

Books, printed books: despite the prediction that e-books would kill them in the age of the Internet, they are still there, sold in the millions by Amazon and the like. But I observe that they are nevertheless on the decline. Not because everyone reads them only in electronic form; for that the smartphone is simply not convenient, and you need a tablet or iPad to enjoy reading e-books, IMHO. No, it's because books aren't used for learning anymore.

When I look around me at work, nobody other than me has books beside their desk or seems to read any book for work. Maybe they only read fiction for pleasure, which is fine as well, but even that you see less and less on public transportation. What I see is that people don't use books for learning but other media instead. They watch videos, listen to podcasts or read articles on the web as they need them. You have an issue, you search and find some resource that tells you how to get ahead. On-demand learning, so to say.

A book is something longer-term: you buy it, it sits there waiting for you, and you spend weeks, if not months, digesting it in one piece if it is a good one. That takes time, effort and persistence. That's not like a two-page article or a 20-minute video that comes right to the point. I guess this is really the point: these other Internet-based media are easier to digest and solve the learning problem you have right at this moment.

But this type of learning is a shallow one. You don't really learn the fundamentals of the technology or topic; you learn how to solve exactly this one problem, and the next day you are as dumb as you were before. This is why people always say “I have no idea”. I really avoid this phrase like hell. I want to have a clue, an idea about the topics I speak about, otherwise I keep my mouth shut.

And for this you need deep knowledge, expert knowledge. Books are usually written by experts, at least if it's a good book. They build up the topic from the ground up, systematically consolidate the matter using examples, and give you reasons and arguments. At the end you are maybe not an expert yourself, since experience is missing, but at least you have the feeling that you have profound knowledge to start from.

There is the “cone of learning” model, from Edgar Dale I think, which ranks how good different media are for learning. The book does pretty badly in this model: it is passive learning, and you remember only small parts of what you read, whereas a video or podcast is remembered much better. And that is probably right in general; how long do you remember what you read in a book a year ago? Nevertheless the depth of a book is different from that of other media, and I would say it needs to stay in the learning mix these days too, electronic or not.

There is one more aspect that I reflected on. Writing a book is likewise much more effort and takes more time than creating a podcast or video or writing an article (like this one, lol). So there is a lot of preparation going into writing a (good) book. A good friend of mine, Barbara Hoisl, wrote a book last year. Content-wise that is a topic for its own article, but she worked on the book alone for more than a year, leaving aside the time for thinking about it and preparing the steps to get started at all. And this is not only time, it is reflection and thinking time; a book includes this reflection and thinking of months and years. It is maybe bold to talk about wisdom, but a book certainly captures more wisdom than other media. And this is why one should read books in addition to consuming Internet-based media.

Well, think about it, yours

Peter