OWASP SecurityRAT

SecurityRAT is an OWASP open-source project (on GitHub); the RAT stands for Requirements Automation Tool. Currently version 1 (1.7.8 at the time of writing) is the production version, but a version 2 with a new architecture is in the making. SecurityRAT is based on JHipster, a Java rapid application development framework. Version 1 is a classic application; version 2 will be based on JHipster micro-services.

SecurityRAT is used to create a set of security requirements for a supporting asset, the main part of a security concept. In the end it is nothing more than a replacement for MS Excel: a way to get a filtered list of requirements, with status and details provided by the development team on how each requirement is implemented in the asset. That status and those details are the important part, because having a list of requirements is fine, but without knowing whether and how they have been implemented, it doesn't help much. Spending time on the details is important; they should contain additional information on the implementation, such as a link to the Jira or TFS issue, a link to a wiki page with implementation details, or proof of implementation.

The cool thing is that you can define your own fields and values as long as they fit into the general idea of SecurityRAT. The central concept is a list of requirement skeletons, classified by categories and tags, that have columns and belong to a project type. The requirements in the database are skeletons or templates, from which requirement instances for a given supporting asset are created at runtime. Those requirement instances are never actually stored in the database; they exist only in memory on the client (browser) side. Once you have understood this, you have essentially understood how the tool works: you pour a pre-classified list of requirement templates into a database and instantiate them for a supporting asset at runtime, with the additional benefit of filtering the list down to the relevant requirements using category questions and tag selection.

We use SecurityRAT as an expert tool. This means that not all developers work with the tool every day; only selected security lead experts do. SecurityRAT spits out an Excel document with the requirement instances together with the up-to-date status and comment (optional columns). This is what other people work with: you put the Excel into the wiki for documentation, generate sub-tasks in Jira, and so on. SecurityRAT can also create stories directly and sync them bi-directionally, which would be really cool. Unfortunately this doesn't work at my place; don't ask why, it's a sad big-enterprise problem.

But working with Excel, or better CSV files, has some advantages, too. You can easily convert them to wiki markup or generate Jira task language from them with a little script. I use Groovy for that, but that's a matter of personal taste.
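
As an illustration, here is a minimal Groovy sketch of such a conversion to Jira/Confluence wiki table markup; the file name and the ';' separator are assumptions about the export, not something SecurityRAT prescribes:

// minimal sketch: turn a CSV export into Jira/Confluence wiki table markup
// (file name and separator are assumptions, adjust to your export)
def rows = new File('requirements.csv').readLines().collect { it.split(';', -1) as List }
println '|| ' + rows.head().join(' || ') + ' ||'          // header row
rows.tail().each { println '| ' + it.join(' | ') + ' |' }  // data rows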

SecurityRAT comes out of the box with an SQL dump for the OWASP ASVS (Application Security Verification Standard) requirements catalog. In the meantime, at work we also have version 4.1 and many other catalogs that we pour into SecurityRAT instances. Version 1 effectively requires one RAT instance per catalog, although you can of course put multiple catalogs that have the same structure into one big instance, e.g. the CIS Benchmarks. That ends up in a big list of instances, for example:

  • OWASP ASVS
  • IEC 62443
  • DIN SPEC 27072
  • Corporate security requirement lists
  • CIS Benchmarks (you need to be a member to get the XLS files)
  • Our own catalogs, e.g. for RabbitMQ

Apart from actually filling in security requirements, SecurityRAT can be nicely (mis-)used for other tasks. Among other things, I use it for:

  • Vulnerability assessments according to the OWASP Testing Guide (OTG). Excellent: you set the status to passed/failed, fill in the findings and get a nice Excel
  • Security maturity assessments according to e.g. BSIMM or OWASP SAMM: answer the questions, pre-filtered by the level to be achieved, and get a nice Excel
  • Threat modelling using STRIDE – that stretches the idea a bit, but it works when you have a list of threat skeletons instead of requirements.

Using SecurityRAT for status tracking with the OTG or ASVS is a good example where it makes sense to fill multiple testing or requirement guides into one instance, e.g. for web services, mobile and IoT. Then make a category or project type for each of them (one supporting asset can be either a web service, a mobile app or an IoT device). This way the user implicitly selects the type of asset in the initial "question", aka the collection instance list.

In the end you could do a lot of this with Excel, but IMHO SecurityRAT gives you the following advantages:

  • It’s not a document that rots somewhere on a share but a server with a nice web-based interface
  • The definable collection categories allow you to pre-filter the requirements at the beginning using customizable questions. Yes, you could filter in Excel, but you don't have this very usable two-step process that helps in practice. Using tags you can also filter after the requirements list has been generated.
  • You can save the working results in a YAML file, load it again and continue, e.g. by also adding custom requirements. So everything stays in one place.

Being a server-and-database solution, filling an instance with data can in principle be done via the UI, but you will quickly drop that idea for larger catalogs, even though the batch operations are really handy. Instead, use (again Groovy or other) scripts to convert a CSV source into SQL statements, or insert the data directly into the DB, which is a bit more work. Unfortunately the entities in SecurityRAT do not have surrogate keys, so your script needs to manage the uniqueness of the database IDs by itself, which is sometimes, well, a mess.
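
A hedged sketch of what such a conversion script can look like; the table and column names are assumptions, so check the schema of your SecurityRAT version, and the starting ID is arbitrary since uniqueness is your problem:

// minimal sketch: generate INSERT statements for requirement skeletons from a CSV catalog
// (table/column names and CSV layout are assumptions; ID uniqueness must be managed manually)
def id = 1000
new File('catalog.csv').readLines().drop(1).each { line ->
    def cols = line.split(';', -1).collect { it.replace("'", "''") }  // naive SQL escaping
    println "INSERT INTO REQUIREMENT_SKELETON (id, short_name, description) " +
            "VALUES (${id++}, '${cols[0]}', '${cols[1]}');"
}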

A bit of a downside of the tool is the lack of the calculations and status-field colors that you have in a spreadsheet. For risk assessments, for example, it would be cool to calculate a risk factor from likelihood and impact automatically, but that is not possible.

By the way, SecurityRAT runs with Docker out of the box; using MySQL, or for license reasons MariaDB, is no problem. Problems manifest themselves as endlessly long Spring Java exceptions that require a bit of digging. We run everything with docker-compose, back up with mysqldump via docker exec, and put a nice landing page in front of the different instances.
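
For illustration, a backup of one instance's database can look roughly like this; the container name, database name and credentials handling are assumptions from our setup, not SecurityRAT defaults:

# dump the MariaDB database of one SecurityRAT instance from its container
docker exec securityrat-asvs-db sh -c \
    'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" securityrat' > backup/securityrat-asvs.sql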

Overall SecurityRAT, thumbs up!

The Practice of Network Security Monitoring

I have read the second book by Richard Bejtlich within a short time: "The Practice of Network Security Monitoring". This one is a bit newer, from 2014, though not totally up to date. The practical part of the book is based on the Security Onion (SO) distribution, and unfortunately a lot has happened with SO in the meantime. The book is still based on ELSA as part of SO, which has since been replaced by the Elastic stack. So the installation part could be skipped, also because I have already performed an SO installation at home several times.

Just as in the Tao of Network Security Monitoring, a lot of space is dedicated to various, by now well-known, sniffing tools such as Wireshark, Bro, Argus, Sguil, Squert, Snorby etc. In the last third, however, these tools are applied to various real-life scenarios such as binary extraction with Bro and detecting server- and client-side intrusions, which I found especially helpful.

Security Onion is definitely the first choice for a real NSM setup, with Sguil as the real-time NSM console. For a home NSM, an Elastic-stack-based setup geared towards historical analysis will probably be more useful, as I will not constantly be watching an NSM console all day long :-). The problem is that SO is a big system, unfortunately a bit too heavy for the old laptop that I can dedicate to the NSM server part at the moment. Therefore I switched to SELKS, also an NSM distribution, from Stamus Networks, likewise based on the Elastic stack but a bit more lightweight. ELSA, based on syslog-ng, doesn't fit well anymore when you want to use filebeat/packetbeat as log shippers.

BSides Stuttgart 2019

This post is a bit delayed: on the weekend of the 25th and 26th of May, the first BSides Stuttgart took place at the Wizemann location. I was lucky to have been there, because while I was monitoring the site in the months and weeks before, no program was published and there was no way to buy tickets. When I looked again two weeks before the event, it was already sold out. As this was, as you can see, a Bosch-organized event, I still managed to get listed as a guest, thanks to a dear colleague from Bosch CC.

Security BSides conferences were originally a way to give a platform to those whose presentations had been rejected by large conferences like DEF CON or Black Hat, but by now it is a world-wide grass-roots DIY conference format. And the content is not second class in any way, quite the contrary, as this event demonstrated!

BSides Stuttgart, the first of its kind in Stuttgart, took place in 2019 at the Wizemann co-working space, a former industrial facility. Same place as an earlier digitalisation hackathon from Bosch, just smaller. Great atmosphere, and well prepared by the CC security people from Bosch.

Co-organized by the ASRG (Automotive Security Research Group) and hosted in Stuttgart, the event had a pretty strong automotive orientation. But there was also a general track with interesting presentations on cyber security in general. As you can imagine, this was the track I mostly followed.

Many colleagues from Bosch PSIRT and CERT and other (automotive) Bosch business units attended the conference, together with people from other companies such as Daimler.

These are the noteworthy talks I attended on day 1:

  • How does ASCII and Unicode affect our Security
    Very interesting presentation on how Unicode and Punycode tricks can be used for DNS squatting and can open the door to vulnerabilities such as buffer overflows
  • Elastic Stack for Security Monitoring in a Nutshell
    Workshop on using ELK and Beats to build a SIEM more powerful than commercial products
  • OpSec++ the FastTrack
    Security testing using OSSTMM methodology
  • Cyber Threat Intelligence for Enterprise IT and Products
    A presentation from Wagner Thomas Daniel (Bosch PSIRT) on a concept for product CTI
  • Weaponizing Layer 8
    How not to treat users as the "dumbest assumable user" (DAU) but to involve them in building a security culture in the organization.
  • Introduction to Osquery
    Very interesting workshop on osquery, a service that exposes system information such as processes, filesystems, etc. via an SQLite-compatible SQL interface (see the sample query after this list). It also works with Docker (as a companion to Sysdig?) and emits logs.
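
To give an idea of that SQL interface, a query like the following, run for example in the interactive osqueryi shell, lists listening ports together with the owning process:

-- example: which processes are listening on which ports?
SELECT p.name, p.pid, l.port
FROM listening_ports l JOIN processes p USING (pid);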

On the second day, the sunny Sunday, I listened to the following presentations:

  • What to log? so many events, so little time
    On a tool from a Microsoft speaker to catalog and filter the many events that the Windows OS produces, with a mapping to MITRE ATT&CK techniques. An interesting approach, also using Sigma to generate SIEM queries for the relevant events.
  • Security Onion
    Workshop on Security Onion, a Linux distribution specifically for security monitoring, forensics and incident response, just like Kali is for pentesting.
    It included some real-life examples of how an attack can be detected and handled based on network logs using the various tools bundled in the distribution.
  • NoSQL Means no Security?
    Insights into the security posture and evolution of MongoDB, Redis and Elasticsearch. This will potentially give us some ideas on hardening our NoSQL databases.
  • Scale your Auditing Events
    Again from Elastic but on the Linux auditd sub-system and how to process its audit events with Auditbeat and Elastic stack for security monitoring.

Slides have been published on the bsidesstuttgart GitLab site or are posted on the bsidesstuttgart Twitter account.

I learned about so many new tools and picked up new information, especially in the areas of network security and security monitoring, for getting OpSec started.

What's pretty sure is that BSides Stuttgart will continue next year, maybe growing and also giving you a chance to grep a seat. It's cool that we finally have a cheap and open security conference right here in Stuttgart; thanks to the organizers from Bosch for the great event! See you there next year, mark your calendar already for May 14-16, 2020!

The TAO of Network Security Monitoring

Wow, that was a thick book: "The Tao of Network Security Monitoring: Beyond Intrusion Detection" from the guru of NSM, Richard Bejtlich. This book is considered the bible of NSM. It is from 2004 and thus a bit dated; in particular, it is filled with tons and tons of tools, and you will find that some of them no longer exist or that their development stopped years ago. But the intention of the book is not to serve as a tool reference, but to show which kinds of tools are available and what they can be used for. So your brain needs to translate the examples to the tools we have available today, and anyhow in each category there are still enough candidates.

The storyline of the book basically follows the different types of network security monitoring data that one can capture, along with the tools that provide them (a small capture sketch follows after the list):

  • Full content data (packet capture, e.g. from tcpdump, Wireshark)
  • Packet headers
  • Session or flow data (e.g. from Argus, flow-tools)
  • Alert data (e.g. from Bro)
  • Statistical data
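
To make this concrete, here is a small sketch of how full content and session data could be collected today with two of the tools mentioned; the interface name and output paths are placeholders:

# full content data: capture complete packets on the monitoring interface
tcpdump -i eth0 -s 0 -w /data/full-content.pcap

# session/flow data: let Argus summarize flows on the same interface ...
argus -i eth0 -w /data/flows.argus
# ... and read the flow records back with the ra client
ra -r /data/flows.argus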

Bejtlich explains the use of these types of data and the corresponding tools using real-life samples of attacks. This is cool, although following the packet dumps without in-depth protocol knowledge of IP, UDP, TCP, DNS etc. is really a bit hard. Luckily he explains each one after the printed dump, so one can be a bit lazy. That is probably not a good idea, as you miss some of the learning, but fixing that would require a second round of reading.

But the real learning from this book is understanding what a well-configured NSM system, and especially stored session data, can really give you to detect all kinds of attacks, if you just watch closely enough. The interesting question still open for me is how to transfer this knowledge to cloud-based NSM: we have some packet capture abilities there, but how to make use of the rest of the tools in such an environment is left as an exercise.

Summary: definitely worth a read, although it could use an update once in a while.

IoT Hackers Handbook

Somewhere, I don't know where, I became aware of the book "IoT Hackers Handbook" by Aditya Gupta. Well, I bought it and read it. That didn't take very long, as the font size is a bit larger than normal. There are two reasons to do this: either you want to spare older readers their glasses (me?), or there isn't that much content but you still want it to look like an in-depth book on the topic.

It was indeed a bit different than expected. Not bad, but different, which also tells you something. I'm a software guy; looking into hardware-near topics like BLE sniffing is interesting but not my home turf, so to speak. But this book really starts with hardware hacking, after some introductory chapters on penetration testing IoT devices: UART communication, JTAG debugging. Then it moves slowly up towards software, via firmware hacking, mobile apps (Android) and software-defined radio (SDR), to ZigBee and BLE sniffing and packet replay. It doesn't get higher than that. That's OK, as these were topics I hadn't touched so far, except for BLE sniffing. Especially the SDR part was quite interesting and encouraged me to maybe dig a bit deeper into this topic. Understanding the communication of garage door openers etc. sounds interesting overall.

Don't get me wrong: for consumer IoT devices, this is all important stuff to understand, test and hack. But IoT is a bit more than hardware, firmware and communication, at least in my mind. IoT lives from software, and not just hardware-near software; that is what brings the value and the new business models to IoT. Sure, the book touches on mobile apps as an important part of an IoT solution, but there is all the cloud connectivity and the software stack on the IoT device that I find the interesting part, and that is not covered beyond ZigBee and BLE. So not bad, and helpful, but surprising regarding the direction of what IoT pentesting should look like, and maybe telling something about how IoT is still regarded today.

To be fair, the book does dig into some use cases of what you can do once you have access to a device and can manipulate it at will, which wasn't really difficult with the examples provided by the author: weather stations, door openers, garage openers, the usual smart light bulbs and beacons. I still learned a lot about tools and techniques for these low-end IoT devices and how easy it is to break them with just a bit of tool knowledge and some reading of specifications. And unfortunately you can transfer this experience to more complex "IoT" devices like PLCs in IIoT or gateways; just the specifications are a bit thicker and more complex. But the door is equally wide open for white hats as well as black hats.

HTTP Health Check for Docker

I just published a little tool called htcheck (the docker-health-go project) on GitHub: https://github.com/pklotz/docker-health-go . This being my first Go program, be kind to me, should there be better ways to implement it.

It is intended as a very simple HTTP health check client for use in Docker health checks. So what is the motivation for this program?

When using Docker directly or via docker-compose, you can and should define a health check, so that Docker knows whether the process it is running is doing well. There are a couple of libraries that provide HTTP health endpoints for Go, such as https://github.com/docker/go-healthcheck, and Java offers a corresponding framework with Spring Boot Actuator.

But on the client side, you still need to use curl or the outdated wget to perform the check. If you have ever checked which dependencies curl, and thus libcurl4, brings with it, you might wonder whether that ballast is worth it just to do a simple HTTP GET with an exit code: libcurl pulls OpenLDAP libraries and whatnot into the image. So this modest little project provides a special-purpose HTTP client to use for health checks in Docker or elsewhere, instead of throwing a general-purpose HTTP client at the job.

It supports making an HTTP GET request to a URL, reading a JSON document back and checking for a value in it using a jq-like path expression.

Simple sample usage in a Dockerfile:

COPY ./htcheck /usr/bin/

HEALTHCHECK --interval=5m --timeout=3s CMD htcheck -u http://localhost/ || exit 1

Sample usage for a Spring Boot Actuator health endpoint, which normally serves a JSON document of the form:

{
    "status" : "UP"
}

So using the JSON path feature, we can compare against an expected value:

COPY ./htcheck /usr/bin/

HEALTHCHECK --interval=5m --timeout=3s CMD htcheck -u http://localhost/health -p .status -v UP || exit 1

Licenses are checked and documented in the README. Thanks to the dependency projects https://github.com/savaki/jq and https://github.com/spf13/pflag that were used. Probably some features are still missing, but as a first shot it should serve. S'il vous plaît!

Dualcomm 10/100/1000Base-T Gigabit Ethernet Network TAP

As I have only an unmanaged Netgear FS116 switch at home, I don't have a SPAN port to do network sniffing on the home LAN. In the course of building up an NSM (network security monitoring) setup for my home network, I needed a way to tap the wired LAN. Therefore I looked at network taps, which tend to be extremely expensive for home use. Finally I found some recommendations and bought a Dualcomm 10/100/1000Base-T Gigabit Ethernet Network TAP. It's not cheap, but it is of better quality than a throwing star tap and offers full-duplex passive sniffing of network traffic at an affordable price.

Setup is absolutely seamless, as there is no setup: just put the tap between the home router and the switch in order to see all internal traffic coming from outside, connect a LAN cable to the sniffing Ethernet interface, and that's it. The little box is powered by USB, so just plug the USB cable into a monitor's USB port; that worked well enough.

Currently there are two options: either I use my RPi 3B's LAN interface as the sniffing interface and the RPi's WLAN as the management interface, or I attach the tap to an old laptop that I use as a monitoring collection station with the SELKS distribution on it. I use SELKS instead of Security Onion (SO) because the laptop is just too old and SO freezes on this hardware. SELKS also has the ELK stack and Suricata installed and runs decently. Not optimal performance, but for testing it works. Here, too, WLAN is the management interface, as the laptop also has only one wired LAN interface. The sniffing interfaces are not managed and don't get an IP address, as they are input only.
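
A minimal sketch of setting up such a sniffing interface; the interface name is a placeholder, and the point is that it is only brought up (optionally in promiscuous mode) without ever getting an IP address:

# bring the sniffing interface up without assigning an IP address
ip link set dev eth0 up
ip link set dev eth0 promisc on
# quick check that traffic from the tap is actually arriving
tcpdump -ni eth0 -c 20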

Long-term it could be interesting to replace the unmanaged switch with a managed one, so that the tap can be moved to any other place and the SPAN port of the managed switch can be used for e.g. the RPi. With the new RPi 4 Model B one gets true gigabit LAN, which should handle all the traffic that the switch provides in such a home setup without any problems.

The packet-foo blog contains probably the best article series on network packet capture and analysis, including network taps, that you can find.

Trying to build packetbeat for Raspberry PI (arm64)

After my previous article on building filebeat for the Raspberry Pi 3 B+ (arm64), I now wanted to get a binary for packetbeat, the second most interesting module of Elastic Beats. I tried the same approach of cross-compiling with GOARCH=arm64, but it fails, while a straight compile for amd64 works. It fails with a message that all Go files are excluded due to build constraints. The issue is that native C code (cgo) is involved and you cannot simply cross-compile this beat: Go disables cgo by default when cross-compiling, so the cgo-dependent files fall out due to build constraints. I searched posts and tried all options for two hours; it does not work.
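
For reference, a cgo-enabled cross-compile attempt would look roughly like the following; the cross-toolchain package name and the GOPATH-style checkout path are assumptions, and you would additionally need an arm64 build of libpcap visible to that toolchain, which is exactly where it gets messy:

# install an aarch64 cross compiler on the amd64 build host
sudo apt-get install gcc-aarch64-linux-gnu
# keep cgo enabled and point Go at the cross compiler
cd $GOPATH/src/github.com/elastic/beats/packetbeat
GOOS=linux GOARCH=arm64 CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc go build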

I then tried again on the Pi directly. The build runs, but a "go install" finally runs out of memory ("cannot allocate memory"). The problem is that the Pi 3 has only 1 GB of memory, and that does not seem to be enough. I tried all kinds of tricks, like setting GOMAXPROCS=1 and GOGC=70, but nyet. The problem also seems to be related to the C build using gcc: you need to install "libpcap-dev" via "apt-get install" to get the "pcap.h" header file, otherwise you get a compile error earlier. When using "go build -v -x" directly you get the "cannot allocate memory" message from gcc; when using "make" the build gets killed instead. This is strange, as there are reports from people who compile Kubernetes on an arm64 RPi 3B like mine; probably K8s does not contain native C parts like libpcap in packetbeat. So I finally gave up, because ...

But ... there is good news ahead! A few days ago the new Raspberry Pi 4 Model B was released! With up to 4 GB of memory, the build will hopefully work there. It also now has true gigabit LAN, which is not a bad idea for network sniffing either, when attaching it to a real network tap. So that is a clear buying plan for July!

Modsecurity Handbook

Modsecurity (from SpiderLabs) is probably the best-known open-source web application firewall (WAF). It was originally (and still is) a module for the Apache web server, but in the meantime it is also available as a module for Nginx (nginx-modsecurity) and IIS, and there are other integrations. I came into contact with modsecurity in the context of Nginx.

The second important project in conjunction with modsecurity is the OWASP Core Rule Set (CRS), a set of modsecurity rules for a WAF. You meet these two in many unexpected places: e.g. the Azure application gateway is based on Nginx with the CRS, and the Kubernetes Nginx ingress controller comes with the modsecurity WAF module and the CRS included.

This is why I recently bought the book "ModSecurity Handbook" from Feisty Duck, by the authors Christian Folini and Ivan Ristić (see https://www.feistyduck.com/books/modsecurity-handbook/).

This is really "THE" book on modsecurity from its authors, the bible so to say, and it goes into the depths of writing rules yourself. It is not an explanation of the CRS; for that there are no books, you have to read the rules in the GitHub repository. But this book does prepare you to do that, something that looks daunting at first if you have ever looked at the CRS ruleset without preparation.
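
To give a flavor of the rule language, here is a small hedged example of a custom rule; the rule ID is arbitrary and should come from a range reserved for your own rules, and the rule itself is illustrative, not taken from the book or the CRS:

# deny and log requests to an admin path
SecRule REQUEST_URI "@contains /admin" \
    "id:1000001,phase:1,deny,status:403,log,msg:'Blocked access to admin path'"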

That is probably one of the most important learnings from the book, because I don't yet know whether I will ever write my own modsecurity rulesets. The second interesting insight was the range of use cases that you can cover with a WAF. The book has a long chapter on this topic and delves into detailed implementation ideas for use cases such as

  • IP address tracking and blacklisting
  • Session tracking, blocking, forced renegotiation and restricting session lifetime as well as detecting session hijacking (a well-known attack technique)
  • Brute force attack detection
  • Denial of service (DoS) detection
  • Periodic security testing and alerting
  • User tracking
  • Whitelisting of application operations
  • File inspection
  • Dynamic patching of application vulnerabilities or for exploits

The idea that Nginx logs are an important source of security and audit logs for a SIEM is certainly not surprising. But being able to actively detect certain vulnerabilities at runtime and constantly report them as security alerts is interesting. Think about missing security headers or wrongly configured Content Security Policies (CSP) in HTTP responses. Instead of detecting them during vulnerability assessments or penetration tests, such inspection can happen during operations and thus provide 100% coverage of all operations.
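
A hedged sketch of what such a runtime check could look like as a modsecurity rule, here alerting (without blocking) whenever a response goes out without a Content-Security-Policy header; the rule ID is again arbitrary:

# phase 3 runs on response headers; '&' counts how many matching headers exist
SecRule &RESPONSE_HEADERS:Content-Security-Policy "@eq 0" \
    "id:1000002,phase:3,pass,log,msg:'Response without Content-Security-Policy header'"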

Also, having a tool to quickly mitigate a vulnerability before the development team can come up with a fix and a new release for a backend sounds interesting. You can even inject content into a response, e.g. JavaScript. I just have some doubts about the complexity of introducing new rules and such mitigations quickly in the environments where modsecurity and the CRS are realistically found these days, such as a Kubernetes ingress or a managed WAF. An Azure application gateway, for example, does not expose the full functionality of modsecurity directly but hides most of its jewels behind its own configuration portal.

At the end, the book contains an extensive reference part explaining all the directives, variables, operators and actions of the modsecurity rule language. This way the book serves well when you actually need to develop rules in practice, beyond what the Internet provides as reference resources.

Creating your own rulesets has quite some hard complexities in my opinion, but it is a tool in your defensive toolset. At least using the CRS with a WAF out of the box, with just slight tuning such as disabling rules that produce false positives or unneeded rulesets, should be possible. That is anyhow the only thing that environments such as the Azure application gateway allow you to do; going beyond that needs a good reason. The disadvantage is really that such configuration is decoupled from the protected service or application, and if we can fix a vulnerability there quickly, that is certainly the preferable option before turning to custom modsecurity rules. In times of continuous deployment, that should be fast enough to avoid dynamic patching. For old applications where there is no team maintaining security anymore, or that have half-year release cycles, this is still a valid option in the security control portfolio.

I’m curious if I will finally use it one time or not.

Yours, Peter

Bulletproof SSL and TLS

As I'm currently involved with a lot of openssl automation at work, I bought the book "Bulletproof SSL and TLS" by Ivan Ristić. See the book's site at https://www.feistyduck.com/books/bulletproof-ssl-and-tls/. Attention: it looks like only the 2014 edition is available on Amazon, while Ivan's blog (https://blog.ivanristic.com/2017/07/announcing-bulletproof-ssl-and-tls-2017-revision.html), which I found only after the purchase of course, mentions a 2017 revision. Even though the edition I read was a bit dated, I learned a lot, despite having worked with openssl before.

There is a difference between being able to generate and sign some certificates and knowing the history, the vulnerabilities and the mechanisms of the protocol itself. This book is definitely the "bible" of TLS, written by the founder of (Qualys) SSL Labs with its famous SSL server test (by the way, also available as a standalone tool: ssllabs-scan on GitHub). So there is quite some expertise and mastery behind this book.

What can one learn from the book? Well, first a thorough foundation and the insight, or maybe reminder, that TLS is not just encryption: it also provides certificate-based authentication, integrity and session management. So it's a bundle of security functionality that can be used not only for HTTPS but also for any other protocol that you can run over TCP. There are many articles about TLS port forwarding, but with the book I have finally understood the differences.

By the way, there is also a GitHub repository for the book that contains, among other resources, configuration files for setting up your own root CA for self-signed certificates. That is a task I'm currently involved in, so this was very handy for verifying my configuration taken from other sources on the web. Clearly, for public customer- or browser-facing endpoints one will always have to use purchased certificates from a public CA. But in the innards of a system, behind a reverse proxy or from the application backend to an infrastructure service such as RabbitMQ or a DB, well-configured self-signed certificates serve their purpose well. And you save money and have full control over expiration times and whatnot.
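
For illustration, a very condensed sketch of such a private CA with plain openssl commands; the file names, subjects and validity periods are my own assumptions, and the book's repository uses proper openssl configuration files instead of one-liners:

# 1. create a self-signed root CA (key + certificate)
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -subj "/CN=Home Root CA" -keyout root-ca.key -out root-ca.crt

# 2. create a key and signing request for an internal service
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=rabbitmq.internal" -keyout service.key -out service.csr

# 3. sign the service certificate with the root CA
openssl x509 -req -in service.csr -CA root-ca.crt -CAkey root-ca.key \
    -CAcreateserial -days 825 -out service.crt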

Especially interesting for me were the details on OCSP and OCSP stapling and all the other initiatives that exist in this area. Certainly a topic one would like to explore at work to get an additional grain of security into cloud-hosted services in particular. Another concept that was covered were the different ways of pinning and what pinning really means; it is not such a sophisticated concept that nobody uses it, anyhow.
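
A quick way to check whether a server actually staples an OCSP response is the -status option of openssl s_client; the hostname is a placeholder:

# request the stapled OCSP response during the TLS handshake
openssl s_client -connect example.com:443 -status < /dev/null 2>/dev/null | grep -A 2 "OCSP"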

What I found especially helpful, beyond some openssl command-line examples, was an in-depth chapter on configuring Nginx with TLS, something that I also happen to be doing at work at the moment. What a coincidence. It complements the Nginx TLS documentation well, which is more reference than tutorial, especially regarding securing the downstream connection to backend services in a reverse proxy scenario.
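
As a flavor of what this looks like, here is a hedged minimal Nginx sketch with TLS termination and a TLS-verified downstream connection to a backend; the certificate paths, hostnames and protocol versions are my own assumptions, not taken from the book:

server {
    listen 443 ssl;
    server_name service.internal;

    # server-side TLS termination
    ssl_certificate     /etc/nginx/tls/service.crt;
    ssl_certificate_key /etc/nginx/tls/service.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        # re-encrypt towards the backend and verify its certificate against our own root CA
        proxy_pass https://backend.internal:8443;
        proxy_ssl_trusted_certificate /etc/nginx/tls/root-ca.crt;
        proxy_ssl_verify on;
    }
}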

Well, this is a thick book and it took a while to get through, but it was worth it, and I now feel much better prepared for practical work with TLS, openssl and juggling certificates.

Yours, Peter