Code Quality – Cyclomatic Complexity

In the ISO 26262-6:2011 standard [1], the term “complexity” appears a number of times, generally in the context of reducing or lowering it.

There are many different ways of defining “complexity”. For example, Fred Brooks, in his landmark 1986 paper “No Silver Bullet — Essence and Accidents of Software Engineering” [2], asserts that there are two types of complexity: Essential and Accidental.

Rather than getting into an esoteric discussion about design complexity, I’d like to focus on code complexity.

Over the years, I have found one metric to be the simplest and most consistent indicator of code quality – Cyclomatic Complexity. It is often also referred to as the “McCabe Metric”, after Tom McCabe’s original 1976 paper, “A Complexity Measure” [3].

It’s not perfect: it can be fooled, and the reported value can differ slightly from tool to tool (the original work analysed Fortran on the PDP-10). Given those caveats, though, it has major value in raising a red flag when code goes beyond certain thresholds. In addition, it is easily incorporated as a “quality gate” into a modern CI/CD (Continuous Integration/Continuous Delivery) build system.

Cyclomatic Complexity (CC)

Very simply, the metric is calculated by building a directed graph from the source code. It is calculated on a function-by-function basis and is not cumulative (i.e. the reported value is just for that function).

Given the following code:

void ef1(void);   /* external functions called from func() */
void ef2(void);
void ef3(void);

void func(int a, int b)
{
  ef1();
  if(a < 0) {     /* the single decision point */
    ef2();
  }
  ef3();
}

A directed graph of the code would look thus:

The complexity is then measured using the very simple formula (based on graph theory):

v(G) = e – n + 2p

where:

e = the number of edges in the graph
n = the number of nodes in the graph
p = the number of connected components

However, as we are analysing individual functions rather than a collection of connected graphs, the formula can be simplified to

v(G) = e – n + 2

as p = 1 for a single function (one connected component, with a single entry and exit).
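
For the example above (one possible way of drawing the graph; tools may construct it slightly differently) there are three nodes: the straight-line block ending at the if, the ef2() call, and the ef3() call leading to the exit. There are three edges: the true branch into ef2(), the false branch around it, and the edge from ef2() on to ef3(). That gives:

v(G) = 3 – 3 + 2 = 2

or, equivalently, one decision point (the single if) plus one.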

Running Lizard, an open-source Cyclomatic Complexity analyzer, over the example reports the expected result (shown as CCN):

$ docker run --rm -v $(pwd):/usr/project feabhas/alpine-lizard lizard
================================================
  NLOC    CCN   token  PARAM  length  location  
------------------------------------------------
       8      2     30      2       8 func@6-13@./main.c

In these examples I’m running Lizard from a Docker container (as this fits with our Jenkins build system). However, Lizard can be installed locally using pip install lizard if you have both Python and pip installed.
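
As a sketch of the “quality gate” idea (the threshold of 10 is just an example, and you should check the option and exit-status behaviour against your installed version of Lizard – my understanding is that the --CCN option sets the warning threshold and a non-zero exit status is returned when any function breaches it, which is enough to fail a CI stage):

$ docker run --rm -v $(pwd):/usr/project feabhas/alpine-lizard lizard --CCN 10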


Your handy cut-out-and-keep guide to std::forward and std::move

I love a good ‘quadrant’ diagram.  It brings me immense joy if I can encapsulate some wisdom, guideline or rule-of-thumb in a simple four-quadrant picture.

This time it’s the when-and-where of std::move and std::forward. In my experience, when programmers are first introduced to move semantics, their biggest struggle is knowing when (or when not) to apply std::move or std::forward. Usually, it’s a case of “keep applying std::move until it compiles”. I’ve been there myself.

To that end, I’ve put together a couple of simple overview quadrant graphics to help out the neophyte ‘mover-forwarder’. The aim is to capture some simple rules-of-thumb in an easy-to-digest format.

Disclaimer: these diagrams don’t address every move/forwarding use case. They’re not intended to. That’s why we have books, presentations and long rambling articles on the topic.
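
That said, here is a minimal sketch of my own (not one of the quadrant diagrams) covering the two most common cases: std::move for a by-value “sink” parameter you own and no longer need, and std::forward only on a forwarding reference, so the argument’s value category is preserved:

#include <string>
#include <utility>
#include <vector>

class Roster {
public:
  // Sink parameter taken by value: we own this copy, so move it into place.
  void set_title(std::string title) {
    title_ = std::move(title);
  }

  // Forwarding (universal) reference: the caller may pass an lvalue or an
  // rvalue, so forward it on to preserve its value category.
  template <typename T>
  void add(T&& name) {
    names_.emplace_back(std::forward<T>(name));
  }

private:
  std::string title_;
  std::vector<std::string> names_;
};

Passing a named std::string to add() copies it; passing a temporary (or something you have explicitly std::moved) moves it.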


Setting up Sublime Text to build your project

For many years embedded development was dominated by complex integrated development environments (IDEs) that hid away all the nasty, messy details of a typical embedded software project.

Recently, with the rapidly accelerating adoption of agile techniques in embedded systems, there has been a move away from integrated development environments towards smaller, simpler, individual tools. Tools like CMake, Rake and SCons are used to manage build configurations. Container facilities like Docker provide lightweight environments for build and test. And developers are free to use their code editor of choice (and let’s face it: the “best editor” is as close to a developer’s heart as the “one-true-brace-style”).

And on.  And on.

As the title suggests, in this article we’re going to look at how to integrate the Sublime Text editor with the SCons build tool, to make development a little more elegant and seamless.


“May Not Meet Developer Expectations” #77

Question:  Does the following compile?

int func()
{
  int (func);
  return func;
}


Bugs do matter…(unsurprisingly)

Welcome to 2018! How did that happen?

Thank you to everyone who attended last week’s webinar on “Measuring Software Quality”, and thank you for the positive feedback; it really does help us shape our future webinars and blogs.

During the talk, I discussed a suggestion by Sally Globe, along the lines of “Bugs don’t matter”/“Perfect software is the enemy of rapid deployment” as long as you are “Not wrong long”, which came from an exchange on Twitter late last year. The caveat was that this didn’t apply to safety-critical systems (which I think we can all agree with!). As I wasn’t at the original talk, some of the intent may be lost in translation.

However, as I see it, one of the major problems is that it assumes not only that we can “fix it fast” (the goal of Continuous Delivery), which should be applauded (especially in these times of IoT and security vulnerabilities), but also that someone notices the fault before it becomes a major problem (think zero-day vulnerabilities).

As a simple example: I happen to be a keen cyclist (yes, a fully paid-up member of the MAMIL clan and #bloodycyclist). This week sees the start of the 2018 cycling calendar with the Tour Down Under (TDU). In 2018 the televised highlights in the UK are on a relatively new channel, FreeSports (Freeview/BT/TalkTalk 95, Sky 424 and FreeSat 252), as opposed to the usual EuroSportUK. Woohoo… (if that’s your bag, of course).

But guess what? Come 1am, no cycling; anyone who recorded it got one hour of random sport (none of it cycling) and then a further hour of the TDU. One minor problem: when the transmission ended there was still 59km to go, so no one saw the finish (usually pretty important)! FreeSports’ excuse? Software, of course!

Okay, so not a major catastrophe in the big scheme of things, but if it was your project, would you be happy telling the powers-that-be “We weren’t wrong long”?

So, the concept of “Not wrong long”: I guess it can work if:

  1. You can react quickly AND someone notices QUICKLY, or
  2. You have a monopoly, you say sorry and can blame the software, and
  3. You’re not controlling elements in the “real” world

I am a big fan of automating the Test/Integration cycle using tools such as Jenkins and Docker (see previous posts). Assessing the potential for Continuous Delivery is a foundation of becoming more agile; but at the same time, we must always be wary of generalised and sweeping statements that come from people not working in the embedded space.


Exceptional fun!

In this article I want to look at some applications of one of C++’s more obscure mechanisms: the function try-block.
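
For anyone who hasn’t met the mechanism before, here is a minimal sketch of the syntax (my own illustration, not an example from the article): a constructor’s function try-block lets the handler observe exceptions thrown by the member initialisers as well as the constructor body, and the exception is implicitly re-thrown when the handler finishes.

#include <iostream>
#include <stdexcept>
#include <string>

class Connection {
public:
  explicit Connection(const std::string& uri)
  try : uri_{check(uri)} {
    // constructor body
  }
  catch (const std::exception& e) {
    std::cerr << "Connection failed: " << e.what() << '\n';
    // falling off the end of this handler re-throws the exception
  }

private:
  static std::string check(const std::string& uri) {
    if (uri.empty()) {
      throw std::invalid_argument{"empty URI"};
    }
    return uri;
  }
  std::string uri_;
};

int main() {
  try {
    Connection c{""};   // triggers the function try-block handler
  }
  catch (const std::exception&) {
    // the re-thrown exception arrives here
  }
}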


An Introduction to Docker for Embedded Developers – Part 4 Reducing Docker Image Size

In Part 3 we managed to build a Docker image containing the tools required to compile and link C/C++ code destined for our embedded Arm target system. However, we’ve paid little attention to the size of the image. Doing a quick Docker image listing, we can see it’s grown to a whopping 2.14GB:

$ docker image ls
REPOSITORY              TAG     IMAGE ID       CREATED             SIZE
feabhas/gcc-arm-scons   1.0     6187455a4bfe   8 days ago          2.14GB
gcc                     7.2     7d9419e269c3   2 months ago        1.64GB

In your day-to-day work the size of a Docker image may not bother you, as Docker caches images locally on your machine. But after a while you’ll certainly need to prune them.

Apart from freeing up disk space, why else look to reduce the size of an image?

Continuous-Integration (CI)

As previously mentioned, the overriding benefit of using a Docker-based build is consistency and repeatability of the build. But, for modern CI to be effective, we want the build/test cycle to be as quick as possible.

A locally provisioned build server (such as a Linux server running Jenkins) will also cache images after the first build, so image size is less of an issue there.

Cloud-based CI services (such as the previously mentioned Travis-CI, Bitbucket, GitLab, etc.) are typically costed on build-minutes for a given period, e.g. build-minutes per month. In these cases, pulling or building larger images naturally takes longer, and therefore costs more.

Generating Smaller Images

There is plenty of good guidance around, and I’m sure what I’m doing here can be improved upon. Our basic approach to minimising the image consists of these steps:

  1. Start with a minimal Base image
  2. Only install what we need
  3. Remove anything we only needed to help install what we needed!
  4. Reduce the number of Docker layers

Minimal Base Image

Our base image is gcc:7.2, which comes in at 1.64GB. To that we added Scons and the gcc-arm cross-compiler, but for cross-compilation we don’t require the host (x86) GCC compiler.

The most widely used minimal image is called Alpine. Alpine Linux is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox.
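
To give a flavour of where this is heading, here is a purely illustrative sketch (not the actual feabhas Dockerfile; the download URL and directory name are placeholders): start from Alpine, install only what is needed, and combine the download/extract/delete of any tarball into a single RUN so the archive never persists in a layer.

FROM alpine:3.7

# Fetch, unpack and delete the toolchain tarball in a single RUN (one layer),
# removing the download helpers again so nothing unnecessary persists
RUN apk add --no-cache --virtual .fetch-deps wget \
 && wget -q -O /tmp/toolchain.tar.bz2 <toolchain-download-url> \
 && tar -xjf /tmp/toolchain.tar.bz2 -C /opt \
 && rm /tmp/toolchain.tar.bz2 \
 && apk del .fetch-deps

ENV PATH="/opt/<toolchain-dir>/bin:${PATH}"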


An Introduction to Docker for Embedded Developers – Part 3 Cross-Compiling for Cortex-M

In the previous post we looked at defining a custom Dockerfile to which we can add specific tools (and their dependencies). From that we created a Docker image, which allowed us to build C/C++ code in a Docker container, ensuring a consistent build environment.

So far we have had to build all our code using the native GCC toolchain that is part of the base Docker image (gcc:7.2). However, I want to be able to build a binary image that I can download to, and run on, a target system (in our case an ARMv7-M, Cortex-M4, STM32F4-based system).

There are three stages required:

  1. Create a Docker container with the gcc-arm-embedded compiler installed
  2. Have a base “hello world” project for the board
  3. Create a custom Scons file for the cross-build (of course, make or CMake could be used here)

Pre-built GNU toolchain for Arm Cortex-M

The latest version of the pre-built GNU toolchain for Arm Cortex-M (and Cortex-R) is found here.

Normally we’d go through the process of selecting the correct package for our development machine (Windows/Mac/Linux), downloading it and installing it (setting up appropriate paths, etc.).

But with Docker we can create a container for this current version and use this to cross-compile our code.

Building on the Dockerfile from the previous post (gcc:7.2 + Scons), we need to do the following (a sketch of the resulting Dockerfile additions follows the list):

  1. Download the Linux tarball (tar archive) for the gcc-arm cross-compiler
  2. Extract the code from the tarball
  3. Remove the tarball file from the Docker image
  4. Set up the PATH to the compiler’s bin directory
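
In Dockerfile terms, something along these lines (a sketch only; the download URL and the extracted directory name are placeholders for whichever toolchain release you choose):

# Download, extract and then remove the gcc-arm-embedded tarball
RUN wget -q -O /tmp/gcc-arm-none-eabi.tar.bz2 <gcc-arm-embedded-download-url> \
 && tar -xjf /tmp/gcc-arm-none-eabi.tar.bz2 -C /usr/local \
 && rm /tmp/gcc-arm-none-eabi.tar.bz2

# Make arm-none-eabi-gcc and friends available on the path
ENV PATH="/usr/local/<gcc-arm-none-eabi-dir>/bin:${PATH}"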


An Introduction to Docker for Embedded Developers – Part 2 Building Images

In the initial post, we covered the basics of getting Docker set up and using an official base image for compilation.
But let’s suppose the base image doesn’t include all the facilities our company uses for development. For example, we have migrated from makefiles to CMake, but more lately we have taken to using the Python-based Scons build system for C/C++ projects.
The official gcc base image supports make, but not Scons or CMake. As before, we can search for a Scons Docker image, but we will find that no official image exists. We now have two choices:

  1. Use an unofficial (user) image
  2. Build our own

Unofficial Images

Using an unofficial image is a bit of a hit-or-miss affair. A good unofficial image will have a decent set of instructions about its use and how it was built. Unfortunately, most don’t have any information, and you can waste a substantial amount of time trawling through the repositories looking for a suitable image. In general, it may be better to build your own image from a base image.
It is worth noting that there are some gems out there in the user community, so it can still be worth a cursory browse.

Build your own image

How easy it is to build your own image will depend on your experience with a Linux package manager. If you are familiar with adding packages to a running Linux distribution (e.g. apt-get install), then you’ll find that creating an image is straightforward. If you’re not very experienced with this aspect of Linux, it’s worth spending a little time getting a foundational understanding of the Advanced Packaging Tool (APT).
It is important to understand that different Linux distributions (Ubuntu, Fedora, etc.) each have their own package manager. I still find this one of the most frustrating aspects of working with Linux (Ubuntu uses APT, Fedora uses yum). You will need to know the base Linux distribution, and thus the associated package manager, to build Docker images.

Basic workflow

There are two methods for building a Docker image:

  1. Build an image locally on your working machine
  2. Use a GitHub/Bitbucket repository to automatically build in the cloud

Of course, the choice depends on the intended use, the maturity of the image, and how widely it is going to be used. Initially, though, while getting to grips with Docker, it is better to use a local build.
For a local build, the steps couldn’t be simpler (a couple of example commands follow the list):

  1. Create a “Dockerfile” that specifies the image contents and behaviour
  2. Build the image (docker build)
  3. Use the image (docker run)
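
For example, assuming a Dockerfile in the current directory, and with an image name of your own choosing (the name and the final command here are just placeholders):

$ docker build -t feabhas/gcc-scons:1.0 .
$ docker run --rm -v $(pwd):/usr/project feabhas/gcc-scons:1.0 scons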

Dockerfile

Our Dockerfile specifies the following items (a sketch of such a file follows the list):

  1. The base image we want to work from
  2. Updating the base image
  3. The new packages/files that need installing
  4. Setting up working directories/path information
  5. Default behaviour when run
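
A sketch of what such a Dockerfile might look like (the scons package and the default command are illustrative; the packages and behaviour you need will differ):

# 1. The base image we want to work from
FROM gcc:7.2

# 2 & 3. Update the package index and install the extra tools we need
RUN apt-get update \
 && apt-get install -y --no-install-recommends scons \
 && rm -rf /var/lib/apt/lists/*

# 4. The working directory the container will start in
WORKDIR /usr/project

# 5. Default behaviour when the container is run
CMD ["scons"]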


An Introduction to Docker for Embedded Developers – Part 1 Getting Started

Docker is a relatively new technology, having only appeared just over four years ago. The core building blocks have always been part of Unix, but the significant supporting technology, Linux Containers (LXC), first appeared back in 2008.

Initially Docker was only supported on Linux, but the more recent native support for OSX (my development OS of choice) and Windows (albeit Windows 10 Pro) suddenly opens up some interesting workflow choices.

The “What”

So, first, what is Docker? I’m always trying to find the right words here that do Docker justice without over-simplifying the technology. A one-liner is:

“A lightweight Virtual Machine”

The danger of this over-simplified statement is the natural follow-on questions trying to compare Docker to, say, a hypervisor technology such as VirtualBox.

Another one-liner I try is:

“It’s like a Linux process with its own file system and network connections”

However, a fuller description is:

“Linux containers are self-contained execution environments—with their own, isolated CPU, memory, block I/O, and network resources—that share the kernel of the host operating system. The result is something that feels like a virtual machine, but sheds all the weight and startup overhead of a guest operating system.”

So, Docker allows me to wrap up a program and all its dependencies (e.g. Python tools, libraries, etc.) into a single, isolated executable environment. The wrapping up of the program and its dependencies is called a Docker image; when the image is executed it runs as a Docker container.
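
As a concrete (and purely illustrative) example, the official gcc image can be pulled and then used to compile a local hello.c without installing a toolchain on the host; the file and directory names here are just placeholders:

$ docker pull gcc:7.2
$ docker run --rm -v $(pwd):/usr/project -w /usr/project gcc:7.2 gcc -o hello hello.c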

The “Why”

Probably more important is the question “Why would I use Docker as an embedded developer?”. Most current Docker development is in the DevOps field of “microservices”, where Docker is being used to deploy applications. This is, currently, far removed from most embedded systems and I won’t address it here. [See Getting started with Docker on your Raspberry Pi if that floats your boat.]

How, then, can Docker help an Embedded developer?

Some of the key benefits are:

  • For anyone not developing on Linux, it opens up the world of Free Open Source Software (FOSS) tools that are quite often not available on other platforms (or are difficult to install)
  • It allows developers to use tools in their local development environment without having to install them (even if there is a build available).
  • It allows code to be checked against variants of toolchains without the struggle of tools co-existing
  • It ensures all team members are using exactly the same tools and build environment
  • It ensures my build server (e.g. Jenkins on Linux) is building against the same tools used in development and vice-versa.
  • It allows me to create a virtual TCP/IP network of separate applications on a single machine
  • It allows me to experiment with supporting technology without having to pollute my development machine (e.g. running a local nginx HTTP server as a target for my IoT development in a Docker container)

There are plenty more good reasons to look at Docker, which I’m sure you’ll start to see as you get used to it.
