Python 3 Type Hints

The expected end of support for Python 2.7 is 1st January 2020, at least according to Guido van Rossum’s blog post. Starting now, you should consider developing all new Python applications in Python 3, and migrating existing code to Python 3 as and when time and workload permit.

Moving to Python 3

If you are unaware of the changes introduced in Python 3 that broke backward compatibility with Python 2, there is a good summary on the What’s New In Python 3.0 web page.

The biggest difference you will notice moving to Python 3 is that the print statement is now a print function. But there are plenty of other changes that you should be aware of. This and subsequent blogs will look at aspects of Python that have been added or improved in Python 3.

Migrating existing Python 2 code normally starts with running the 2to3.py script (supplied since Python 2.7). This script automates the mechanical process of converting syntax: primarily changing all print statements to function calls, renaming deprecated functions and methods (such as xrange to range), and updating import statements to match the refactored standard library modules. Unless you have used deprecated or out-of-date functions or modules the converted code should now compile.

One downside to this approach is that it is all too easy to continue using Python 2 features you are familiar with when there are better alternatives available. Learning the features of Python 3 is a case of reading the What’s New page for each version of Python 3 as it is released, looking for additional features or changed APIs.

Use Type Hints for Static Type Checking

Type hints were introduced in Python 3.5 and have been refined in later versions. They are intended to provide static type support for Python during code development and are similar to the type checking capabilities you get from compilers with languages such as C/C++, Java and C#. You can add type hints to function/method parameters and return types (Python 3.5), and variables used in assignment (effectively declarations – Python 3.6).

While type hint information is included in the compiled byte code (the .pyc file), it is not used for run-time type checking, so it does not introduce a run-time overhead other than a small amount of extra memory usage.
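
A minimal sketch (the function here is illustrative, not from the article) showing that hints are stored as metadata but never enforced when the code runs:

def double(n: int) -> int:
    return n * 2

# No TypeError at run time: the hint is ignored and str supports '*'
print(double('ab'))             # prints 'abab'
# The hints are simply retained on the function object
print(double.__annotations__)   # {'n': <class 'int'>, 'return': <class 'int'>}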

To get the benefits of adding type hints to your code you will need to use an IDE that understands type hints or pass your code through a static type checker either manually or as part of a Continuous Integration (CI) workflow.

An IDE currently supporting type hint checking on method parameters is PyCharm from JetBrains (there is a free Community version). Neither of the other popular Python editors, Eclipse/PyDev and Spyder, supports type hints at this time.

Static analyzers supporting type hints are the experimental mypy and pytype (not available for Windows). The popular static analyzer Pylint does not yet support type hints.

PyCharm, mypy and pytype all use the Typeshed collection of library stubs defining type hints for the standard Python library and some third-party libraries.

If you like strongly typed languages you will love type hints because you can now annotate variables and parameters with their type and have a static checker highlight language misuse without imposing the overhead of a runtime check.

If you like code completion (code assist/intellisense) features like a popup list of methods for an object then type hints will give you a list targeted to the type of the object.

Type Hint Syntax

As a simple example of using type hints we’ll look at a function that takes a string parameter called text and returns an integer. Let’s say we use it to extract the integer value from a line such as “MAXSIZE=512”. Here is a function with a stubbed-out body.

def extract(text):
    return None

If you edit this code with PyCharm and start typing at the beginning of the function body by entering the characters text. the popup list of completion options will not normally include string methods – without a type hint PyCharm cannot know that text is a string, so you’ll see only generic suggestions.

We can now add the type hint str to the parameter to show that it expects a string object, and include an int return type as follows:

def extract(text: str) -> int:
    return None

The first thing you will see in PyCharm is the None return value highlighted as an error because the function does not return an integer – we will correct this later. If we again type the characters text. the popup will use the type hint to show a list of string methods, making it easier to develop the function body.

Refactoring the stub function to provide simple functionality (ignoring error handling for clarity) will remove the type warning on the return statement:

def extract(text: str) -> int:
    key, _, value = text.partition('=')
    return int(value)

Calling a function does not change when we use type hints:

maxsize = extract('MAXSIZE=512')

PyCharm and static analysis tools will identify that the maxsize variable references an integer and provide appropriate popup help and/or static type checking.
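
mypy provides a reveal_type() pseudo-function for inspecting exactly this kind of inference. It exists only while the checker runs – executing the file under plain Python would raise a NameError – so remove it after use. A quick sketch:

maxsize = extract('MAXSIZE=512')
reveal_type(maxsize)   # mypy reports: Revealed type is 'builtins.int'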

Since Python 3.6 (Dec 2016) we can also use type hints on the first assignment to a variable as in:

maxsize: int = extract('MAXSIZE=512')

Effectively type hints add variable declarations to the Python language: note that static checkers will complain if you annotate a variable with a type that conflicts with a value it has already been assigned.
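
A brief sketch of annotation-as-declaration (the names here are illustrative):

retries: int            # a declaration only - no value is bound yet
retries = 3             # checkers verify this assignment matches the hint
print(__annotations__)  # at module level: {'retries': <class 'int'>}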

This isn’t a type hints tutorial but it’s worth taking this example a little further to introduce
the typing module.

If we wanted our stub code to allow None as well as an integer we would wrap the return type inside typing.Optional – as an aside, such types are often called nullable, from the terminology introduced in C#.

Updating our function to permit a return of None we refactor the return type to Optional[int]:

from typing import Optional

def extract(text: str) -> Optional[int]:
    _, sep, value = text.partition('=')
    if not sep:        # partition returns '' for sep when there is no '='
        return None
    return int(value)

The typing module defines several additional type objects that can be used to capture most type constructs used on a regular basis in Python. Simple examples, pulled together in a short sketch after this list, are:

  • List[int] – a list of integers
  • Tuple[bool, float] – a tuple with two items: a boolean and a float
  • Dict[str, int] – a dictionary accessed using a string key and holding an integer
  • Dict[str, List[int]] – a dictionary with a string key holding a list of integers
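
A hedged sketch combining these constructs – the function name and data are hypothetical:

from typing import Dict, List, Tuple

def averages(scores: Dict[str, List[int]]) -> List[Tuple[str, float]]:
    # Pair each name with the mean of its list of integer scores
    return [(name, sum(marks) / len(marks)) for name, marks in scores.items()]

print(averages({'alice': [3, 4, 5]}))   # [('alice', 4.0)]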

Returning to our function example we can update the function to return an optional tuple containing the keyword and its integer value:

from typing import Optional, Tuple

def extract(text: str) -> Optional[Tuple[str, int]]:
    key, sep, value = text.partition('=')
    if not sep:        # no '=' found in the line
        return None
    return key.strip(), int(value)

Ideally we’d now like to see PyCharm highlight our current assignment as an error:

n: int = extract('MAXSIZE=512')

But at the moment PyCharm does not support type hints on assignment.

mypy, however, does support assignment type hints and will show the error on line 9 of our blog.py file:

blog.py:9: error: Incompatible types in assignment
    (expression has type "Optional[Tuple[str, int]]", variable has type "int")
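
To satisfy the checker the Optional result has to be narrowed before use – a minimal sketch of the usual pattern:

result = extract('MAXSIZE=512')
if result is not None:
    keyword, maxsize = result    # the checker now knows these are str and int
    print(keyword, maxsize)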

At this point you might be thinking that this isn’t Python anymore – it’s too complex and unreadable and has lost sight of the quick development aspects of simple Python scripts.

Type hints are an optional language feature, so for simple scripts you can continue to write Python without the extra clutter; consider adding type hints to larger Python projects or libraries to improve static analysis.

Summary

Type hints can help you improve code quality and reduce development time by using static analysis to identify mismatched types as you write the code. With a type hint aware IDE (like PyCharm) you’ll get type specific code completion popup help as well.

Remember that if you are developing library code and add type hints to your functions and methods, not all developers will be aware of, or make use of, type hints; this means you may still want to include runtime type checks using the isinstance or hasattr functions if you think they are required. Type hints do not replace dynamic type checks.
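
As a sketch of that belt-and-braces approach – the isinstance guard is an illustration added to the example function, not part of the original:

from typing import Optional, Tuple

def extract(text: str) -> Optional[Tuple[str, int]]:
    if not isinstance(text, str):   # runtime guard for callers ignoring the hint
        raise TypeError('text must be a str')
    key, sep, value = text.partition('=')
    if not sep:
        return None
    return key.strip(), int(value)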

If you want to develop robust high quality Python code making best use of the development toolchain then type hints are a valuable addition to the language.


Peripheral register access using C structs – part 1

When working with peripherals, we need to be able to read and write to the device’s internal registers. How we achieve this in C depends on whether we’re working with memory-mapped IO or port-mapped IO. Port-mapped IO typically requires compiler/language extensions, whereas memory-mapped IO can be accommodated with the standard C syntax.

Embedded “Hello, World!”

We all know the embedded equivalent of the “Hello, world!” program is flashing the LED, so true to form I’m going to use that as an example.

The examples are based on an STM32F407 chip using the GNU Arm Embedded Toolchain.

The STM32F4 uses a port-based GPIO (General Purpose Input Output) model, where each port can manage 16 physical pins. The LEDs are mapped to external pins 55-58, which map internally onto GPIO Port D pins 8-11.

Flashing the LEDs

Flashing the LEDs is fairly straightforward; at the port level there are only two registers we are interested in:

  • Mode Register – this defines, on a pin-by-pin basis, the function of each pin, e.g. we want this pin to behave as an output pin.
  • Output Data Register – writing a ‘1‘ to the appropriate bit will drive the pin high and writing a ‘0‘ will ground the pin.

Mode Register (MODER)

Each port pin has four modes of operation, thus requiring two configuration bits per pin (pin 0 is configured using mode bits 0-1, pin 1 uses mode bits 2-3, and so on):

  • 00 Input
  • 01 Output
  • 10 Alternate function (details configured via other registers)
  • 11 Analogue

So, for example, to configure pin 8 for output, we must write the value 01 into bits 16 and 17 in the MODER register (that is, bit 16 => 1, bit 17 => 0).
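
As a quick check of that arithmetic – Python is used here purely as a calculator, the real register code being C:

pin = 8
output_mode = 0b01                # the two-bit pattern for 'output'
shift = pin * 2                   # two mode bits per pin
print(shift)                      # 16 - so mode bits 16-17 configure pin 8
print(hex(output_mode << shift))  # 0x10000 - the pattern to OR into MODER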

Output Data Register (ODR)

In the Output Data Register (ODR) each bit represents an I/O pin on the port. The bit number matches the pin number.

If a pin is set to output (in the MODER register) then writing a 1 into the appropriate bit will drive the I/O pin high. Writing 0 into the appropriate bit will drive the I/O pin low.

There are 16 I/O pins, but the register is 32 bits wide. Reserved bits are read as ‘0’.

Port D Addresses

The absolute addresses for the MODER and ODR of Port D are:

  • MODER – 0x40020C00
  • ODR – 0x40020C14

Pointer access to registers

Typically, when we access memory-mapped registers in C, we use pointer notation to ‘trick’ the compiler into generating the correct load/store operations at the absolute address needed.


A brief introduction to Concepts – Part 2

In part 1 of this article we looked at adding requirements to parameters in template code to improve the diagnostic ability of the compiler.  (I’d recommend reading this article first, if you haven’t already)

Previously, we looked at a simple example of adding a small number of requirements on a template parameter to introduce the syntax and semantics.  In reality, the constraints imposed on a template parameter could consist of any combination of

  • Type traits
  • Required type aliases
  • Required member attributes
  • Required member functions

Explicitly listing all of these requirements for each template parameter, and for every template function or class, gets onerous very quickly.

To simplify the specification of these constraints we have Concepts.



A brief introduction to Concepts – Part 1

Templates are an extremely powerful – and terrifying – element of C++ programs.  I say “terrifying” – not because templates are particularly hard to use (normally), or even particularly complex to write (normally) – but because when things go wrong the compiler’s output is a tsunami of techno-word-salad that can overwhelm even the experienced programmer.

The problem with generic code is that it isn’t completely generic.  That is, generic code cannot be expected to work on every possible type we could substitute.  The generic code typically places constraints on the substituted type, which may be in the form of type characteristics, type semantics or behaviours.  Unfortunately, there is no way to find out what those constraints are until you fail to meet them; and that usually happens at instantiation time, far away from your code and deep inside someone else’s hard-to-decipher library code.

The idea of Concepts has been around for many years; and arguably they trace their roots right back to the very earliest days of C++.  Now – via the Concepts TS in current compilers, and officially from C++20 – we are able to use and exploit their power in code.

Concepts allow us to express constraints on template types with the goals of making generic code

  • Easier to use
  • Easier to debug
  • Easier to write

In this pair of articles we’ll look at the basics of Concepts, their syntax and usage.  To be open up-front:  this article is designed to get you started, not to make you an expert on Concepts or generic code.



Register for our webinar – ‘Introduction to Docker’

Dec 5, 2018 at 10am BST & 4pm BST

The introduction to Docker series is proving popular with our Blog readers, so we have decided to make it the subject for our next webinar.

Docker is a relatively new technology, only appearing just over five years ago. It has become integral to modern continuous integration (CI) and continuous delivery in an Agile world.

This 45-minute webinar, presented by Niall Cooling, will introduce Docker and how it can be used in an embedded development workflow. There will also be time for questions.

If you’d like to submit an advance Docker-related question for Niall to include in the webinar, please let us know. You can submit your question when you register or by emailing us at info@feabhas.com. We hope you can join us.

Click here to register and reserve a free place for the 10am BST webinar

Click here to register and reserve a free place for the 4pm BST webinar


An Introduction to Docker for Embedded Developers – Part 5 Multi-Stage Builds

Following on from the previous post, where we spent time reducing the docker image size, in this post I’d like to cover a couple of useful practices to further improve our docker image:

  1. Copying local files rather than pulling from the web
  2. Simplifying builds using a multi-stage build

Copying in Local Files

So far, when installing the GCC-Arm compiler, we have pulled it from the web using wget. This technique can suffer from two issues:

  1. Web links are notoriously fragile
  2. https adds complexity to the packages required with smaller base images such as Alpine Linux

An alternative approach, especially if you are managing your Dockerfiles in a git repository, is to pull the required file (e.g. gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2) to your local file system and then copy this file into the docker image during the build process.

First we need to download to our local filesystem the version of GCC-Arm we want to use. The latest version can be found at: https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads

As of today, the latest version is 7-2018-q2-update.

I happen to be working on a Mac, but as our image is Linux based, I want to download the Linux 64-bit image gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2.

Once downloaded, the local (build) directory contains two files:

.
├── Dockerfile
└── gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2

We now modify the Dockerfile to copy from the local file system into our base image using the following command:

COPY <local file> <destination>

So the command (the trailing ‘.’ refers to the container’s current working directory):

COPY gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2 .

will copy the tarball from our local file system into the container. We can now go ahead and un-tar it and configure it as before.


Updated: Developing a Generic Hard Fault handler for ARM Cortex-M3/Cortex-M4 using GCC

The original article was first posted back in 2013. Since posting I have been contacted many times regarding the article. One recurring question has been “How do I do this using GCC?”. So I thought it was about time I updated the article using GCC.

GNU Tools for ARM Embedded Processors

The original article used the Keil toolchain; here I am using arm-none-eabi-gcc. One of the major benefits of CMSIS is that almost all the code from the original posting will compile unchanged, as CMSIS uses conditionals to replace instructions where necessary.

However, note that some of the file names have changed since that original article, e.g.

#include "ARMCM3.h" 

no longer exists as a file. Its contents have been split across a number of headers in the latest CMSIS. In addition, you will typically be building against a specific platform; in my case I’m targeting an STM32F4xx core.

In my project “ARMCM3.h” has been replaced with “cmsis_device.h”, which maps onto the STM32F411.

From Keil to GCC

The code changes only occur when we use assembler to help dump the processor registers as part of the Hard Fault handling. As expected, inline assembler is specific to a toolchain.

The original Keil code was:

void Hard_Fault_Handler(uint32_t stack[]);

__asm void HardFault_Handler(void) 
{
  MRS r0, MSP
  B __cpp(Hard_Fault_Handler) 
}

The same code for GCC is:

void Hard_Fault_Handler(uint32_t stack[]);

void HardFault_Handler (void)
{
  asm volatile(
      " mrs r0,msp    \n"
      " b Hard_Fault_Handler \n"
  );
}

Register Dump Analysis



Technical debt

What is it & how does it affect software engineering management?

The ‘Golden Triangle’ of project management

The ‘golden triangle’ of project management uses the following constraints:

  • Time
  • Cost
  • Quality

The rule is: you can pick any two of the three; you can’t have them all.

When it comes to software development projects, it’s not uncommon to have a fixed time to market and budget, which means that, under pressure, the constraint that’s affected is quality.

Commonly, when project management refers to ‘quality’ it implicitly means Intrinsic Quality.

Intrinsic quality and technical debt

Intrinsic Quality is the inherent ‘goodness’ of a system. That is, not what the product is/does, but how well it has been designed and constructed. If you like, Intrinsic quality is a measure of the engineering rigour that has been put into the product.

In the case of software-oriented products Intrinsic quality tends to manifest itself in architectural robustness and resilience to change.

Intrinsic quality is closely allied to the idea of Technical Debt.

Technical debt is a term created by Ward Cunningham in 1992, which describes “the obligation that a software organisation incurs when it chooses a design or construction approach that’s expedient in the short-term but that increases complexity and is more costly in the long-term.”(1)

A company will put effort into the design and architecture of their systems to give them greater flexibility in the future. Engineering rigour, and designing for change and maintainability, reduces (but cannot completely eliminate, unfortunately) the impact of technical debt. That is, the higher the Intrinsic quality of a product, the less it will cost to maintain, modify and extend it during its life.

Note that Intrinsic quality benefits the development organisation and is largely invisible to the customer; thus very few customers are willing to pay for such work. Intrinsic quality is therefore an up-front cost to the development organisation, which has to be balanced against the reduced future costs of product maintenance.

If time-to-market and cost are fixed constraints in a project, it is tempting to sacrifice the costs of engineering intrinsic quality.

Sacrificing intrinsic quality for short-term expediency must come at a (future) price. There’s no such thing as a free lunch! The challenge becomes calculating what the future cost will be.

The cost of technical debt

You can think of Technical Debt as a compound interest charge: it’s not only the length of time the ‘debt’ is held that’s a factor, but also the ‘interest rate’. This ‘interest rate’ isn’t fixed; it varies depending on where the compromises are made.

Technical debt affects all aspects of the software engineering process: from capturing requirements and writing the code, through the tools used to analyse and modify it, to deployment to the user base.(2)

Problem domain technical debts – that is, customer-facing omissions, compromises, failures, etc. – will (obviously) have the highest ‘interest rates’.

Architectural debts will have the largest effect on product extensibility, flexibility, maintainability, and so incur a high ‘interest rate’.

Coding issues – semantic errors, unit test failures, algorithmic inefficiencies – are the easiest to measure and categorise, so these areas tend to get the most attention. However, the ‘interest rate’ of such technical debts is relatively low, meaning issues can persist for long periods without significant impact on the overall technical debt of the product.

The ‘unknown-unknowns’

However, it’s not just the quality aspects or features that we know have been compromised in order to meet the cost/time constraints that must be counted as technical debt. The ‘unknown unknowns’ – that is, the things we don’t know we don’t know (a phrase made famous by former US Secretary of Defense Donald Rumsfeld) – become a factor here too. The more unknown-unknowns there are in a domain, the easier it is to fail to factor them in. As a result, anything not factored in early enough also becomes a technical debt.

Take the following statistics from Tom de Marco (3); the chart shows the root causes of bugs in a typical software project.

A couple of points worth noting here:

The smallest number of bugs can be traced to coding errors. Technical debts in this area have the lowest ‘interest rates’. Contrastingly, the largest number of bugs can be traced to requirements issues. These problem domain issues have the largest ‘interest rates’ for technical debt. Thus a typical software project accumulates its debts at the very highest rates of interest!

Evidence suggests that the further developers move away from their ‘core’ skill – that is, the one they practise the most (writing code) – the more unknown unknowns they are subject to. The chart then is also a pretty good indicator of ‘unknown unknowns’ in a project. The more ‘unknown unknowns’ the more likely it is the developer will make mistakes (and introduce bugs).

How much is my technical debt?

In 2012, researchers conservatively estimated that for every 100 KLOC (thousand lines of code), an average software application carried approximately US$361,000 of technical debt – that is, the cost to eliminate the structural-quality problems that seriously threatened the application’s business viability. (4) That works out at roughly US$3.61 per line of code.

5 steps to managing technical debt

1. Identify the technical debt – for example, applying the Swiss Cheese model to your system verification and validation (see below)

2. Measure the technical debt in terms of benefit and cost – Thinking of technical debt as compound interest, then the benefit is the amount of money you save by paying off the ‘loan’ early. The tricky bit is establishing what the ‘interest rate’ is for your organisation.

3. Prioritise the technical debt – identify the items that have the highest payoff and repay them first. Of course, you can only prioritise technical debts that you can see. The dichotomy here is that the aspects most likely to carry the highest technical debts are the ones you can’t currently see (the unknown unknowns)!

4. Repay the technical debt through refactoring – you can only refactor code successfully if you have adequate testing in place. That is, every restructuring change you make must have no impact on the (measurable) functionality of the system. Establishing and automating (where possible) verification and validation regimes for your project is an intrinsic quality exercise. And remember: the sacrificial lamb of project management is intrinsic quality! Companies with rampant technical debt tend to lack these regimes, thus exacerbating the problem by raising the ‘interest rate’ of their technical debt.

5. Monitor items that aren’t repaid – because their cost or value might change over time (certain technical-debt items can escalate to the point of becoming unmanageable). Once again, we can only monitor things we know are (currently) wrong. It is difficult to monitor unknown unknowns!

The Swiss Cheese Model and Technical Debt

There is no one, perfect, way to identify technical debts. Using multiple, independent techniques (each playing to their own strengths) is far more effective.

The “Swiss Cheese” approach to identifying Technical Debt uses multiple techniques, each with a different focus. The techniques are applied with the clear knowledge that no technique is perfect (nor should it be), but the flaws in any one technique do not overlap (much!) with the flaws of another layer.

  • The Static Analysis layer in the model identifies ambiguity and mistakes in codification. These are things that are difficult for engineers to spot, but easy to fix. Static Analysis tools are readily available and have a low cost to apply regularly on a code base. However, Static Analysis cannot identify incorrect algorithms or missing code and the debts it resolves are relatively tiny.
  • The Testing layer verifies system correctness. Since it focuses on failures (deviations from customer specification), technical debts are visible and obvious to the organisation.
  • The Review layer validates requirements and designs. It asks the questions: “Are we solving the right problem?”; “Are we solving it in the right way?” As review is a human-centric activity, tools typically help very little, beyond some metrics such as: cyclomatic complexity; or compile-time coupling, for example. As a result, the technical debts established by reviews are generally larger-scale, more ‘expensive’ and require far more effort (and money) to resolve.

Summary

Understanding Technical Debt is a critical part of software development project management. Sacrificing project intrinsic quality to expedite project delivery has to be very carefully balanced against the long-term costs of maintenance, extensibility, flexibility and re-use. Since the lifetime of a system could potentially extend into decades, the cost of not removing Technical Debts could threaten the viability of the system.

Code-level restructuring and refactoring, whilst always beneficial, have the smallest impact. The higher ‘interest rates’ of Technical Debts associated with architectural problems typically far outweigh the benefits gained from code-level fixes. As a result, in order to be effective, engineers should be trained in software architecture, software design and even requirements analysis (5). All these topics are far more sophisticated than writing code, and it takes time and effort to develop appropriate skills.

References

  • [1] http://www.construx.com/10x_Software_Development/Technical_Debt/
  • [2] Reducing Friction in Software Development – Paris Avgeriou, University of Groningen, Philippe Kruchten, University of British Columbia, Robert L. Nord and Ipek Ozkaya, Software Engineering Institute, Carolyn Seaman, University of Maryland, Baltimore County. Published by the IEEE Computer Society, 2016. https://ieeexplore.ieee.org/document/7367977/
  • [3] Structured Analysis and System Specification, De Marco T, Yourdon Press ISBN 0-138-54380-1, 1978
  • [4] B. Curtis, J. Sappidi, and A. Szynkarski, “Estimating the Principal of an Application’s Technical Debt,” IEEE Software, vol. 29, no. 6, 2012, pp. 34–42.
  • [5] A. Martini, J. Bosch, and M. Chaudron, “Investigating Architectural Technical Debt Accumulation and Refactoring over Time: A Multiple-Case Study,” Information and Software Technology, Nov. 2015, pp. 237—253.

Code Quality – Cyclomatic Complexity

In the standard ISO 26262-6:2011 [1] the term “complexity” appears a number of times, generally in the context of reducing or lowering said complexity.

There are many different ways of defining “complexity”; for example, Fred Brooks, in his landmark 1986 paper “No Silver Bullet – Essence and Accidents of Software Engineering”, asserts that there are two types of complexity: Essential and Accidental. [2]

Rather than getting into esoteric discussion about design complexity, I’d like to focus on code complexity.

Over the years, I have found one metric to be the simplest and most consistent indicator of code quality – Cyclomatic Complexity. This is often also referred to as the “McCabe Metric”, after Tom McCabe’s original paper in 1976 “A Complexity Measure” [3]. 

It’s not perfect: it can be fooled, and the complexity measure can differ slightly from tool to tool (the original work was analysing Fortran on the PDP-10). But, given that caveat, it has major value in ‘raising a red flag’ if code goes beyond certain thresholds. In addition, it is easily incorporated as a ‘quality gate’ into a modern CI/CD (Continuous Integration/Continuous Delivery) build system.

Cyclomatic Complexity (CC)

Very simply, the metric is calculated by building a directed graph from the source code. It is done on a function-by-function basis and is not cumulative (i.e. the reported value is just for that function).

Given the following code:

void ef1(void);
void ef2(void);
void ef3(void);
   
void func(int a, int b)
{
  ef1();
  if(a < 0) {
    ef2();
  }
  ef3();
}

A directed graph of the code would look thus:

The Complexity is measured using the very simple formula (based on graph theory) of:

v(G) = e – n + 2p

where:

e = the number of edges of the graph
n = the number of nodes of the graph
p = the number of connected components

However, as we are analysing just functions and not a collection of connected graphs, the formula can be simplified to

v(G) = e – n + 2

as p = 1 for a single function, which forms one connected component from its entry node to its exit node.
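
As a small sketch of that arithmetic for func() above, using one plausible drawing of its graph – the node and edge counts are an assumption (they depend on how the graph is drawn), but v(G) is not:

# Nodes: {ef1+if, ef2, ef3}; edges: if->ef2, if->ef3, ef2->ef3
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    return edges - nodes + 2 * components

print(cyclomatic_complexity(edges=3, nodes=3))   # 2 - matching lizard's CCN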

In this simple example v(G) is equal to two. Running Lizard, an open-source Cyclomatic Complexity Analyzer, the result (shown as CCN) is as expected:

$ docker run --rm -v $(pwd):/usr/project feabhas/alpine-lizard lizard
================================================
  NLOC    CCN   token  PARAM  length  location  
------------------------------------------------
       8      2     30      2       8 func@6-13@./main.c

In these examples I’m running Lizard from a Docker container (as this fits with our Jenkins build system). However, Lizard can be installed locally using pip install lizard if you have both Python and pip installed.



Your handy cut-out-and-keep guide to std::forward and std::move

I love a good ‘quadrant’ diagram.  It brings me immense joy if I can encapsulate some wisdom, guideline or rule-of-thumb in a simple four-quadrant picture.

This time it’s the when-and-where of std::move and std::forward.  In my experience, when programmers are first introduced to move semantics, their biggest struggle is to know when (or when not) to apply std::move or std::forward.  Usually, it’s a case of “keep applying std::move until it compiles”.  I’ve been there myself.

To that end I’ve put together a couple of simple overview quadrant graphics to help out the neophyte ‘mover-forwarder’.  The aim is to capture some simple rules-of-thumb in an easy-to-digest format.

Disclaimer:  these diagrams don’t address every move/forwarding use.   They’re not intended to.  That’s why we have books, presentations and long rambling articles on the topic.

