Thanks for the memory (allocator)

One of the design goals of Modern C++ is to find new – better, more effective – ways of doing things we could already do in C++.  Some might argue this is one of the more frustrating aspects of Modern C++ – if it works, don’t fix it. (Alternatively: why use lightbulbs when we have perfectly good candles?!)

This time we’ll look at a new aspect of Modern C++:  the Allocator model for dynamic containers.  This is currently experimental, but has been accepted into C++20.

The Allocator model allows programmers to provide their own memory management strategy in place of the default library implementation.  Although the C++ standard does not specify how the default allocator obtains its memory, many implementations use malloc/free.

Understanding this feature is important if you work on a high-integrity, or safety-critical, project where your project standards say ‘no’ to malloc.
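To give a flavour of the model, here is a minimal sketch using the polymorphic memory resources from <memory_resource> (assuming a C++17-or-later library that ships them – the article goes into far more depth). The vector’s dynamic storage is served from a fixed local buffer; no call to malloc is ever made:

#include <array>
#include <cstddef>
#include <memory_resource>
#include <vector>

int main()
{
    // A fixed buffer - no heap (and no malloc) involved
    std::array<std::byte, 1024> buffer;

    // A resource that hands out chunks of the buffer; the
    // null_memory_resource() upstream means 'fail rather than
    // fall back to the heap'
    std::pmr::monotonic_buffer_resource pool {
        buffer.data(), buffer.size(),
        std::pmr::null_memory_resource()
    };

    // A vector whose dynamic storage comes from the pool
    std::pmr::vector<int> values { &pool };
    values.push_back(42);
}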


Posted in C/C++ Programming, General | 3 Comments

Python 3 Unicode and Byte Strings

A notable difference between Python 2 and Python 3 is that character data is stored using Unicode instead of bytes. When migrating existing code or writing new code, you may well be unaware of this change, as most string algorithms will work with either representation; but you cannot intermix the two.

If you are working with web service libraries such as urllib (formerly urllib2) and requests, network sockets, binary files, or serial I/O with pySerial, you will find that data is now stored as byte strings.


Posted in Python, Python3 | Leave a comment

Python 3 Type Hints

The expected end of support for Python 2.7 is 1st January 2020, at least according to Guido van Rossum’s blog post. Starting now, you should consider developing all new Python applications in Python 3, and migrating existing code to Python 3 as and when time and workload permit.

Moving to Python 3

If you are unaware of the changes introduced in Python 3 that broke backward compatibility with Python 2, there is a good summary on the What’s New In Python 3.0 web page.

The biggest difference you will notice moving to Python 3 is that the print statement is now a print function. But there are plenty of other changes that you should be aware of. This and subsequent blogs will look at aspects of Python that have been added or improved in Python 3.


Posted in Python, Python3, Testing | Leave a comment

Peripheral register access using C structs – part 1

When working with peripherals, we need to be able to read and write to the device’s internal registers. How we achieve this in C depends on whether we’re working with memory-mapped IO or port-mapped IO. Port-mapped IO typically requires compiler/language extensions, whereas memory-mapped IO can be accommodated with the standard C syntax.

Embedded “Hello, World!”

We all know the embedded equivalent of the “Hello, world!” program is flashing the LED, so true to form I’m going to use that as an example.

The examples are based on an STM32F407 chip using the GNU Arm Embedded Toolchain.

The STM32F4 uses a port-based GPIO (General Purpose Input Output) model, where each port can manage 16 physical pins. The LEDs are mapped to external pins 55-58, which map internally onto GPIO Port D pins 8-11.

Flashing the LEDs

Flashing the LEDs is fairly straightforward: at the port level there are only two registers we are interested in.

  • Mode Register – this defines, on a pin-by-pin basis, what each pin’s function is, e.g. we want this pin to behave as an output pin.
  • Output Data Register – writing a ‘1’ to the appropriate bit will drive the corresponding pin high and writing a ‘0’ will drive it low.

Mode Register (MODER)

Each port pin has four modes of operation, thus requiring two configuration bits per pin (pin 0 is configured using mode bits 0-1, pin 1 uses mode bits 2-3, and so on):

  • 00 Input
  • 01 Output
  • 10 Alternate function (details configured via other registers)
  • 11 Analogue

So, for example, to configure pin 8 for output, we must write the value 01 into bits 16 and 17 in the MODER register (that is, bit 16 => 1, bit 17 => 0).
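In code, that read-modify-write might look like this (a sketch – here moder stands for a pointer to the MODER register, which we’ll pin down concretely below):

#include <stdint.h>

// Configure pin 8 as a general-purpose output (mode 01),
// leaving every other pin's mode bits untouched.
static void set_pin8_as_output(volatile uint32_t *moder)
{
    uint32_t mode = *moder;   // read the current configuration
    mode &= ~(3u << 16);      // clear pin 8's mode bits (17:16)
    mode |=  (1u << 16);      // set them to 01 => output
    *moder = mode;            // write the new configuration back
}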

Output Data Register (ODR)

In the Output Data Register (ODR) each bit represents an I/O pin on the port. The bit number matches the pin number.

If a pin is set to output (in the MODER register) then writing a 1 into the appropriate bit will drive the I/O pin high. Writing 0 into the appropriate bit will drive the I/O pin low.

There are 16 I/O pins, but the register is 32 bits wide; the reserved upper 16 bits read as ‘0’.

Port D Addresses

The absolute addresses for the MODER and ODR of Port D are:

  • MODER – 0x40020C00
  • ODR – 0x40020C14

Pointer access to registers

Typically, when we access memory-mapped registers in C, we use pointer notation to ‘trick’ the compiler into generating the correct load/store operations at the absolute address needed.
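To make that concrete, here is a minimal sketch for Port D (the function name and delay value are illustrative only, and enabling the GPIOD peripheral clock via the RCC is omitted for brevity). The volatile qualifier is what stops the compiler caching, reordering or eliding the register accesses:

#include <stdint.h>

// Absolute addresses of the Port D registers, as listed above
#define GPIOD_MODER (*(volatile uint32_t *)0x40020C00u)
#define GPIOD_ODR   (*(volatile uint32_t *)0x40020C14u)

void flash_led(void)
{
    // Configure pin 8 as an output: mode bits 17:16 <= 01
    GPIOD_MODER = (GPIOD_MODER & ~(3u << 16)) | (1u << 16);

    for (;;) {
        GPIOD_ODR ^= (1u << 8);                              // toggle pin 8's LED
        for (volatile uint32_t i = 0; i < 500000u; ++i) { }  // crude busy-wait delay
    }
}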

Posted in ARM, C/C++ Programming, CMSIS, Cortex | 3 Comments

A brief introduction to Concepts – Part 2

In part 1 of this article we looked at adding requirements to parameters in template code to improve the diagnostic ability of the compiler.  (I’d recommend reading that article first, if you haven’t already.)

Previously, we looked at a simple example of adding a small number of requirements on a template parameter to introduce the syntax and semantics.  In reality, the constraints imposed on a template parameter could consist of any combination of

  • Type traits
  • Required type aliases
  • Required member attributes
  • Required member functions

Explicitly listing all of these requirements for each template parameter, and for every template function / class, gets onerous very quickly.

To simplify the specification of these constraints we have Concepts.
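As a taste of what that looks like – a minimal sketch using the final C++20 syntax, which differs slightly from the Concepts TS syntax available at the time of writing:

#include <concepts>

// Bundle a set of requirements into a single, named Concept...
template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

// ...then use the Concept in place of 'typename'. Any type
// substituted for T must satisfy Addable - and if it doesn't,
// the compiler can now say so by name.
template <Addable T>
T sum(T lhs, T rhs)
{
    return lhs + rhs;
}

int main()
{
    return sum(1, 2) - 3;   // OK: int satisfies Addable
}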


Posted in C/C++ Programming | 4 Comments

A brief introduction to Concepts – Part 1

Templates are an extremely powerful – and terrifying – element of C++ programs.  I say “terrifying” – not because templates are particularly hard to use (normally), or even particularly complex to write (normally) – but because when things go wrong the compiler’s output is a tsunami of techno-word-salad that can overwhelm even the experienced programmer.

The problem with generic code is that it isn’t completely generic.  That is, generic code cannot be expected to work on every possible type we could substitute.  The generic code typically places constraints on the substituted type, which may be in the form of type characteristics, type semantics or behaviours.  Unfortunately, there is no way to find out what those constraints are until you fail to meet them; and that usually happens at instantiation time, far away from your code and deep inside someone else’s hard-to-decipher library code.
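A classic illustration (a sketch of the problem, not an example from the article): nothing in std::sort’s signature tells you that it requires random-access iterators.

#include <algorithm>
#include <list>

int main()
{
    std::list<int> values { 3, 1, 2 };

    // std::sort implicitly requires random-access iterators, but
    // std::list only provides bidirectional ones. Uncomment the next
    // line and that hidden constraint surfaces as a wall of errors
    // from deep inside the library - not as a readable diagnostic
    // at the point of the call.
    // std::sort(values.begin(), values.end());

    values.sort();   // fine: the member sort only needs
                     // bidirectional iterators
}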

The idea of Concepts has been around for many years; and arguably they trace their roots right back to the very earliest days of C++.  Now, with the Concepts TS (since merged into C++20), we are able to use and exploit their power in code.

Concepts allow us to express constraints on template types with the goals of making generic code

  • Easier to use
  • Easier to debug
  • Easier to write

In this pair of articles we’ll look at the basics of Concepts, their syntax and usage.  To be open up-front:  this article is designed to get you started, not to make you an expert on Concepts or generic code.


Posted in C/C++ Programming | Leave a comment

Register for our webinar – ‘Introduction to Docker’

Dec 5, 2018 at 10am GMT & 4pm GMT

The Introduction to Docker series is proving popular with our blog readers, so we have decided to make it the subject of our next webinar.

Docker is a relatively new technology, only appearing just over five years ago. It has become integral to modern continuous integration (CI) and continuous delivery in an Agile world.

In this 45-minute webinar, Niall Cooling will introduce Docker and show how it can be used in an embedded development workflow. There will also be time for questions.

If you’d like to submit an advance Docker-related question for Niall to include in the webinar, please let us know. You can submit your question when you register or by emailing us at info@feabhas.com. We hope you can join us.

Click here to register and reserve a free place for the 10am GMT webinar

Click here to register and reserve a free place for the 4pm GMT webinar

Posted in Agile, training, webinar | Leave a comment

An Introduction to Docker for Embedded Developers – Part 5 Multi-Stage Builds

Following on from the previous post, where we spent time reducing the docker image size, in this post I’d like to cover a couple of useful practices to further improve our docker image:

  1. Copying local files rather than pulling from the web
  2. Simplifying builds using a multi-stage build

Copying in Local Files

So far, when installing the GCC-Arm compiler, we have pulled it from the web using wget. This technique can suffer from two issues:

  1. Web links are notoriously fragile
  2. HTTPS adds complexity to the packages required by smaller base images such as Alpine Linux

An alternative approach, especially if you are managing your Dockerfiles in a git repository, is to pull the required file (e.g. gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2) to your local file system and then copy this file into the docker image during the build process.

First, we need to download the required version of GCC-Arm to our local filesystem. The latest version can be found at: https://developer.arm.com/open-source/gnu-toolchain/gnu-rm/downloads

As of today, the latest version is 7-2018-q2-update.

I happen to be working on a Mac, but as our image is Linux based, I want to download the Linux 64-bit archive gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2.

Once downloaded, the local (build) directory contains two files:

.
├── Dockerfile
└── gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2

We now modify the Dockerfile to copy from the local file system into our base image using the following command:

COPY <local file> <destination>

So the command (where the trailing ‘.’ denotes the container’s current working directory):

COPY gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2 .

will copy the tarball from our local file system into the container. We can now go ahead and un-tar and configure it as before.

Posted in Agile, ARM, C/C++ Programming, Testing | 5 Comments

Updated: Developing a Generic Hard Fault handler for ARM Cortex-M3/Cortex-M4 using GCC

The original article was first posted back in 2013. Since posting I have been contacted many times regarding the article. One recurring question has been “How do I do this using GCC?”. So I thought it was about time I updated the article using GCC.

GNU Tools for ARM Embedded Processors

The original article used the Keil toolchain; here I am using arm-none-eabi-gcc. One of the major benefits of CMSIS is that almost all the code from the original posting will compile unchanged, as CMSIS uses conditionals to replace instructions where necessary.

However, note that some of the file names have changed since that original article, e.g.

#include "ARMCM3.h" 

as a file no longer exists; its contents have been split across a number of headers in the latest CMSIS. In addition, you will typically be building against a specific platform; in my case I’m targeting an STM32F4xx core.

In my project “ARMCM3.h” has been replaced with “cmsis_device.h”, which maps onto the STM32F411.

From Keil to GCC

The code changes only occur when we use assembler to help dump the processor registers as part of the Hard Fault handling. As expected, inline assembler is specific to a toolchain.

The original Keil code was:

void Hard_Fault_Handler(uint32_t stack[]);

__asm void HardFault_Handler(void) 
{
  MRS r0, MSP
  B __cpp(Hard_Fault_Handler) 
}

The same code for GCC is:

void Hard_Fault_Handler(uint32_t stack[]);

void HardFault_Handler (void)
{
  asm volatile(
      " mrs r0,msp    \n"
      " b Hard_Fault_Handler \n"
  );
}

Register Dump Analysis
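As a sketch of where this leads (assuming the standard Cortex-M exception stack frame – the original article develops the analysis in detail), the C-level handler can pull the processor state out of the stacked frame:

#include <stdint.h>

void Hard_Fault_Handler(uint32_t stack[])
{
    // On exception entry the Cortex-M hardware pushes these eight
    // registers onto the active stack, in this order:
    volatile uint32_t r0  = stack[0];
    volatile uint32_t r1  = stack[1];
    volatile uint32_t r2  = stack[2];
    volatile uint32_t r3  = stack[3];
    volatile uint32_t r12 = stack[4];
    volatile uint32_t lr  = stack[5];   // link register at the point of the fault
    volatile uint32_t pc  = stack[6];   // address of the faulting instruction
    volatile uint32_t psr = stack[7];   // program status register

    (void)r0; (void)r1; (void)r2; (void)r3;
    (void)r12; (void)lr; (void)pc; (void)psr;

    while (1) { }   // park here so a debugger can inspect the values
}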


Posted in ARM, C/C++ Programming, CMSIS, Cortex | Leave a comment

Technical debt

What is it & how does it affect software engineering management?

The ‘Golden Triangle’ of project management

The ‘golden triangle’ of project management uses the following constraints: time, cost and quality.

The rule is: you can pick any two of the three; you can’t have them all.

When it comes to software development projects, it’s not uncommon to have a fixed time to market and budget, which means that, under pressure, the constraint that’s affected is quality.

Commonly, when project management refers to ‘quality’, it implicitly means Intrinsic Quality.

Intrinsic quality and technical debt

Intrinsic Quality is the inherent ‘goodness’ of a system. That is, not what the product is/does, but how well it has been designed and constructed. If you like, Intrinsic quality is a measure of the engineering rigour that has been put into the product.

In the case of software-oriented products Intrinsic quality tends to manifest itself in architectural robustness and resilience to change.

Intrinsic quality is closely allied to the idea of Technical Debt.

Technical debt is a term created by Ward Cunningham in 1992, which describes “the obligation that a software organisation incurs when it chooses a design or construction approach that’s expedient in the short-term but that increases complexity and is more costly in the long-term.”(1)

A company will put effort into the design and architecture of their systems to give them greater flexibility in the future. Engineering rigour, and designing for change and maintainability, reduces (but cannot completely eliminate, unfortunately) the impact of technical debt. That is, the higher the Intrinsic quality of a product, the less it will cost to maintain, modify and extend it during its life.

Note that Intrinsic quality benefits the development organisation, and is largely invisible to the customer; thus very few customers are willing to pay for such work. Intrinsic quality is therefore an upfront cost to the development organisation, which has to be balanced against the reduced future costs of product maintenance.

If time-to-market and cost are fixed constraints in a project, it is tempting to sacrifice the cost of engineering intrinsic quality.

Sacrificing intrinsic quality for short-term expediency must come at a (future) price. There’s no such thing as a free lunch! The challenge becomes calculating what the future cost will be.

The cost of technical debt

You can think of Technical Debt as a compound interest charge: it’s not only the length of time the ‘debt’ is held that’s a factor, but also the ‘interest rate’. This ‘interest rate’ isn’t fixed; it varies depending on where the compromises are made.

Technical debt affects all aspects of the software engineering process: from requirements capture, through writing the code, to deployment to the user base – as well as the tools used to analyse and modify the code.(2)

Problem domain technical debts – that is, customer-facing omissions, compromises, failures, etc. – will (obviously) have the highest ‘interest rates’.

Architectural debts will have the largest effect on product extensibility, flexibility, maintainability, and so incur a high ‘interest rate’.

Coding issues – semantic errors, unit test failures, algorithmic inefficiencies – are the easiest to measure and categorise, so these areas tend to get the most attention. However, the ‘interest rate’ of such technical debts is relatively low, meaning issues can persist for long periods without significant impact on the overall technical debt of the product.

The ‘unknown-unknowns’

However, it’s not just the quality aspects or features that we know have been compromised in order to meet the cost/time constraints that must be counted as technical debt. The ‘unknown unknowns’ – that is, the things we don’t know we don’t know (made famous by former Secretary of Defence Donald Rumsfeld) – become a factor here too. The more unknown-unknowns there are in a domain, the easier it is to fail to factor them in. As a result, anything not factored in early enough also becomes a technical debt.

Take the following statistics from Tom De Marco (3). The chart shows the root causes of bugs in a typical software project.

A couple of points worth noting here:

  • The smallest number of bugs can be traced to coding errors. Technical debts in this area have the lowest ‘interest rates’.
  • Contrastingly, the largest number of bugs can be traced to requirements issues. These problem domain issues have the largest ‘interest rates’ for technical debt. Thus a typical software project accumulates its debts at the very highest rates of interest!

Evidence suggests that the further developers move away from their ‘core’ skill – that is, the one they practise the most (writing code) – the more unknown unknowns they are subject to. The chart, then, is also a pretty good indicator of ‘unknown unknowns’ in a project. The more ‘unknown unknowns’, the more likely it is the developer will make mistakes (and introduce bugs).

How much is my technical debt?

In 2012, researchers conservatively estimated that for every 100 KLOC (thousand lines of code), an average software application had approximately US$361,000 of technical debt – that is, the cost to eliminate the structural-quality problems that seriously threatened the application’s business viability. (4)

5 steps to managing technical debt

1. Identify the technical debt – for example, applying the Swiss Cheese model to your system verification and validation (see below)

2. Measure the technical debt in terms of benefit and cost – thinking of technical debt as compound interest, the benefit is the amount of money you save by paying off the ‘loan’ early. The tricky bit is establishing what the ‘interest rate’ is for your organisation.

3. Prioritise the technical debt – identify the items that have the highest payoff and repay them first. Of course, you can only prioritise technical debts that you can see. The dichotomy here is that the aspects most likely to carry the highest technical debts are the ones you can’t currently see (the unknown unknowns)!

4. Repay the technical debt through refactoring – you can only refactor code successfully if you have adequate testing in place. That is, every restructuring change you make must have no impact on the (measurable) functionality of the system. Establishing and automating (where possible) verification and validation regimes for your project is an intrinsic quality exercise. And remember: the sacrificial lamb of project management is intrinsic quality! Companies with rampant technical debt tend to lack these regimes, thus exacerbating the problem by raising the ‘interest rate’ of their technical debt.

5. Monitor items that aren’t repaid – because their cost or value might change over time (certain technical-debt items can escalate to the point of becoming unmanageable). Once again, we can only monitor things we know are (currently) wrong. It is difficult to monitor unknown unknowns!

The Swiss Cheese Model and Technical Debt

There is no one perfect way to identify technical debts. Using multiple, independent techniques (each playing to its own strengths) is far more effective than relying on any single one.

The “Swiss Cheese” approach to identifying Technical Debt uses multiple techniques, each with a different focus. The techniques are applied with the clear knowledge that no technique is perfect (nor should it be) but the flaws in any one technique do not overlap (much!) with the flaws of another layer.

  • The Static Analysis layer in the model identifies ambiguity and mistakes in codification. These are things that are difficult for engineers to spot, but easy to fix. Static Analysis tools are readily available and have a low cost to apply regularly on a code base. However, Static Analysis cannot identify incorrect algorithms or missing code and the debts it resolves are relatively tiny.
  • The Testing layer verifies system correctness. Since it focuses on failures (deviations from customer specification), technical debts are visible and obvious to the organisation.
  • The Review layer validates requirements and designs. It asks the questions: “Are we solving the right problem?”; “Are we solving it in the right way?” As review is a human-centric activity, tools typically help very little, beyond some metrics such as cyclomatic complexity or compile-time coupling. As a result, the technical debts established by reviews are generally larger-scale, more ‘expensive’ and require far more effort (and money) to resolve.

Summary

Understanding Technical Debt is a critical part of software development project management. Sacrificing project intrinsic quality to expedite project delivery has to be very carefully balanced against the long-term costs of maintenance, extensibility, flexibility and re-use. Since the lifetime of a system could potentially extend into decades, the cost of not removing Technical Debt could eventually threaten the viability of the system.

Code-level restructuring / refactoring, whilst always beneficial, has the smallest beneficial impact. The higher ‘interest rates’ of Technical Debts associated with architectural problems typically far outweigh the benefits gained from code-level fixes.
As a result, in order to be effective, engineers should be trained in software architecture, software design and even requirements analysis (5). All these topics are far more sophisticated than writing code, and it takes time and effort to develop appropriate skills.

References

  • [1] http://www.construx.com/10x_Software_Development/Technical_Debt/
  • [2] Reducing Friction in Software Development – Paris Avgeriou, University of Groningen, Philippe Kruchten, University of British Columbia, Robert L. Nord and Ipek Ozkaya, Software Engineering Institute, Carolyn Seaman, University of Maryland, Baltimore County. Published by the IEEE Computer Society, 2016. https://ieeexplore.ieee.org/document/7367977/
  • [3] Structured Analysis and System Specification, De Marco T, Yourdon Press ISBN 0-138-54380-1, 1978
  • [4] B. Curtis, J. Sappidi, and A. Szynkarski, “Estimating the Principal of an Application’s Technical Debt,” IEEE Software, vol. 29, no. 6, 2012, pp. 34–42.
  • [5] A. Martini, J. Bosch, and M. Chaudron, “Investigating Architectural Technical Debt Accumulation and Refactoring over Time: A Multiple-Case Study,” Information and Software Technology, Nov. 2015, pp. 237–253.
Posted in General | Leave a comment