
Security and Connected Devices

September 9th, 2015
Andy McCormick

Technical Consultant at Feabhas Ltd

With the Internet of Things, we are seeing more and more devices that were traditionally “deep embedded” and isolated from the outside world becoming connected devices. Security needs to be designed into connected products from the outset as the risk of outside attacks is very real. This is especially true if you’re migrating from embedded RTOS systems to Linux and encountering a smorgasbord of “free” connectivity functionality for the first time.

Here we list 10 top tips to help make your connected device as secure as possible. Remember, in many cases, it may not be a question of ‘if’ but ‘when’ an attack occurs.

1. Keep Your Subsystems Separate

The Jeep Cherokee was chosen as a target for hacking by Charlie Miller and Chris Valasek following an assessment of the vulnerabilities of 24 models of vehicle to see if the internet-connected devices used primarily for communication and entertainment were properly isolated from the driving systems [1].

Most car driving systems are controlled over a CAN bus. That bus can be accessed via a diagnostic port – this is what happens when the car is serviced in a garage – but doing so requires physical access to the vehicle. If, however, the comms/entertainment systems are connected to the internet, and they in turn are connected to the driving systems, then the driving systems can potentially be reached from the internet.

With the explosion of devices being connected, consideration needs to be given to the criticality of functions and how to prevent remote access to a car's brakes, steering, accelerator, powertrain and engine management controls. While it might be permissible to grant remote read access for instruments (e.g. mileage and fuel consumption), any control systems should only be accessible by the driver at the controls. And with things like heart monitors starting to become connected devices, the criticality of separation is only likely to increase.

2. Secure Your Boot Code

One of the most effective ways of hijacking a system is via the boot code. Some of the earliest computer viruses, e.g. the Elk Cloner for the Apple II [2] and the Brain and Stoned viruses for PCs, infected the boot sectors of removable media. Later viruses corrupted the operating system or even loaded their own. The same possibilities exist with computers and embedded devices today if the bootloader is well known, e.g. GRUB, U-Boot or RedBoot.

Most devices designed with security in mind have a secure bootloader and a chain of trust. The bootloader will boot from a secure part of the processor and will have a digital signature, so that only a trusted version of it will run. The bootloader will then boot a signed main runtime image.

In many cases the bootloader will boot a signed second stage bootloader, which will only boot a signed main runtime. That way, the keys or encryption algorithms in the main runtime can be changed by changing the second stage bootloader.
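As a rough illustration of that chain of trust, the sketch below shows the shape of the check each stage performs on the next before handing over control. It is a minimal sketch, not any particular vendor's secure-boot API: the header layout is made up, and signature_ok() is a stand-in for the real cryptographic routine (hardware crypto engine, ROM code or a vetted library).

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Public key provisioned with the first-stage loader. On a real part
     * this lives in on-chip ROM or OTP fuses, not in rewritable FLASH.   */
    static const uint8_t rom_public_key[64] = { /* device-specific */ 0 };

    /* Header prepended to every signed stage (illustrative layout).      */
    typedef struct {
        uint32_t magic;          /* identifies a signed image             */
        uint32_t version;        /* monotonically increasing              */
        uint32_t length;         /* payload length in bytes               */
        uint8_t  signature[64];  /* signature over version and payload    */
    } image_header_t;

    /* Placeholder for the real cryptographic check.                      */
    static bool signature_ok(const uint8_t *key, const uint8_t *sig,
                             const uint8_t *data, size_t len)
    {
        (void)key; (void)sig; (void)data; (void)len;
        return false;            /* stub: replace with real verification  */
    }

    #define IMAGE_MAGIC 0x53424F4Fu

    /* Decide whether the next stage may run. The same shape of check is
     * repeated at each link in the chain: the first-stage loader verifies
     * the second stage, which in turn verifies the main runtime image.   */
    static bool image_trusted(const image_header_t *hdr)
    {
        const uint8_t *payload = (const uint8_t *)(hdr + 1);

        if (hdr->magic != IMAGE_MAGIC)
            return false;                        /* not a signed image    */

        return signature_ok(rom_public_key, hdr->signature,
                            payload, hdr->length);
    }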

3. Use Serialisation and Control Your Upgrade Path

When upgrading images in the field (to support new features, or to fix bugs or security flaws), use serialisation to target specific units at particular times; this reduces the risk of large numbers of units failing simultaneously after an upgrade.

Each runtime image should be signed with a version number so that only higher-numbered versions can run. Upgrades can be controlled by a combination of different keys held in the unit’s FLASH.
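A sketch of that version check, assuming the version number travels inside the signed image header (so it cannot be tampered with) and the minimum acceptable version is held in a region of FLASH that can only ever be raised; the names here are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the value held in the unit's FLASH; ideally an OTP or
     * monotonic-counter region so that it can only move forwards.        */
    static uint32_t stored_min_version = 1;

    /* Accept an upgrade only if it is newer than anything already run,
     * then ratchet the stored minimum so that an old (possibly
     * vulnerable) image can never be re-installed.                       */
    static bool accept_upgrade(uint32_t image_version)
    {
        if (image_version <= stored_min_version)
            return false;                        /* rollback: reject      */

        stored_min_version = image_version;      /* ratchet forwards      */
        return true;
    }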

4. Design for Disaster Recovery

Your box no longer boots in the field because the runtime image has become corrupted. What then?
Truck rolls or recalls are very expensive and they deprive the user of their product. There are alternatives:

(i) Keep a copy of the runtime for disaster recovery. This can be stored in onboard FLASH as a mirror of the runtime itself, or in a recovery medium, e.g. a USB stick, which is favoured these days by PC manufacturers.

(ii) Alternatively, the bootloader can automatically try for an over-the-air download – this is often favoured with things like set top boxes where the connection is assumed good (it wouldn’t be much of a set top box if it couldn’t tune or access the internet). This saves on FLASH but deprives the user of their product while the new runtime image is being downloaded.
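A common way to combine option (i) with an automatic fallback is an A/B layout: the bootloader tries the active image first and drops back to the recovery copy if the active image fails verification or keeps crashing before it can report itself healthy. A rough sketch (the slot layout and limits are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_BOOT_ATTEMPTS 3u

    typedef struct {
        bool     valid;          /* image present and signature verified  */
        uint32_t boot_attempts;  /* cleared by the runtime once it has    */
                                 /* booted and reported itself healthy    */
    } slot_state_t;

    typedef enum { SLOT_ACTIVE, SLOT_RECOVERY, SLOT_NONE } boot_slot_t;

    /* Pick which copy of the runtime to boot. If the active image is
     * corrupt, or keeps failing before marking itself healthy, fall back
     * to the recovery mirror rather than bricking the unit.              */
    static boot_slot_t choose_boot_slot(const slot_state_t *active,
                                        const slot_state_t *recovery)
    {
        if (active->valid && active->boot_attempts < MAX_BOOT_ATTEMPTS)
            return SLOT_ACTIVE;

        if (recovery->valid)
            return SLOT_RECOVERY;  /* or trigger an over-the-air download */

        return SLOT_NONE;          /* nothing bootable: stay in loader    */
    }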

5. Switch Off Debug Code

Don’t give out any information that might be of use to the outside world. The Jeep Cherokee hack was made possible by an IP address being passed back to the user. It’s hard to see what use this would be to a typical non-technical user, so it should never have been exposed in the first place.
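The simplest way to guarantee that debug output cannot leak from production units is to compile it out altogether rather than merely lowering a verbosity setting, so the strings never even reach the shipped binary. A minimal sketch (the macro name and example address are made up):

    #include <stdio.h>

    /* Debug output exists only in development builds; in production
     * builds the macro expands to nothing, so IP addresses, internal
     * state and file names are absent from the image.                    */
    #ifdef DEBUG_BUILD
    #define DEBUG_LOG(...) fprintf(stderr, __VA_ARGS__)
    #else
    #define DEBUG_LOG(...) ((void)0)
    #endif

    int main(void)
    {
        DEBUG_LOG("head unit IP: %s\n", "192.168.5.1"); /* dev builds only */
        return 0;
    }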

6. Harden the Kernel

The Linux kernel contains thousands of options, including various ports, shells and communication protocols. It almost goes without saying that any production build needs everything switched off except the features you need, but implementing this isn’t always straightforward because of the inter-dependencies of some kernel features. Don’t use bash unless it’s unavoidable; use a smaller shell such as ash instead. The disclosure of Shellshock, a 25-year-old vulnerability [3], in September 2014 triggered a tidal wave of hacks, mainly distributed denial-of-service attacks and vulnerability scanning.

Disable telnet. Disable SSH unless you have an essential usage requirement. Disable HTTP. If there is any way a user might form a connection with the box, especially using a method well-used on other boxes, that’s a door into the box that needs locking.

As the capabilities and connectivity of embedded RTOS systems approach those of embedded Linux in machine-to-machine (M2M) communications and the Internet of Things, similar “hardening” processes need to be followed.

7. Use a Trusted Execution Environment

Most of the main processors used in connected devices (smart phones, tablets, smart TVs, set top boxes) now contain a secure area known as a Trusted Execution Environment (TEE).

A TEE provides an isolated execution environment in which confidential assets (e.g. video content, banking information) can be processed away from the main operating system. Four popular uses are:
(i) premium content protection, especially 4k UHD content
(ii) mobile financial services
(iii) authentication (facial recognition, fingerprints and voice)
(iv) secure handling of commercially sensitive or government-classified information on devices.

TEEs have two security levels:
Profile 1 is intended to prevent software attacks.
Profile 2 is intended to prevent hardware and software attacks.
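From the non-secure (“rich OS”) side, an application usually reaches a trusted application through the GlobalPlatform TEE Client API, which is what implementations such as OP-TEE expose. The sketch below shows the general shape of opening a session and invoking a command; the UUID and command ID are made up, and the exact header name and function signatures should be checked against your TEE vendor’s SDK.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <tee_client_api.h>   /* GlobalPlatform TEE Client API */

    /* UUID of the trusted application we want to talk to (illustrative). */
    static const TEEC_UUID ta_uuid = {
        0x12345678, 0x0000, 0x0000,
        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 }
    };

    #define CMD_DECRYPT_CONTENT 0u  /* command ID defined by the TA (illustrative) */

    int main(void)
    {
        TEEC_Context   ctx;
        TEEC_Session   sess;
        TEEC_Operation op;
        uint32_t       origin;

        /* Connect to the TEE and open a session with the trusted app.    */
        if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
            return 1;
        if (TEEC_OpenSession(&ctx, &sess, &ta_uuid, TEEC_LOGIN_PUBLIC,
                             NULL, NULL, &origin) != TEEC_SUCCESS) {
            TEEC_FinalizeContext(&ctx);
            return 1;
        }

        /* Ask the TA to do the sensitive work; keys and plaintext never
         * leave the secure world.                                        */
        memset(&op, 0, sizeof(op));
        op.paramTypes = TEEC_PARAM_TYPES(TEEC_NONE, TEEC_NONE,
                                         TEEC_NONE, TEEC_NONE);
        if (TEEC_InvokeCommand(&sess, CMD_DECRYPT_CONTENT, &op, &origin)
                != TEEC_SUCCESS)
            fprintf(stderr, "trusted application command failed\n");

        TEEC_CloseSession(&sess);
        TEEC_FinalizeContext(&ctx);
        return 0;
    }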

8. Use a Container Architecture

If you are designing a system with a processor that doesn’t use a TEE, you can still design a reasonably safe solution using a container architecture to isolate your key processes.

Linux Containers (LXC) have been around since August 2008 and rely on the kernel cgroups functionality that first appeared in kernel version 2.6.24. LXC 1.0, which appeared in February 2014, is considerably more secure than earlier implementations, allowing regular users to run “unprivileged containers”.

Alternatives to LXC are virtualization technologies such as OpenVZ and Linux-VServer. Other operating systems contain similar technologies, such as FreeBSD jails, Solaris Containers and AIX Workload Partitions. Apple’s iOS also uses containers.
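As a flavour of what driving a container programmatically looks like, the sketch below uses liblxc’s C API, closely following the upstream LXC example; the container name and the distribution arguments to the “download” template are illustrative, and the available releases change over time.

    #include <lxc/lxccontainer.h>
    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        /* Handle for a container called "sandbox" (name is illustrative). */
        struct lxc_container *c = lxc_container_new("sandbox", NULL);
        if (!c)
            return 1;

        /* Create it from the generic "download" template if it does not
         * already exist, then start it detached from our terminal.        */
        if (!c->is_defined(c)) {
            if (!c->createl(c, "download", NULL, NULL, LXC_CREATE_QUIET,
                            "-d", "ubuntu", "-r", "trusty", "-a", "amd64",
                            NULL)) {
                fprintf(stderr, "failed to create container\n");
                lxc_container_put(c);
                return 1;
            }
        }

        c->want_daemonize(c, true);
        if (!c->start(c, 0, NULL))
            fprintf(stderr, "failed to start container\n");

        /* ... run the isolated workload, then shut it down cleanly ...    */
        c->shutdown(c, 30);

        lxc_container_put(c);
        return 0;
    }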

9. Lock Your JTAG Port

Qihoo 360 Unicorn Team’s hack of Zigbee [4] was made possible by dumping the contents of the FLASH from the board of the IoT gateway. This enabled them to identify the keys used on the network. The fact that the keys themselves were stored in a format that enabled them to be decoded made the hack easier.

If your JTAG port is unlocked, and hackers have access to the development tools used for the target processor, then they could potentially overwrite any insecure boot code with their own, allowing them to take control of the system and its upgrades.

10. Encrypt Communications Channels and any Key Data

If all the above steps are taken, a device can still be vulnerable to a man-in-the-middle attack if the payload is sent unencrypted.

If you have a phone, tablet, smart TV or set top box accessing video on demand (VOD), the user commands need to be encrypted; otherwise it is possible to get free access to the VOD server by spoofing the server to capture box commands, and then spoofing the box to capture the server responses. It might even be possible to hack the server to grant access to multiple devices in the field, or to mount a denial of service attack.

GPS spoofing by Qihoo 360 was demonstrated at DEF CON 23, where signals were recorded and re-broadcast [5]. It’s not the first time GPS spoofing has happened, and spoofing and man-in-the-middle attacks on any user-connected system are commonplace.

Bonus Extra Tip: Get a Third Party to Break It

This is probably the most useful advice of all. As with software testing in general, engineers shouldn’t rely on marking their own homework: the same blind spots missed in a design will be missed in testing. Engineers designing systems won’t have the same mentality as those trying to hack them. An extra pair of eyes going over the system trying to break it will expose vulnerabilities you never thought existed.


Security is a vast subject and we’ve only scratched the surface in this blog.
Feabhas offers a course in Secure Linux Programming (EL-402).


References
1. Fiat Chrysler Jeep Cherokee hack
2. Elk Cloner
3. Shellshock
4. Zigbee hack, DEF CON 23
5. GPS spoofing, DEF CON 23

goto fail and embedded C Compilers

February 27th, 2014

Niall Cooling

Director at Feabhas Limited

I can’t imagine anyone reading this posting hasn’t already read about the Apple “goto fail” bug in SSL. My reaction was one of incredulity; I really couldn’t believe this code could have got into the wild on so many levels.

First we’ve got to consider the testing (or lack thereof) for this codebase. The side effect of the bug was that all SSL certificates passed, even malformed ones. This implies positive testing (i.e. we can demonstrate it works) but no negative testing (i.e. with a malformed SSL certificate), or perhaps even no dynamic SSL certificate testing at all?

What I haven’t established* is whether the bug came about through code removal (e.g. there was another ‘if’ statement before the second goto) or whether, through trial-and-error programming, the extra goto got added (along with other code) and then didn’t get removed in a clean-up. There are, of course, some schools of thought that believe it was deliberately put in as part of PRISM!

Then you have to query regression testing; have they never tested for malformed SSL certificates (I can’t believe that; mind you I didn’t believe Lance was doping!) or did they use a regression-subset for this release which happened to miss this bug? Regression testing vs. product release is always a massive pressure. Automation of regression testing through continuous integration is key, but even so, for very large code bases it is simplistic to say “rerun all tests”; we live in a world of compromises.

Next, if we actually analyse the code, then I can imagine the MISRA-C group jumping around saying “look, look, if only they’d followed MISRA-C this couldn’t have happened” (yes Chris, it’s you I’m envisaging) and of course they’re correct. This code breaks a number of MISRA-C:2012 rules, but most notably:

15.6 (Required) The body of an iteration-statement or selection-statement shall be a compound-statement

This boils down to requiring that all if-statements use a block structure, so (ignoring the glaring coding error of the two gotos) the code would go from:
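(The lines below are reconstructed from the widely published fragment of SSLVerifySignedServerKeyExchange(), condensed for illustration.)

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

to the compound-statement form that rule 15.6 requires:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0) {
        goto fail;
    }
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0) {
        goto fail;
    }

Had the braces been mandatory, the infamous duplicated “goto fail;” would either have landed inside the block (redundant, but still conditional) or sat conspicuously outside an if-statement, where it is far more likely to be caught by review, static analysis or an unreachable-code warning. Either way, err could not silently retain its success value while the final hash check was skipped.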

Can existing embedded applications benefit from Multicore Technology?

June 15th, 2012

Niall Cooling

Director at Feabhas Limited

It feels that not a day goes by without a new announcement regarding a major development in multicore technology. With so much press surrounding multicore, you have to ask the question “Is it for me?” i.e. can I utilise multicore technology in my embedded application?

However, from a software developer’s perspective, all the code examples seem to demonstrate the (same) massive performance improvements to “rendering fractals” or “ray tracing programs”. The examples always refer to Amdahl’s Law, showing the gains when using, say, 16 or 128 cores. This is all very interesting, but not what I would imagine most embedded developers would consider “embedded”. These types of programs are sometimes referred to as “embarrassingly parallel” because it is so obvious they would benefit from parallel processing. In addition, the examples use proprietary solutions, such as TBB from Intel, or language extensions with limited platform support, e.g. OpenMP. Meanwhile, this area of parallelisation is increasingly being addressed using multicore General Purpose Graphics Processing Units (GPGPUs), such as PowerVR from Imagination Technologies and Mali from ARM, programmed with OpenCL; however, that is getting off-topic.
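For reference, Amdahl’s Law bounds the speed-up available from N cores by the fraction of the program that can actually run in parallel. A quick sketch of the arithmetic (the 95% parallel fraction is purely illustrative):

    #include <stdio.h>

    /* Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the
     * parallelisable fraction of the work and n the number of cores.     */
    static double amdahl_speedup(double p, unsigned n)
    {
        return 1.0 / ((1.0 - p) + p / (double)n);
    }

    int main(void)
    {
        /* Even with 95% of the work parallelisable, 16 cores give about
         * 9.1x and 128 cores only about 17.4x; the serial 5% caps the
         * achievable speed-up at 20x no matter how many cores you add.   */
        printf("16 cores : %.1fx\n", amdahl_speedup(0.95, 16));
        printf("128 cores: %.1fx\n", amdahl_speedup(0.95, 128));
        return 0;
    }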

So taking “fractals”, OpenMP and GPGPUs out of the equation, is multicore really useful for embedded systems?

Artisan – Aonix merger

February 1st, 2010


Late last month the merger of Artisan and Aonix was announced. This is yet another interesting move in the supply of tools to the embedded systems community.

The tool market has seen a number of significant changes over the last year. It started just over a year ago with the acquisition of Telelogic by IBM. At that time Telelogic were starting to become the dominant player in the embedded design tool market (predominantly UML), based primarily around their Rhapsody tool (which of course came into their stable with their acquisition of I-Logix back in 2006). A lot of people, me included, were concerned that Telelogic would be swallowed up into IBM and lose their focus and support for the embedded community.

Much of this concern was based on IBM’s previous acquisition of Rational back in 2003. At that time Rational were certainly a player in the embedded UML market, but slowly fewer and fewer resources were given over, and there was no real investment in tools such as Rational Rose RealTime. Sure enough, within a year the Telelogic name had disappeared, to be replaced by IBM Rational. As an embedded developer you would be hard pressed to find out what support IBM offer unless you know what to look for (try, for example, typing “embedded” into the IBM search box; you won’t find much of use to the embedded developer).

This has meant that a void has been developing for anyone looking to use UML in real-time embedded projects. Obviously there are a number of generic tools around, such as Enterprise Architect from Sparx Systems and MagicDraw from No Magic, but neither is focused on real-time embedded projects.

The merger of Artisan and Aonix makes a lot of sense for both companies. First, it gives Artisan a much better presence in the US and thus leaves them better positioned to exploit the void left by Telelogic. Also, Aonix’s strength has historically been around safety-related software systems, which aligns with where Artisan has been having success. The merger may also help Artisan address some of the areas where they have been perceived as weak compared to Telelogic, most notably code generation and simulation. Finally, if the code generation is addressed, then it may prove a good platform for gaining acceptance of Aonix’s really interesting, but still relatively niche, Java PERC offering.

So expect to see the name Atego much more in 2010.