Becoming a Rule of Zero Hero

November 5th, 2015

“Do, or do not; there is no ‘try’.”

Previously, we’ve looked at The Rule of Zero which, in essence, says: avoid doing your own resource management; use a pre-defined resource-managing type instead.

This is an excellent guideline and can significantly improve the quality of your application code. However, there are some circumstances where you might not get exactly what you were expecting. It’s not that the code will fail; it just might not be as efficient as you thought.

Luckily, the solution is easy to implement and has the additional side-effect of making your code even more explicit.


Bitesize Modern C++ : Smart pointers

October 22nd, 2015

The dynamic creation and destruction of objects was always one of the bugbears of C. It required the programmer to (manually) control the allocation of memory for the object, handle the object’s initialisation then ensure that the object was safely cleaned-up after use and its memory returned to the heap. Because many C programmers weren’t educated in the potential problems (or were just plain lazy or delinquent in their programming) C got a reputation in some quarters for being an unsafe, memory-leaking language.

Things didn’t significantly improve in C++. We replaced malloc and free with new and delete; but the memory management issue remained.
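The post originally showed a code figure here, which has been lost; below is a sketch of the kind of leak-prone new[]/delete[] code it likely illustrated (the function names are invented for illustration):

```cpp
#include <cstring>

// Hypothetical reconstruction: new[] with no matching delete[].
char* make_greeting(const char* name)
{
    char* buffer = new char[64];
    std::strcpy(buffer, "Hello, ");
    std::strcat(buffer, name);
    return buffer;               // the caller must remember to delete[] it
}

void greet()
{
    char* msg = make_greeting("world");
    (void)msg;
    // oops: no delete[] msg - the 64 bytes leak on every call
}
```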


I concede – the code above is trivial and stupid but I suspect if I looked around I could find similar (or even worse!) examples in actual production code.

Languages such as Java and C# solved this problem by taking memory management out of the hands of the programmer and using a garbage collector mechanism to ensure memory is cleaned up when not in use.

Modern C++ has chosen not to go down this route; instead it makes use of C++’s Resource Acquisition Is Initialisation (RAII) mechanism to encapsulate dynamic object creation / destruction within smart pointers.

A smart pointer is basically a class that has the API of a ‘raw’ pointer. In Modern C++ we have four classes for dynamic object management:

std::auto_ptr : Single-owner managed pointer, from C++98; now deprecated

std::shared_ptr : A reference-counted pointer, introduced in C++98 TR1

std::unique_ptr : Single-owner managed pointer which replaces (the now deprecated) auto_ptr

std::weak_ptr : Works with shared_ptr in situations where circular references could be a problem


Avoid using std::auto_ptr

std::auto_ptr was introduced in C++98 as a single-owner resource-managed smart pointer. That is, only one auto_ptr can ever be pointing at the resource.

auto_ptr objects have the peculiarity of taking ownership of the pointers assigned (or copied) to them: an auto_ptr that has ownership over an element is responsible for destroying the element it points to, and for deallocating its memory, when the auto_ptr itself is destroyed. The destructor does this by calling delete automatically.


When an assignment operation takes place between two auto_ptr objects, ownership is transferred, which means that the object losing ownership no longer points to the element (it is set to null). This also happens if you copy from one auto_ptr to another – either explicitly, or by passing an auto_ptr to a function by value.
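Since std::auto_ptr was removed from the language in C++17, the sketch below shows its surprising behaviour in comments, alongside the explicit transfer that its modern replacement demands:

```cpp
#include <memory>
#include <utility>

// std::auto_ptr behaviour (shown as comments - it no longer compiles
// under C++17 and later):
//
//   std::auto_ptr<int> p1(new int(42));
//   std::auto_ptr<int> p2 = p1;   // looks like a copy, but silently
//                                 // transfers ownership: p1 is now null!
//   *p1;                          // null pointer dereference
//
// std::unique_ptr makes the same transfer explicit instead:
bool transfer_is_explicit()
{
    std::unique_ptr<int> p1(new int(42));
    std::unique_ptr<int> p2 = std::move(p1);   // cannot happen by accident
    return p1 == nullptr && *p2 == 42;
}
```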

This could lead to unexpected null pointer dereferences – an unacceptable consequence for most (if not all) systems. Therefore, we recommend avoiding the use of auto_ptr. It was deprecated in C++11 (and replaced with the much more consistent std::unique_ptr).


Use std::unique_ptr for single ownership

std::unique_ptr allows single ownership of a resource. A std::unique_ptr is an RAII wrapper around a ‘raw’ pointer, so it occupies no more memory than (and is generally as fast as) a raw pointer. Unless you need more complex semantics, unique_ptr is your go-to smart pointer.

unique_ptr does not allow copying (by definition); but it does support move semantics, so you can explicitly transfer ownership of the resource to another unique_ptr.


The utility function make_unique<T>() hides away the memory allocation and is the preferred mechanism for dynamically creating objects. make_unique<T>() is not officially supported in C++11; but it is part of C++14 and is supported by many C++11-compliant compilers. (A quick search will turn up an implementation if your compiler doesn’t currently support it)
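Putting the two together, a minimal sketch of creation via make_unique and explicit ownership transfer (the Sensor type is invented for illustration):

```cpp
#include <memory>
#include <utility>

// 'Sensor' is a hypothetical payload type.
struct Sensor { int id = 0; };

int transfer_demo()
{
    auto p1 = std::make_unique<Sensor>();   // preferred creation (C++14)
    p1->id = 7;

    // std::unique_ptr<Sensor> p2 = p1;     // won't compile: no copying

    std::unique_ptr<Sensor> p2 = std::move(p1);  // explicit transfer
    int p1_is_empty = (p1 == nullptr);           // p1 now owns nothing
    return p2->id + (p1_is_empty ? 100 : 0);
}   // the Sensor is deleted automatically when p2 goes out of scope
```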


For sharing a resource, use std::shared_ptr

std::shared_ptr is a reference-counted smart pointer.

Creating a new dynamic object also creates a new associated management structure that holds (amongst other things) a reference count of the number of shared_ptrs currently ‘pointing’ at the object.

Each time a shared_ptr is copied the reference count is incremented. Each time one of the pointers goes out of scope the reference count on the resource is decremented. When the reference count is zero (that is, the last shared_ptr referencing the resource goes out of scope) the resource is deleted.

std::shared_ptrs have a higher overhead (in memory and code) than std::unique_ptr but they come with more sophisticated behaviours (like the ability to be copied at relatively low cost).


Once again, the standard library provides a utility function make_shared<T>() for creating shared dynamic objects; and, once again, this is the preferred mechanism.
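A small sketch of the reference counting in action (the Track type is invented for illustration; use_count() reports the current number of owners):

```cpp
#include <memory>

// 'Track' is a hypothetical type for illustration.
struct Track { double length = 0.0; };

long count_demo()
{
    auto sp1 = std::make_shared<Track>();    // preferred creation; count == 1
    long c = sp1.use_count();                // 1
    {
        auto sp2 = sp1;                      // copying increments: count == 2
        c += sp2.use_count();                // 1 + 2 == 3
    }                                        // sp2 destroyed: back to 1
    return c * 10 + sp1.use_count();         // 31
}                                            // last owner gone: Track deleted
```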


Use std::weak_ptr for tracking std::shared_ptrs

A std::weak_ptr is related to a std::shared_ptr. Think of a weak_ptr as a ‘placeholder’ for a shared_ptr. std::weak_ptrs are useful if you want to track the existence of a resource without the overhead of a shared_ptr; or you need to break cyclic dependencies between shared_ptrs (A topic that is outside the scope of this article; but have a look here if you’re interested)

When you create a weak_ptr it must be constructed from an extant shared_ptr. It then becomes a ‘placeholder’ for that shared_ptr. You can store weak_ptrs, copy and move them, but doing so has no effect on the reference count of the resource.


Note you cannot directly use a weak_ptr. You must convert it back to a shared_ptr first. weak_ptrs have a method, lock(), that creates (in effect) a copy of the original shared_ptr, which can then be accessed.


Since weak_ptrs can have a different lifetime to their associated shared_ptr there is a chance the original shared_ptr could go out of scope (and conceptually delete its resource) before the weak_ptr is destroyed. (Strictly speaking, the resource is deleted when the last referencing shared_ptr goes out of scope; the associated management structure survives until the last weak_ptr has gone too.)

A weak_ptr can therefore be invalid – that is, referencing a resource that is no longer viable. You should use the expired() method on the weak_ptr to see if it is still valid, before attempting to access it (alternatively, calling lock() on an expired weak_ptr will return an empty shared_ptr).
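A sketch of both situations, lock() on a live resource and expired() after the last shared_ptr has gone:

```cpp
#include <memory>

int lock_demo()
{
    std::weak_ptr<int> wp;
    auto sp = std::make_shared<int>(42);
    wp = sp;                             // does not affect the ref count
    if (auto locked = wp.lock())         // promote to shared_ptr to use it
        return *locked;
    return -1;
}

bool expired_demo()
{
    std::weak_ptr<int> wp;
    {
        auto sp = std::make_shared<int>(42);
        wp = sp;
    }                                    // last shared_ptr gone...
    return wp.expired() && !wp.lock();   // ...so wp is now expired
}
```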


That’s all for now.

We’ve got to the end of the Bitesize series for Modern C++.  You should now be in a much stronger position to explore the new features of C++ in more detail.

If you missed an article, or just want the complete set in a single document, you can download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++ : std::array

October 8th, 2015

C++98 inherited C’s only built-in container, the array. Arrays of non-class types behave in exactly the same way as they do in C. For class types, when an array is constructed the default constructor is called on each element in the array.

Explicitly initialising objects in an array is one of the few times you can explicitly invoke a class’s constructor.
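The code figure is lost here; a plausible reconstruction (Position and track are the names the text below refers to; the values are invented):

```cpp
// Hypothetical reconstruction of the lost example.
class Position
{
public:
    Position() : val(0.0) {}                 // default constructor
    explicit Position(double v) : val(v) {}  // non-default constructor
    double value() const { return val; }
private:
    double val;
};

// Explicit constructor invocation for the first three elements only:
Position track[5] = { Position(1.0), Position(2.0), Position(3.0) };
```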


For track[], the non-default constructor is called for the first three elements, followed by the default (no-parameter) constructor for the last two elements; hence they are 0.0.

(Note the performance implications of this – five constructor calls will be made whether you explicitly initialise the objects or not.)

Arrays are referred to as ‘degenerate’ containers; or, put more antagonistically: they are a lie.

Arrays are basically a contiguous sequence of memory, pointers, and some syntactic sugar. This can lead to some disturbing self-delusion on the part of the programmer.
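The lost code figure likely looked something like this (a hypothetical reconstruction; Position, process and array_sizeof are the names the text below discusses):

```cpp
#include <cstddef>

struct Position { double latitude; double longitude; };

#define array_sizeof(a) (sizeof(a) / sizeof((a)[0]))

std::size_t process(Position p[5])      // looks like an array of five...
{
    std::size_t n = array_sizeof(p);    // ...but p is really a Position*,
                                        // so this does NOT yield 5
    ++p;                                // and we can even increment it!
    return n;
}

Position track5[5] = {};
```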


Despite the fact that the declaration of process() appears to specify an array of five Position objects, it is in fact a simple Position* that is passed. This explains why the array_sizeof macro fails (since the size of a Position is greater than the size of a pointer!). It also explains why we can increment the array name inside process() – something that would be an error in main(), where the array name is a constant.

In C++11, use of ‘raw’ arrays is undesirable; and there are more effective alternatives.

std::array is a fixed-size contiguous container. The class is a template with two parameters – the type held in the container, and the size.
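A minimal sketch of declaring and using one (the values are invented), including the checked at() access discussed below:

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>

std::array<int, 5> values = { 1, 2, 3, 4, 5 };

int sum()
{
    int total = 0;
    for (std::size_t i = 0; i < values.size(); ++i)
        total += values[i];            // [] : no bounds checking
    return total;
}

bool at_checks_bounds()
{
    try { (void)values.at(10); }       // at() : checks the index...
    catch (const std::out_of_range&) { return true; }  // ...and throws
    return false;
}
```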


std::array does not perform any dynamic memory allocation. Basically, it’s a thin wrapper around C-style arrays. Memory is allocated – as with built-in arrays – on the stack or in static memory. Because of this, and unlike std::vector, std::arrays cannot be resized.

If C-style notation is used there is no bounds-checking on the std::array; however, if the at() function is used an exception (std::out_of_range) will be thrown if an attempt is made to access outside the range of the array.

std::arrays also have the advantage that they support all the facilities required by the STL algorithms so they can be used wherever a vector or list (etc.) could be used; without the overhead of dynamic memory management.


Finally, because container types are classes (not syntactic sugar) they can be passed around the system like ‘proper’ objects.
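For instance (a sketch): passed by value, a std::array copies all of its elements, where a built-in array would silently decay to a pointer:

```cpp
#include <array>

std::array<int, 3> triple = { 1, 2, 3 };

int first_of(std::array<int, 3> a)   // a genuine copy - no decay
{
    a[0] = 99;                       // modifies only the local copy
    return a[0];
}
```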



More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++ : noexcept

September 24th, 2015

We have some basic problems when trying to define error management in C:

  • There is no “standard” way of reporting errors. Each company / project / programmer has a different approach
  • Given the basic approaches, you cannot guarantee the error will be acted upon.
  • There are difficulties with error propagation; particularly with nested calls.

The C++ exception mechanism gives us a facility to deal with run-time errors or fault conditions that make further execution of a program meaningless.

In C++98 it is possible to specify in a function declaration which exceptions a function may throw.
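The declarations from the lost figure, reconstructed from the text below (bodies added so the fragment compiles; the throw(type-list) form is shown as a comment because later standards removed it entirely):

```cpp
class Sensor_Failed {};   // exception type named in the original example

int  get_value() { return 42; }   // no specification: may throw anything
void display() throw() {}         // empty specification: throws nothing

// C++98 only - throw(type-list) was deprecated in C++11, removed in C++17:
//   void set_value(int v) throw(char*, Sensor_Failed);
```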


The above function declarations state:

  • get_value() can throw any exception. This is the default.
  • display() will not throw any exceptions.
  • set_value() can throw exceptions only of type char* and Sensor_Failed; it cannot throw exceptions of any other type.

This looks wonderful, but compilers (can) only partially check exception specifications for compliance at compile-time.
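A hypothetical reconstruction of the lost process() example; in the original C++98 code the declaration would have carried the specification `throw(std::out_of_range)` (removed in C++17, so shown here only as a comment to keep the sketch compilable):

```cpp
#include <stdexcept>

// Original (C++98) declaration:
//     void process(int i) throw(std::out_of_range);
void process(int i)
{
    if (i < 0)
        throw std::logic_error("oops");  // not std::out_of_range: with the
                                         // specification in force, this
                                         // would trigger std::unexpected()
}

bool breaks_its_promise()
{
    try { process(-1); }
    catch (const std::logic_error&) { return true; }
    return false;
}
```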


If process() throws an exception of any type other than std::out_of_range this will cause the exception handling mechanism – at run-time – to call the function std::unexpected() which, by default, calls std::terminate() (although its behaviour can – and probably should – be replaced).

Because of the limitations of compile-time checking, for C++11 the exception specification was simplified to two cases:

  • A function may propagate any exception; as before, the default case
  • A function may not throw any exceptions.

Marking a function as throwing no exceptions is done with the exception specifier, noexcept.

(If you read the noexcept documentation you’ll see it can take a boolean constant-expression parameter. This parameter allows (for example) template code to conditionally restrict the exception signature of a function based on the properties of its parameter type. noexcept on its own is equivalent to noexcept(true). The use of this mechanism is beyond the scope of this article.)


On the face of it, the following function specifications look semantically identical – both state that the function will not throw any exceptions:
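The two specifications side by side (the function names are invented):

```cpp
void f() throw() {}    // C++98: dynamic exception specification
                       // (deprecated in C++11, removed in C++20)
void g() noexcept {}   // C++11: noexcept specifier
```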


The difference is in the run-time behaviour and its consequences for optimisation.

With the throw() specification, if the function (or one of its subordinates) throws an exception, the exception handling mechanism must unwind the stack looking for a ‘propagation barrier’ – a (set of) catch clauses. Here, the exception specification is checked and, if the exception being thrown doesn’t match the provided specification, std::unexpected() is called.

However, std::unexpected() can itself throw an exception. If the exception thrown by std::unexpected() is valid for the current exception specification, exception propagation and stack unwinding continues as before.

This means that there is little opportunity for optimisation by the compiler for code using a throw() specification; in fact, the compiler may even introduce pessimisations to the code:

  • The stack must be maintained in an unwindable state.
  • Destructor order must be maintained to ensure objects going out of scope as a result of the exception are destroyed in the opposite order to their construction.
  • The compiler may introduce new propagation barriers to the code, introducing new exception table entries, thus making the exception handling code bigger.
  • Inlining may be disabled for the function.

In contrast, in the case of a noexcept function specification std::terminate() is called immediately, rather than std::unexpected(). Because of this, the compiler does not have to keep the stack unwindable through the call, allowing it a much wider range of optimisations.

In general, then, if you know your function will never throw an exception, prefer to specify it as noexcept, rather than throw().


More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++ : Override and Final

September 10th, 2015

Override specifier

In C++98 using polymorphic types can sometimes lead to head-scratching results:
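The code figure is lost here; a plausible reconstruction using the names the text refers to (Base, Derived, op, usePolymorphicObject; the long/int parameter types come from the explanation below):

```cpp
#include <string>

// Hypothetical reconstruction of the lost example.
class Base
{
public:
    virtual std::string op(long x) { (void)x; return "Base::op"; }
    virtual ~Base() {}
};

class Derived : public Base
{
public:
    std::string op(int x) { (void)x; return "Derived::op"; }  // an overload,
};                                                            // NOT an override!

std::string usePolymorphicObject(Base& b)
{
    return b.op(0);
}

std::string demo()
{
    Derived d;
    return usePolymorphicObject(d);   // "Base::op" - surprise!
}
```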

On the face of it this code looks sound; indeed it will compile with no errors or warnings. However, when it runs the Base version of op() will be executed!

The reason? Derived’s version of op() is not actually an override of Base::op since int and long are considered different types (it’s actually a conversion between an int and a long, not a promotion)

The compiler is more than happy to let you overload functions in the Derived class interface; but in order to call the overload you would need to (dynamic) cast the Base class object in usePolymorphicObject().

In C++11 the override specifier is a compile-time check to ensure you are, in fact, overriding a base class method, rather than simply overloading it.
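A sketch of the check in action (hypothetical code; the mismatched overload is shown commented out, since it would be rejected at compile time):

```cpp
class Base
{
public:
    virtual long op(long x) { return x; }
    virtual ~Base() {}
};

class Derived : public Base
{
public:
    // long op(int x) override { return x; }   // compile error: does not
    //                                         // override anything in Base
    long op(long x) override { return x * 2; } // OK: a genuine override
};

long demo()
{
    Derived d;
    Base& b = d;
    return b.op(21);    // dynamic dispatch now reaches Derived::op
}
```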


Final specifier

In some cases you want to make a virtual function a ‘leaf’ function – that is, no derived class can override the method. The final specifier provides a compile-time check for this:
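A sketch (hypothetical code; the rejected override is shown commented out):

```cpp
class Base
{
public:
    virtual int op() { return 1; }
    virtual ~Base() {}
};

class Derived : public Base
{
public:
    int op() override final { return 2; }   // a 'leaf' function
};

class Further : public Derived
{
    // int op() override { return 3; }  // compile error: op() is final
};

int demo()
{
    Further f;
    Base& b = f;
    return b.op();    // Derived::op - the leaf implementation
}
```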


More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Security and Connected Devices

September 9th, 2015

With the Internet of Things, we are seeing more and more devices that were traditionally “deep embedded” and isolated from the outside world becoming connected devices. Security needs to be designed into connected products from the outset as the risk of outside attacks is very real. This is especially true if you’re migrating from embedded RTOS systems to Linux and encountering a smorgasbord of “free” connectivity functionality for the first time.

Here we list 10 top tips to help make your connected device as secure as possible. Remember, in many cases, it may not be a question of ‘if’ but ‘when’ an attack occurs.

1. Keep your subsystems separate.

The Jeep Cherokee was chosen as a target for hacking by Charlie Miller and Chris Valasek following an assessment of the vulnerabilities of 24 models of vehicle to see if the internet-connected devices used primarily for communication and entertainment were properly isolated from the driving systems [1].

Most car driving systems are controlled using a CAN bus. You could access them via a diagnostic port – this is what happens when they are serviced in a garage. You would have to have physical access to the vehicle to do this. But if you are connecting to the comms/entertainment systems via the internet, and they’re connected to the driving systems, you could potentially access the driving systems from the internet.

With the explosion of devices being connected, consideration needs to be made to the criticality of functions and how to prevent remote access to a car’s brakes, steering, accelerator, power train and engine management controls. While it might be permissible to grant remote read access for instruments (e.g. mileage and fuel consumption), any control systems should only be accessible by the driver at the controls. And with things like heart monitors starting to become connected devices, the criticality of separation is likely to increase.

2. Secure Your Boot Code

One of the most effective ways of hijacking a system is via the boot code. Some of the earliest computer viruses, e.g. the Elk Cloner for Apple II [2], and the Brain and Stoned viruses for PCs, infected the boot sectors of removable media. Later viruses corrupted the operating system or even loaded their own. The same possibilities exist with computers and embedded devices today if the bootloader is well known, e.g. grub, u-boot or redboot.

Most devices designed with security in mind have a secure bootloader and a chain of trust. The bootloader will boot from a secure part of the processor and will have a digital signature, so that only a trusted version of it will run. The bootloader will then boot a signed main runtime image.

In many cases the bootloader will boot a signed second stage bootloader, which will only boot a signed main runtime. That way, the keys or encryption algorithms in the main runtime can be changed by changing the second stage bootloader.

3. Use Serialisation and Control Your Upgrade Path

Upgrading images in the field (to support new features, or to fix bugs or security flaws) can be managed using serialisation: target specific units at particular times to reduce the risk of large numbers of units failing simultaneously after an upgrade.

Each runtime image should be signed with a version number so that only higher number versions can run. Upgrades can be controlled by a combination of different keys held in the unit’s FLASH.

4. Design for Disaster Recovery

Your box no longer boots in the field because the runtime image has become corrupted. What then?
Truck rolls or recalls are very expensive and they deprive the user of their product. There are alternatives:

(i) Keep a copy of the runtime for disaster recovery. This can be stored in onboard FLASH as a mirror of the runtime itself, or in a recovery medium, e.g. a USB stick, which is favoured these days by PC manufacturers.

(ii) Alternatively, the bootloader can automatically try for an over-the-air download – this is often favoured with things like set top boxes where the connection is assumed good (it wouldn’t be much of a set top box if it couldn’t tune or access the internet). This saves on FLASH but deprives the user of their product while the new runtime image is being downloaded.

5. Switch off debug code

Don’t give out any information that might be of use to the outside world. The Jeep Cherokee hack was made possible by an IP address being passed back to the user – information of no use to a typical non-tech user, but invaluable to an attacker.

6. Harden the Kernel

The Linux kernel contains thousands of options, including various ports, shells and communication protocols. It almost goes without saying that any production build needs everything switched off except the features you need. But implementing this isn’t always so straightforward due to the inter-dependencies of some kernel features. Don’t use bash unless it’s unavoidable; use ash instead. The disclosure of Shellshock, a 25-year-old vulnerability [3], in September 2014 triggered a tidal wave of hacks, mainly distributed denial of service attacks and vulnerability scanning.

Disable telnet. Disable SSH unless you have an essential usage requirement. Disable HTTP. If there is any way a user might form a connection with the box, especially using a method well-used on other boxes, that’s a door into the box that needs locking.

With the growing capabilities and connected nature of embedded RTOS systems approaching that of embedded Linux in Machine to Machine communications and the Internet of Things, similar “hardening” processes need to be followed.

7. Use a Trusted Execution Environment

Most of the main processors used in connected devices (smart phones, tablets, smart TVs, set top boxes) now contain a secure area known as a Trusted Execution Environment (TEE).

A TEE provides an isolated execution environment where confidential assets (e.g. video content, banking information) can be accessed in isolation. Four popular uses are:
(i) premium content protection, especially 4k UHD content
(ii) mobile financial services
(iii) authentication (facial recognition, fingerprints and voice)
(iv) secure handling of commercially sensitive or government-classified information on devices.

TEEs have two security levels:
Profile 1 is intended to prevent software attacks.
Profile 2 is intended to prevent hardware and software attacks.

8. Use a Container Architecture

If you are designing a system with a processor that doesn’t use a TEE, you can still design a reasonably safe solution using a container architecture to isolate your key processes.

Linux Containers have been around since August 2008 and rely on the kernel cgroups functionality that first appeared in kernel version 2.6.24. LXC 1.0, which appeared in February 2014, is considerably more secure than earlier implementations, allowing regular users to run “unprivileged containers”.

Alternatives to LXC are virtualisation technologies such as OpenVZ and Linux-VServer. Other operating systems contain similar technologies, such as FreeBSD jails, Solaris Containers and AIX Workload Partitions. Apple’s iOS also uses containers.

9. Lock your JTAG port

Qihoo 360 Unicorn Team’s hack of Zigbee [4] was made possible by dumping the contents of the FLASH from the board of the IoT gateway. This enabled them to identify the keys used on the network. The fact that the keys themselves were stored in a format that enabled them to be decoded made the hack easier.

If your JTAG port is unlocked, and hackers have access to the development tools used for the target processor, then they could potentially overwrite any insecure boot code with their own, allowing them to take control of the system and its upgrades.

10. Encrypt Communications Channels and any Key Data

If all the above steps are taken, a device can still be vulnerable to a man-in-the-middle attack if the payload is sent unencrypted.

If you have a phone, tablet, smart TV or set top box accessing video on demand (VOD), the user commands need to be encrypted; otherwise it is possible to get free access to the VOD server by spoofing the server to capture box commands, and then spoofing the box to capture the server responses. It might even be possible to hack the server to grant access to multiple devices in the field, or mount a denial of service attack.

GPS spoofing by Qihoo 360 was demonstrated at DEF CON 23, where signals were recorded and re-broadcast [5]. It’s not the first time GPS spoofing has happened. Spoofing / man-in-the-middle attacks on any user-connected system are commonplace.

Bonus Extra Tip: Get a Third Party to Break It

This is probably the most useful advice of all. As with software testing in general, engineers shouldn’t rely on marking their own homework: the same blind spots missed in a design will be missed in testing. Engineers designing systems won’t have the same mentality as those trying to hack them. An extra pair of eyes going over the system trying to break it will expose vulnerabilities you never thought existed.


Security is a vast subject and we’ve only scratched the surface in this blog.
Feabhas offer a course EL-402 in Secure Linux Programming, for more information click here.


1. Fiat Chrysler Jeep Cherokee hack

2. Elk Cloner

3. Shellshock

4. Zigbee hack
Def Con 23

5. GPS Spoofing
Def Con 23

Bitesize Modern C++ : Range-for loops

August 27th, 2015

If you’re using container classes in your C++ code (and you probably should be, even if it’s just std::array) then one of the things you’re going to want to do (a lot) is iterate through the container accessing each member in turn.

Without resorting to STL algorithms we could use a for-loop to iterate through the container.
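The original code figure is lost; it presumably showed something like this C++98-style iterator loop (container and values invented for illustration):

```cpp
#include <vector>

// Iterating with explicit iterators, C++98-style.
int sum(const std::vector<int>& v)
{
    int total = 0;
    for (std::vector<int>::const_iterator it = v.begin();
         it != v.end(); ++it)
        total += *it;
    return total;
}
```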


If the above is baffling to you there are plenty of useful little tutorials on the STL on the Internet (For example, this one)

We could simplify the iterator declaration in C++11 using auto:
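Something like this (a sketch; auto replaces the unwieldy iterator type):

```cpp
#include <vector>

int sum(const std::vector<int>& v)
{
    int total = 0;
    // auto deduces std::vector<int>::const_iterator for us
    for (auto it = v.begin(); it != v.end(); ++it)
        total += *it;
    return total;
}
```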


(See the article on auto type-deduction for details of how it works)

However, there’s a nicer syntactic sugar to improve our code: the range-for loop:
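A sketch, using the names v and item that the explanation below refers to:

```cpp
#include <vector>

int sum(const std::vector<int>& v)
{
    int total = 0;
    for (auto& item : v)    // item refers to each element of v in turn
        total += item;
    return total;
}
```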


The semantics of the range-for are: For every element in the container, v, create a reference to each element in turn, item.

The above code is semantically equivalent to the following:
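That is, in effect, the compiler generates something like:

```cpp
#include <iterator>
#include <vector>

// What the range-for expands to, in effect.
int sum(const std::vector<int>& v)
{
    int total = 0;
    for (auto it = std::begin(v), last = std::end(v); it != last; ++it) {
        auto& item = *it;
        total += item;
    }
    return total;
}
```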


Look familiar?

Not only does this save you some typing but, because it’s the compiler that’s generating the code it has a lot more potential for optimisation (for example, the compiler knows that the end() iterator is not invalidated in the body of the range-for statement, therefore it can be read once before the loop; or the compiler may choose to unroll the loop; etc.)

In case you were wondering, std::begin() and std::end() are free functions that return an iterator to the first element in the supplied container and an iterator to one-past-the-end, respectively. For most STL containers they simply call cont.begin() and cont.end(); but the functions are overloaded to handle built-in arrays and other container-like objects (see below)

Range-for loops are not limited to STL containers. They can also work with built-in arrays:
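For example (values invented):

```cpp
int arr[] = { 1, 2, 3, 4 };

int sum_array()
{
    int total = 0;
    for (auto item : arr)   // std::begin/std::end handle built-in arrays
        total += item;
    return total;
}
```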


And also std::initializer_lists
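For example (a braced list in a range-for is deduced as a std::initializer_list):

```cpp
#include <initializer_list>

int sum_list()
{
    int total = 0;
    for (auto item : { 2, 4, 6 })   // iterates a std::initializer_list<int>
        total += item;
    return total;
}
```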


More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++: std::initializer_list

August 13th, 2015

An aggregate type in C++ is a type that can be initialised with a brace-enclosed list of initialisers. C++ contains three basic aggregate types, inherited from C:

  • arrays
  • structures
  • unions

Since one of the design goals of C++ was to emulate the behaviour of built-in types it seems reasonable that you should be able to initialise user-defined aggregate types (containers, etc.) in the same way.


A std::initializer_list is a template class that allows a user-defined type to be initialised with a brace-enclosed list, just like an aggregate type.

When initialiser list syntax is used the compiler generates a std::initializer_list object containing the initialisation objects. A std::initializer_list is a simple container class that may be queried for its size; or iterated through.


If the class contains a constructor that takes a std::initializer_list as a parameter, this constructor is invoked and the std::initializer_list object passed.
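A hypothetical reconstruction of the lost example (the names Aggregate and aggr come from the text; the implementation details are invented):

```cpp
#include <cstddef>
#include <initializer_list>
#include <vector>

class Aggregate
{
public:
    Aggregate(std::initializer_list<int> init)   // invoked by list syntax
        : values(init)
    {}
    std::size_t size() const { return values.size(); }
private:
    std::vector<int> values;
};

Aggregate aggr = { 1, 2, 3 };   // the compiler builds a
                                // std::initializer_list<int>, then calls
                                // the constructor above
```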


Note, there is some syntactic sugar at work here – the lack of brackets ([]) in the declaration of aggr forces the compiler to construct the std::initializer_list (then call aggr’s constructor) rather than creating an array of three Aggregate objects.

This is also a good place to insert some words of caution: Adding std::initializer_list constructor overloads may lead to unexpected results:


If a class has constructors overloaded for T and std::initializer_list<T> the compiler will always prefer the std::initializer_list overload. However, if you’ve provided a default constructor the compiler will always prefer that to calling the std::initializer_list overload with an empty initialiser list.
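A sketch of those preference rules (Chooser is a hypothetical class; each constructor records which one ran):

```cpp
#include <initializer_list>

class Chooser
{
public:
    Chooser() : which('d') {}
    Chooser(int) : which('i') {}
    Chooser(std::initializer_list<int>) : which('l') {}
    char which;
};

Chooser a{};       // empty braces: the default constructor is preferred
Chooser b{ 42 };   // the initializer_list overload beats Chooser(int)
Chooser c(42);     // parentheses: calls Chooser(int) as usual
```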

Initialiser lists can begin to look like so much magic and ‘handwavium’, so a brief look at an implementation of std::initializer_list is useful to dispel the mysticism:
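A simplified sketch (not the real class: the genuine std::initializer_list reserves its constructor for the compiler) showing that it is little more than two pointers:

```cpp
#include <cstddef>

template <typename T>
class initializer_list_sketch
{
public:
    initializer_list_sketch(const T* first, const T* last)
        : first_(first), last_(last) {}
    const T*    begin() const { return first_; }
    const T*    end()   const { return last_; }
    std::size_t size()  const
    {
        return static_cast<std::size_t>(last_ - first_);
    }
private:
    const T* first_;   // first element
    const T* last_;    // one-past-the-end
};

std::size_t sketch_demo()
{
    static const int data[] = { 1, 2, 3 };   // the compiler-created array
    initializer_list_sketch<int> il(data, data + 3);
    return il.size();
}
```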


When the compiler creates an std::initializer_list the elements of the list are constructed on the stack (or in static memory, depending on the scope of the initializer_list).


The compiler then creates the initializer_list object that holds the address of the first element and one-past-the-end of the last element. Note that the initializer_list is very small (two pointers) so can be passed by copy without a huge overhead; it does not pass the initialiser objects themselves. Once the initializer_list has been copied the receiver can access the elements and do whatever needs to be done with them.

Since C++11 all the STL containers support std::initializer_list construction; so now lists and vectors can be initialised in the same way as built-in arrays.
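For example (values invented):

```cpp
#include <list>
#include <vector>

std::vector<int> v = { 1, 2, 3, 4, 5 };  // initialiser-list construction,
std::list<int>   l = { 10, 20, 30 };     // just like a built-in array
```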



More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++: Uniform initialization

July 30th, 2015

C++98 has a frustratingly large number of ways of initialising an object.
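The original code figure is lost; a hypothetical reconstruction, with X defined as a stand-in class so the snippet compiles:

```cpp
struct X   // a stand-in class for illustration
{
    X() : v(0) {}
    X(int n) : v(n) {}
    int v;
};

X x1;               // default construction
X x2(42);           // direct initialisation
X x3 = 42;          // copy initialisation
X x4 = X(42);       // copy initialisation from a temporary
X x5(x2);           // copy construction
int i = 0;          // built-in type
int arr[3] = { 1, 2, 3 };   // aggregate initialisation
```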


(Note: not all these initialisations may be valid at the same time, or at all. We’re interested in the syntax here, not the semantics of the class X)

One of the design goals in C++11 was uniform initialisation syntax. That is, wherever possible, to use a consistent syntax for initialising any object. The aim was to make the language more consistent, therefore easier to learn (for beginners), and leading to less time spent debugging.

To that end they added brace-initialisation to the language.

As the name would suggest, brace-initialisation uses braces ({}) to enclose initialiser values. So extending the above examples:
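A sketch of the brace-initialised versions (X is again a stand-in class; i and x1 are the names the highlights below refer to):

```cpp
struct X
{
    X() : v(0) {}
    X(int n) : v(n) {}
    int v;
};

int i{};            // i is initialised to 0
X x1{};             // explicitly default-constructed
X x2{ 42 };         // direct brace-initialisation
X x3 = { 42 };      // copy-style brace-initialisation
int arr[]{ 1, 2, 3 };
```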


There are a couple of highlights from the above code:

Integer i is value-initialised (with the value 0). This is equivalent to C++03’s (much more confusing):
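That is:

```cpp
int i = int();   // a value-initialised temporary: i becomes 0
```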


x1 is explicitly default-constructed. This alleviates the ‘classic’ mistake made by almost all C++ programmers at some point in their career:
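Namely (the mistaken form is shown as a comment, since it compiles as something quite different):

```cpp
struct X { int v; X() : v(0) {} };   // stand-in class

// X x1();   // the classic mistake: declares a FUNCTION x1 returning X!
X x1;        // C++98: an actual (default-constructed) object
X x2{};      // C++11: braces make the intent unambiguous
```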


By extension, this also alleviates C++’s Most Vexing Parse as well. For those not familiar with it, here it is:
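A sketch (ADT is a stand-in class; the type-trait check demonstrates that a function, not an object, has been declared):

```cpp
#include <type_traits>

struct ADT { ADT() {} };

ADT adt(ADT());   // surprise: a function declaration, not an object!

// Evidence: 'adt' names a function, not an ADT object.
constexpr bool adt_is_function =
    std::is_function<std::remove_pointer<decltype(&adt)>::type>::value;
```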


Most programmers read this as "create an object, adt, and initialise it with a temporary object, ADT()". Your compiler, however, following the C++ parsing rules, reads it as "adt is a function declaration for a function returning an ADT object, and taking a (pointer to) a function with zero parameters, returning an ADT object."

With brace-initialisation, this problem goes away:
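A sketch:

```cpp
struct ADT { int v; ADT() : v(0) {} };   // stand-in class

ADT adt{ ADT{} };   // unambiguously an object, initialised from a temporary
```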


The compiler cannot parse the above except as "create an object, adt, and initialise it with a temporary object, ADT{}".

The uniform initialisation syntax goal means that brace-initialisation can be used anywhere an object must be initialised. This includes the member initialisation list:
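For example (Point is a hypothetical class for illustration):

```cpp
class Point
{
public:
    Point(int x, int y) : x_{ x }, y_{ y } {}  // brace-initialisation in
    int x_, y_;                                // the member init list
};

Point p{ 3, 4 };
```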


In C++98 programmers had the capability to initialise static member variables as part of the class declaration. C++11 extends this to allow default-initialisation of non-static class members. The code:
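A hypothetical reconstruction of the ‘before’ version, with members initialised explicitly in the constructor:

```cpp
class ADT
{
public:
    ADT() : a{ 0 }, b{ 0.0 } {}   // members initialised in the constructor
    int    a;
    double b;
};

ADT adt0;
```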


Can be re-written as:
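A hypothetical reconstruction of the ‘after’ version: default member initialisers plus a defaulted constructor (adt1 is the object named in the text):

```cpp
class ADT
{
public:
    ADT() = default;   // ask the compiler for the default constructor
    int    a{ 0 };     // the member initialisers supply
    double b{ 0.0 };   // the default values
};

ADT adt1;   // members get their defaults
```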


The member initialiser code ensures that the member variable will always have a default value, even if it is not explicitly initialised in a member initialiser list. For the example above we have told the compiler to provide the default constructor, which does nothing.

When we create object adt1 the (compiler-supplied) default constructor is called. The member initialisers in the class-definition ensure that the members of adt1 are initialised.

Having the initialisers visible at point-of-instantiation gives the compiler the opportunity to optimise (away) constructor calls and create the object in-situ.


More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.

Bitesize Modern C++: using aliases

July 16th, 2015

In a C++ program it is common to create type aliases using typedef. A type alias is not a new type, simply a new name for an existing declaration. Used carefully, typedef can improve the readability and maintainability of code – particularly when dealing with complex declarations.
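Typical examples (the alias names are invented; function-pointer types benefit most):

```cpp
#include <type_traits>
#include <vector>

typedef unsigned int     counter_t;
typedef std::vector<int> IntVector;
typedef void (*callback_fn)(int);   // function pointers especially
                                    // benefit from an alias
```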


In C++11 typedef can be replaced with a using-alias. This performs the same function as a typedef; although the syntax is (arguably) more readable. A using-alias can be used wherever a typedef could be used.
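The same aliases written as using-aliases (the alias target now sits, arguably more readably, on the right of the =):

```cpp
#include <type_traits>
#include <vector>

using counter_t   = unsigned int;
using IntVector   = std::vector<int>;
using callback_fn = void (*)(int);
```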


Using-aliases have the advantage that they can also be templates, allowing a partial substitution of template parameters.
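For example (a sketch; StringMap is an invented alias that binds only the key type of std::map):

```cpp
#include <map>
#include <string>

// A template using-alias: a partial substitution of template parameters.
template <typename T>
using StringMap = std::map<std::string, T>;

StringMap<int> ages = { { "Ada", 36 } };
```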


More information

Can’t wait? Download the full set of articles as a PDF, here.

To learn more about Feabhas’ Modern C++ training courses, click here.
