Template functions and classes tend to cause consternation amongst programmers. The conversation usually goes something like this:
- I understand the syntax of templates (although it’s ugly)
- I get the idea of replacing function-like macros with template functions
- I can see the application of template classes for containers
- Most containers and generic functions are library code
- I don’t write libraries
- What’s the point of me using templates?
In this article we’re going to look at an application of templates beyond writing library code – replacing run-time polymorphism (interfaces) with compile-time polymorphism. This technique is known as a Policy. It is reminiscent of the Strategy Pattern, but uses templates rather than interfaces.
Previously we looked at template class syntax and semantics. In this article we’ll extend this to look at inheritance of template classes.
Last time we looked at template functions, which introduced the concept of generic programming in C++.
This time let’s extend the idea of generic programming to classes.
Templates are a very powerful – but often very confusing – mechanism within C++. However, approached in stages, templates can be readily understood (despite their heinous syntax).
The aim of this series of articles is to guide beginners through the syntax and semantics of the foundation concepts in C++ template programming.
A Quickstart Guide
We’re going to look at how to create and use libraries on Linux and try to gain some insight on how libraries work behind the scenes.
Often when working with 3rd party code you may be limited in the options available. Some well-known open-source projects have dual-licensed binaries that dictate different terms for static or dynamic linking.
Writing a library is a good way to provide an interface to customers, to get code reuse – and to give yourself a major source of headaches! To understand what’s best for your use case it’s worth looking at what each type provides.
Static libraries (.a files) are precompiled object code that is linked into an executable at link time and becomes part of the final application. These libraries load quickly, have less indirection and don’t run the risk of the dependency hell that can beset their dynamic peers.
Following on from my previous post on Python and our new course on Python for Test Engineers, which takes an elementary approach, I felt it was time to pay homage to that wonderful language once again – this time focusing on its applicability for Rapid Application Development.
The Higher Level the Language; The More Productive the Programmer
I love writing Python. I’ll be honest: it’s the closest I’ll get to writing executable pseudo-code, which best mirrors how my mind works – and I’m sure I can’t be alone in this. When coding in Python, its dynamic nature means you can sculpt and remodel your code as you express your ideas. This process gives a better understanding of the code’s impact, as well as providing more accurate estimates of how long a given task will take, as there are fewer unknowns in the language itself.
“But Python is for high-level applications!” – sorry, but the world of embedded is evolving. Embedded Linux is everywhere, with its “everything is a file” interface for things that were predominantly the domain of device drivers – I can manipulate framebuffers, SPI and I2C buses and GPIO using Python. Even microcontrollers, that last bastion of embedded, are getting in on the action with the recently Kickstarted MicroPython, which brings Python further into traditional “deeply embedded” territory.
Python as a System Programming Language
An argument I get from other developers is that Python isn’t a real systems programming language – yes, you can script in it, but that’s not how “real coders” write software. Well, when those real Scotsmen, uh, coders take time from etching ones and zeroes directly into their hard disc platters and explain what they mean, it typically comes down to a misunderstanding of Python’s capabilities.
“I need threading”
“Uh, swell, but I also need IPC/Networking”
“I also need serial comms”
OK, not built in but how about the cross-platform PySerial?
“And what about my existing programs?”
Python can handle all of those things with aplomb, and also affords you ready-to-roll web services, file handling, and parsing of data in XML/JSON form or plain text using regular expressions. Maybe you want to interact with the latest pluggable devices using PyUSB – and there are so many ways to show a Graphical User Interface that choosing one is more akin to judging a beauty pageant.
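As a sketch of how far the standard library alone gets you (the echo server and the “legacy tool” here are invented for the example), threads, sockets and driving an existing program all come built in:

```python
import socket
import subprocess
import sys
import threading

def echo_once(server_sock):
    """Accept one connection and echo back whatever it sends."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# "I need threading" / "I also need IPC/Networking"
server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
worker = threading.Thread(target=echo_once, args=(server,))
worker.start()

client = socket.create_connection(server.getsockname())
client.sendall(b"hello embedded world")
reply = client.recv(1024)
client.close()
worker.join()
server.close()

# "And what about my existing programs?" - just run them.
out = subprocess.run([sys.executable, "-c", "print('legacy tool output')"],
                     capture_output=True, text=True).stdout.strip()

print(reply.decode())
print(out)
```

All of the above uses only modules shipped with Python itself – no package manager required.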
As we move forward, multimedia is increasingly a key part of product development – many devices need to push and manipulate video and sound, and once again Python provides a huge leg up, whether via simple graphics manipulation using Pillow, image processing using scikit-image, or live and post-processing of video using GStreamer or MoviePy respectively.
Reducing Time to Market
Time to market is the phrase that pays in the world of product development – it defines our coding deadlines, the features we can ship with and the very nature of our products, and we’re always under pressure to reduce it.
Research indicates that designing and writing a program in Python (or a similar scripting language) typically takes half as long and requires half as much code as an equivalent C or C++ program.
Given that the number of lines we, as programmers, can write in a given time is roughly constant, regardless of the language or its type, we can infer that a higher-level language such as Python gives a much higher functionality-per-line metric, so developers only have to spend half as long coding as their C++-wielding brethren. I can be more productive than if I were using C++, safe in the knowledge that the same task will require less code in Python.
This is how we get our minimum viable product, this is how we crush our time to market – we code smarter, not harder.
Of Data Types and Paradigms
Data types are important, as they define the information we can hold in a variable. Built-in types are a godsend in Python – we have lists, dictionaries, sets and many more, and we don’t need to track down and incorporate 3rd party libraries to provide either them or the functions to manipulate them.
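For instance (the variable names here are invented), the core containers come ready to use, complete with their manipulation methods:

```python
# Built-in containers - no imports or third-party code needed.
readings = [3, 1, 4, 1, 5, 9]          # list: ordered, mutable sequence
pins = {"led": 13, "button": 7}        # dict: key -> value lookup
seen = set(readings)                   # set: duplicates collapse away

readings.sort()                        # in-place sort
pins["buzzer"] = 8                     # grow the dict on the fly
print(readings)                        # [1, 1, 3, 4, 5, 9]
print(sorted(seen))                    # [1, 3, 4, 5, 9]
```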
This ability to develop as you prototype means that you can build supporting routines as you go – and so too can the types of data you need to handle – which means you can pivot and re-architect as you develop your software.
Python is a multi-paradigm language (urgh, sorry!) that allows us as programmers to use procedural, object-oriented and functional styles where each makes sense, without shoe-horning one into the other.
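As a toy illustration (the task and names are invented), here is the same double-and-sum job written in each style – pick whichever reads best for the problem at hand:

```python
from functools import reduce

samples = [1, 2, 3, 4]

# Procedural: an explicit loop and an accumulator.
total = 0
for s in samples:
    total += s * 2

# Object-oriented: state (the factor) bundled with behaviour.
class Scaler:
    def __init__(self, factor):
        self.factor = factor

    def apply(self, values):
        return [v * self.factor for v in values]

total_oo = sum(Scaler(2).apply(samples))

# Functional: fold the sequence with a lambda.
total_fn = reduce(lambda acc, v: acc + v * 2, samples, 0)

print(total, total_oo, total_fn)   # all three agree
```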
One area on the increase is Python entering territory traditionally aligned with domain-specific tools such as MATLAB. As the power of Python becomes apparent, modelling in one language and then porting to something like C is losing out to the decision to model, develop and code in just one language.
Add-on libraries such as NumPy, SciPy and many others allow a combined modelling-and-deployment workflow that affords huge benefits in a multi-disciplinary team, as everyone speaks the same language: Python.
Good enough is good enough
Python isn’t just a prototyping language – you don’t need to throw away any of your code. Companies are readily using Python in their deployed products as they evolve from RAD ideas to fully-fledged, production-ready programs.
Python may not be everyone’s cup of tea and it has its limitations – it’s never going to offer the performance per watt you’d get from hand-coded assembly – but more often than not it’s good enough to get to market and, as companies are finding out, it’s good enough to keep you in the market.
Inquire today about how we can help you kick start your development process with Python.
A new (but not so welcome?) addition
Lambdas are one of the new features added to C++ and seem to cause considerable consternation amongst many programmers. In this article we’ll have a look at the syntax and underlying implementation of lambdas to try and put them into some sort of context.
I can’t imagine anyone reading this posting hasn’t already read about the Apple “goto fail” bug in SSL. My reaction was one of incredulity; on so many levels I really couldn’t believe this code could have got into the wild.
First we’ve got to consider the testing (or lack thereof) of this codebase. The side effect of the bug was that all SSL certificates passed, even malformed ones. This implies positive testing (i.e. we can demonstrate it works) but no negative testing (e.g. with a malformed SSL certificate) – or perhaps no dynamic SSL certificate testing at all.
What I haven’t established* is whether the bug came about through code removal (e.g. there was another ‘if’ statement before the second goto) or whether, due to trial-and-error programming, the extra goto got added (with other code) that then didn’t get removed in a clean-up. There are, of course, some schools of thought that believe it was deliberately put in as part of PRISM!
Then you have to query regression testing; have they never tested for malformed SSL certificates (I can’t believe that; mind you I didn’t believe Lance was doping!) or did they use a regression-subset for this release which happened to miss this bug? Regression testing vs. product release is always a massive pressure. Automation of regression testing through continuous integration is key, but even so, for very large code bases it is simplistic to say “rerun all tests”; we live in a world of compromises.
Next, if we actually analyse the code, I can imagine the MISRA C group jumping around saying “look, look, if only they’d followed MISRA C this couldn’t have happened” (yes Chris, it’s you I’m envisaging) and of course they’re correct. This code breaks a number of MISRA-C:2012 rules, but most notably:
15.6 (Required) The body of an iteration-statement or selection-statement shall be a compound-statement
This boils down to: all if-statements must use a block structure. So the code would change as follows (ignoring the glaring coding error of the two gotos):
During the last couple of years we’ve moved over internally to using Git as our Revision Control System (RCS). It’s been an interesting exercise, especially when, like me, you’ve come from a traditional model (such as Subversion, or even back to good old SCCS). I’m sure you’ve all got your own “top 5” and I don’t necessarily expect you to agree with me, but here are my key learning points:
#1 “Branch always, branch often”
At the outset this was certainly the biggest mindset change*. In many RCSs, branching is particularly “heavyweight” and merging branches (and branches of branches, and branches… you get the gist) can be, to say the least, challenging. So you could end up being counter-productive by trying to avoid branching unless absolutely necessary.
It couldn’t be more different using Git, and once I’d “got it”, it became an instinct to always work on a branch. I guess it all fits in with the aim of keeping work cycles to much shorter timeframes, so branching and merging must be trivial. There is an excellent tutorial to help learn branching on Git.
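A throwaway sandbox (the repository contents are invented for illustration) shows how lightweight the whole branch–merge cycle is:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email demo@example.com
git config user.name Demo
main=$(git symbolic-ref --short HEAD)   # default branch name (master or main)

echo base > app.c
git add app.c
git commit -q -m "initial"

git checkout -q -b feature/parser   # create a branch and switch to it, one step
echo parser > parser.c
git add parser.c
git commit -q -m "add parser"

git checkout -q "$main"
git merge -q feature/parser          # trivially fast-forwards back onto main
```

Branches in Git are just movable pointers to commits, which is why creating and merging them costs almost nothing.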
#2 “Don’t ignore your .gitignore”
You know you’re going to need it, so do it before you start any real work. It’ll just save you a whole load of tidying up later on (OK, so maybe this is an age thing? Don’t procrastinate!). There are even language-specific .gitignore templates at https://github.com/github/gitignore, so no excuses.
It’s also worth getting to know the basics of the pattern matching that can be used in the .gitignore file; they’re well documented.
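To give a flavour of the pattern syntax (the entries here are typical examples rather than taken from any particular template):

```
# any file ending in .o, in any directory
*.o
# the whole build directory (trailing slash matches directories only)
build/
# '!' re-includes something excluded by an earlier pattern
!build/keep.me
# '**' matches nested directories to any depth
doc/**/*.pdf
```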
#3 “GitHub is not Git”
Maybe it’s my onset of senility, but initially it wasn’t exactly clear where Git ended and GitHub started. Most of the early stuff I looked at to help understand and learn Git almost always used GitHub to host your project. In hindsight it’s all obvious (well of course it’s just a web-based hosting service for software development projects that use Git), but at the time I can say my understanding was a little “fuzzy” around the edges.
Two things really helped; first, the excellent book Pro Git by Scott Chacon, which you can find online at http://git-scm.com/book. I found it so useful that I also bought the eBook (Kindle) version.
Second, moving away from GitHub to BitBucket. In fact we use both – GitHub for our publicly hosted projects and BitBucket for internal work. This just made it plainly obvious where Git ended and GitHub/BitBucket started.
#4 “Quickly learn how to undo”
There are many ways to mess it up, if you really try. Committing without adding, modifying files without branching (see #1), committing unwanted files (see #2), not pushing changes (see #3) and the list goes on (and I’ve done them all).
Be assured, you can always fix it, from a simple commit-amend to, probably one of the coolest things, being able to create a branch from a stash. But I’d highly recommend spending a bit of time understanding unstaging as opposed to unmodifying a file.
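A throwaway sandbox (file names invented) walks through the common rescues – amending a commit, unstaging a file, and turning a stash into a branch:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email demo@example.com
git config user.name Demo

echo one > notes.txt
git add notes.txt
git commit -q -m "first draft"

# Forgot a file? Amend the last commit rather than making a new one.
echo two > extra.txt
git add extra.txt
git commit -q --amend -m "first draft (complete)"

# Staged something by mistake? Unstage it (the file itself is untouched).
echo oops > mistake.txt
git add mistake.txt
git reset -q HEAD mistake.txt

# Uncommitted work in the way? Stash it, then turn the stash into a branch.
echo wip >> notes.txt
git stash -q
git stash branch wip-branch
```

After this the repository still has a single commit, and the work-in-progress change lives on its own branch, ready to carry on with.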
Using Git is very much about understanding workflow and typically messing up means you’ve not quite got your workflow model running smoothly. The nice folks at Atlassian have put together a set of guides on workflow.
#5 “Initially do all your work on the command line”
For Linux people (and hopefully most programmers on a Mac) this doesn’t need saying, as why would you do it any other way? However, for all those Windows-based people (and in the embedded world this is the majority) there is a tendency to use graphical tools. Both GitHub and BitBucket offer graphical front-ends for managing local/remote Git repos, but without understanding the underlying Git model (which you only really get from using it at the command line) it’s easy to get lost.
I didn’t find the GitHub Windows client tool especially friendly, whereas SourceTree from Atlassian I found a more intuitive client. Nevertheless I’d still rather work at the shell.
So that’s my top 5; what have I missed (remember you’ve got to drop one to add one)?
What would I do differently? Certainly I’d read “Pro Git” cover-to-cover much sooner than I did. Otherwise, as usual, just throw yourself in and have a go.
Now what was that someone said about Mercurial being better…
* Actually the initial challenge was getting used to say “Git” outside of its usual context in the UK.