The Rule of the Big Five

The dynamic creation and destruction of objects has always been one of the bugbears of C. It requires the programmer to manually control the allocation, initialisation and deallocation of memory for the object. Because many C programmers weren't educated in the potential problems (or were just plain lazy or delinquent in their programming), C got a reputation in some quarters for being an unsafe, memory-leaking language.

C++ improved matters significantly with an idiom known as RAII/RRID (Resource Acquisition Is Initialisation / Resource Release Is Destruction), more generically referred to as resource management. Resource management frees the client from having to worry about the lifetime of the managed object, potentially eliminating memory leaks and other problems from C++ code.

However, introducing resource management can lead to potential problems, particularly if the ‘manager’ objects are passed around the system. These problems led to the need for establishing a ‘copy policy’ for each of your types, sometimes referred to as ‘The Rule of the Big Three’. C++11 further complicated this by introducing move semantics.

This whitepaper explores the copy and move semantics of C++ and introduces a policy we call ‘The Rule of The Big Five’.
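
As a taster, the 'Big Five' are the five special member functions that a resource-managing type must now consider. A minimal sketch (the Buffer class here is invented for illustration; the whitepaper develops the full details):

#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t size);           // construction
    ~Buffer();                                   // 1. destructor
    Buffer(const Buffer& other);                 // 2. copy constructor
    Buffer& operator=(const Buffer& rhs);        // 3. copy assignment operator
    Buffer(Buffer&& other) noexcept;             // 4. move constructor (C++11)
    Buffer& operator=(Buffer&& rhs) noexcept;    // 5. move assignment operator (C++11)

private:
    char*       data;
    std::size_t length;
};

The five should be treated as a set: if a class needs any one of them defined (or explicitly deleted or defaulted), it almost certainly needs to consider the other four.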

The whitepaper can be downloaded from here.

Example source code for Visual Studio 2012 and GCC can be downloaded from GitHub.


UK-based one-day ARM User Conference (and it's free!)

For those of you that are not on our company hit list (sorry, I mean mailing list), you may not have heard about next week's ARM User Conference, run by the good folks at Hitex UK.

The event is titled "ARM – Continually Raising the Standard" and is being held at Stoneleigh Park near Coventry on the 19th September 2013. This year there are two streams running, to allow a wider choice of presentations.


The event is also preceded by a number of (paid) workshops on the 17th and 18th.

I shall be presenting the paper on "Developing a Generic Hard-fault Handler for the ARMv7-M Architecture". Feabhas shall also have a table-top there, so if you're attending please stop by and say hello.

Full details of the event can be found here.


ARM TechCon 2013

ARM’s Technical Conference called TechCon™ is running between October 29th and 31st at the Santa Clara Convention Center in California.


This year I shall be making the trip over to present three classes:

  • Developing a Generic Hard-Fault Handler for the ARMv7-M Architecture
  • Can Existing Embedded Applications Benefit from Multicore Technology?
  • Virtual Functions in C++ on the ARM Architecture

For those of you who are regular readers of this blog you’ll recognise the Generic Hard-Fault Handler from a previous post.

The class "Can Existing Embedded Applications Benefit from Multicore Technology?" came about because it seemed that not a day would go by without an announcement of a major development in multicore technology. With so much press about multicore, I started to wonder whether I should be considering multicore technology in my typical embedded applications.

From a software developer’s perspective, however, all the code examples seem to demonstrate the (same) massive performance improvements to rendering fractals or ray-tracing programs. The examples always refer to Amdahl’s Law, showing gains when using, say, 16 or 128 cores. This is all very interesting, but not what I, and hopefully most embedded developers, might consider embedded. This class discusses multicore from a more traditional embedded viewpoint.

Many embedded C programmers still believe that C++ leads to slow, bloated programs. Though this viewpoint may have had limited foundation over a decade ago, it is misplaced for the core aspects of C++ (classes, inheritance, and dynamic polymorphism). With a modern ARM C++ cross-compiler, it is also misplaced for the more advanced features (templates and exception handling). In the class "Virtual Functions in C++ on the ARM Architecture", I will focus on the performance and memory costs of C++ virtual functions and type information, and look at the use of multiple inheritance in an ARM embedded environment.

If you are planning to attend TechCon this year then please look me up. I will be making the presentations available via the blog after the event.

Niall.


Namespaces

In this article we look at one of the issues inherent in C when building larger projects – the problem of function and object naming. We look at the C++ solution to this: namespaces.

A problem with big projects in C

When we move to multi-file projects, the problem in C is having to create unique names for functions and externs in the global namespace. If the definitions are not unique, the result is a link-time error.

We could, of course, make the functions static but then they are only available within the file they are defined in.
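
For illustration, a sketch of the problem (the module and function names are hypothetical):

/* motor.c */
void init(void) { /* set up the motor */ }

/* display.c */
void init(void) { /* set up the display */ }    /* link-time error: init defined twice */

/* display.c, with the static 'fix': no clash, but no external access either */
static void init(void) { /* set up the display */ }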


A common solution to the problem amongst C programmers is to add an extension to the function name to make it unique. Very commonly this is the module name.

This works for your own code but is often not an option for third-party code:
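
Typically the module name becomes the prefix (again, hypothetical names):

/* motor.h */
void motor_init(void);

/* display.h */
void display_init(void);    /* unique names, but only by convention */

The convention only holds if everyone follows it – and you can rarely rename the functions in a third-party library.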


The C++ answer: namespaces

A namespace is a named scope. A user-defined namespace is a mechanism for expressing logical grouping in your code.
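
A sketch along the lines of the original example (the member details are invented for illustration):

// nav.h
namespace Nav {

class Longitude {
public:
    explicit Longitude(int degrees);
    int degrees() const;
private:
    int deg;
};

class Latitude {
public:
    explicit Latitude(int degrees);
    int degrees() const;
private:
    int deg;
};

} // namespace Nav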


By putting the classes Longitude and Latitude into the namespace Nav we have effectively extended their names by ‘prefixing’ them with the namespace name.

In the implementation file we must prefix the namespace name onto the class (using the scope resolution operator) when we define the member functions (or indeed any other member). An alternative notation is to enclose all the class definitions within a namespace declaration.
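
Continuing the hypothetical Nav example, the two notations look like this:

// nav.cpp – qualifying each member definition explicitly
int Nav::Longitude::degrees() const
{
    return deg;
}

// nav.cpp – alternatively, enclose the definitions in the namespace
namespace Nav {

int Latitude::degrees() const
{
    return deg;
}

} // namespace Nav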


Elements defined within a namespace can be accessed in any of three ways (sketched below):

  • by using the fully qualified name, in this case prefixed with Nav::
  • if an item is used a lot, it can be brought individually into the current scope with a using declaration
  • the global statement using namespace Nav (a using directive) makes all the names in the namespace available
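
In code (again using the hypothetical Nav types):

#include "nav.h"

Nav::Longitude l1(25);     // fully qualified name

using Nav::Longitude;      // using declaration – brings in just this name
Longitude l2(50);

using namespace Nav;       // using directive – brings in every name in Nav
Latitude l3(10);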


Namespaces are an open scope – it is possible to keep adding definitions to the namespace, across different translation units. Although classes act as namespaces they are referred to as a closed scope – that is, once a class (namespace) has been defined it cannot be added to.
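
For example (a sketch; the type names are invented):

// sensors.h
namespace IO {
class Sensor { /* ... */ };
}

// actuators.h – re-opening namespace IO in another file is fine
namespace IO {
class Actuator { /* ... */ };
}

// But a class is a closed scope:
class Filter { /* ... */ };
// class Filter { void extra(); };   // error: cannot re-open class Filter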


It is good practice to put all code into a namespace, rather than leaving it in the global namespace. The only thing that should (ideally) be in the global namespace is main(). (MISRA-C++ makes this demand of you.)

Namespace hierarchies

Namespaces may be nested arbitrarily deeply. Nested namespaces are analogous to a hierarchical file system, rooted in the global namespace (which is identified by having nothing to the left of the scope resolution operator, ::).
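
For example (the namespace hierarchy here is invented):

namespace Feabhas {
namespace Comms {

class Port { /* ... */ };

} // namespace Comms
} // namespace Feabhas

Feabhas::Comms::Port usart;      // qualified from the 'root'
::Feabhas::Comms::Port spare;    // the leading :: names the global namespace explicitly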


If your coding standard demands that you explicitly qualify type names then having hierarchies of namespaces (each with a descriptive name) can quickly become onerous, and lead to less-than-readable code. To improve legibility C++ allows namespace aliasing. A namespace alias is a – usually shorter and more succinct – synonym for the declared namespace.
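
For example, continuing the invented hierarchy from above:

namespace FC = Feabhas::Comms;   // namespace alias

FC::Port usart;                  // rather more readable than the fully qualified form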


 

Forward references and namespaces

Wherever possible we want to reduce the coupling between modules in our design. Including one header file within another builds dependencies (coupling) between the two interfaces.


In this case, including the class definition of class Sensor is unnecessary. You only need to include the class definition if you are going to allocate memory for an object (a Sensor object, in this case) or access any of its member variables or operations. Class Positioner does not instantiate a Sensor object; it merely has a pointer to a Sensor object. Since the compiler knows how much memory to allocate for a pointer we do not need to include the class definition of Sensor. However, we must still declare that class Sensor is a valid type to satisfy the compiler. In this case we do so with a forward reference – actually just a pure declaration that class Sensor exists (somewhere).
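
A sketch of the Positioner header described above (the member details are invented):

// positioner.h
class Sensor;    // forward reference: Sensor is a class, defined elsewhere

class Positioner {
public:
    void calibrate();
private:
    Sensor* pSensor;    // a pointer only – the compiler doesn't need Sensor's definition
};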

However, if we put our Sensor and Actuator classes in a namespace we have a problem. In the case of the Positioner class, above, since we are only declaring pointers to Sensor and Actuator objects it is good practice to use forward references to those classes.

The syntax, as shown below, looks reasonable but doesn’t work.
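
That is, something like this (reconstructing the example):

// positioner.h – this does NOT compile
class IO::Sensor;      // error: the compiler assumes Sensor is a class nested inside a class IO
class IO::Actuator;

class Positioner {
private:
    IO::Sensor*   pSensor;
    IO::Actuator* pActuator;
};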


The compiler takes the forward reference as referring to a nested class; it cannot know IO is a namespace.

The solution is to tell the compiler that IO is a namespace with the namespace keyword. The forward references to Sensor and Actuator can then be declared within the namespace.
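
Reconstructed, the working version looks like this:

// positioner.h
namespace IO {
class Sensor;      // forward references, declared inside the (re-opened) namespace
class Actuator;
}

class Positioner {
private:
    IO::Sensor*   pSensor;
    IO::Actuator* pActuator;
};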


 

Argument-dependent lookup

Remember from previously, if we want to use a class or function from a namespace we have to explicitly fully-qualify the entity. If this is true (and it is) then the following code shouldn’t compile:
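
Something like the canonical example:

#include <iostream>

int main()
{
    std::cout << "Hello world\n";    // operator<< is never qualified – so why does this build?
}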


The reason it should fail is the overloaded operator<<. This is actually a call to a function which, like everything else in the Standard Library, is placed in the namespace std:

std::ostream& std::operator<< (std::ostream&, const char*);

This means that, in order to access this function, we should have to fully qualify its name:
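
That is, something like:

std::operator<< (std::cout, "Hello world\n");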


This is not very readable; and, of course, we know the original code does compile perfectly fine.

The solution is a compiler mechanism called Argument-Dependent Lookup (ADL) or Koenig Lookup (after its inventor, Andrew Koenig). ADL states that if you supply a function argument of class type, then to find the function name the compiler considers matching names in the namespace containing the argument’s type.
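
Reconstructed, the example looks something like this:

namespace Points {

class Digital { /* ... */ };

void doStuff(Digital& d) { /* ... */ }    // overload taking a class type
void doStuff(int i)      { /* ... */ }    // overload taking a built-in type

} // namespace Points

int main()
{
    Points::Digital d7;
    doStuff(d7);            // OK: ADL searches Points, the namespace of Digital
 // doStuff(10);            // error: an int carries no namespace information
    Points::doStuff(10);    // OK: explicitly qualified
}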


In the example above we have defined a class, Digital, and an overloaded function, doStuff, in the namespace Points.

When we make a call to doStuff() with a Points::Digital object the compiler is able to look into the owning namespace of Digital (Points::) and find a function with the correct signature.

However, this only works with arguments of class type, so the call to doStuff() with an integer cannot be resolved automatically; the programmer has to qualify the function explicitly:

Points::doStuff(10);

Our earlier Standard Library example can now be explained: since one of the parameters of std::operator<< is of class type (in this case std::ostream) the compiler can search the std:: namespace for an appropriate function signature without the programmer having to explicitly qualify it. The simplification of library code like this is the primary reason for the inclusion of ADL.

Useful though this is, ADL has the potential to cause us problems in our code. Consider the example below:
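
A sketch of the kind of clash described (again, a reconstruction):

namespace Points {
class Digital { /* ... */ };
void doStuff(Digital& d) { /* ... */ }
}

namespace Feabhas {
void doStuff(Points::Digital& d) { /* ... */ }

void client()
{
    Points::Digital d7;
 // doStuff(d7);             // ambiguous: Feabhas::doStuff (ordinary lookup)
                             // or Points::doStuff (found via ADL)?
    Points::doStuff(d7);     // OK: explicit qualification selects one
}
}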


Here, the call to doStuff() is ambiguous – it could be either Feabhas::doStuff(Points::Digital&) or Points::doStuff(Points::Digital&) (using ADL). There is no automatic resolution – the programmer must explicitly qualify the call with the appropriate namespace name to get the doStuff() they want.

 

Preserving the locality of code

In C, the keyword static has two sets of semantics, depending on where it is used. The keyword static can be applied to both functions and variables.

Functions

Static functions are not exported; they are private to the module they are defined in. They have internal linkage and do not appear in the module's export table. This is useful for preventing your local helper functions from being called from outside your module.

Variables

Applying static to objects defined outside any block (confusingly, called ‘static objects’ in the standard!) gives the object internal linkage. The static object is visible anywhere in the translation unit, but not visible from any other translation unit.

When an automatic (local) variable is marked static in a function the compiler allocates permanent storage for it (at compile time). Practically, this means it retains its state between calls to the function but its scope is limited to the function in which it is defined.
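
For example, pulling both uses together:

/* module.c */
static void helper(void)    /* internal linkage: private to this file */
{
}

static int callCount;       /* file-scope 'static object': internal linkage */

void doWork(void)           /* external linkage: the module's public face */
{
    static int invocations; /* keeps its value between calls; visible only inside doWork */
    ++invocations;
    ++callCount;
    helper();
}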

C++ extends this behaviour to user-defined (class) types as well.


However, C++ prefers the use of a concept called an un-named namespace instead of static to give objects and functions internal linkage.
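
The un-named namespace equivalent of the sketch above:

namespace {

void helper()      // internal linkage, with no static keyword required
{
}

int callCount;     // likewise visible only within this translation unit

} // un-named namespace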


An un-named namespace is (as the name suggests!) an anonymous namespace – it does not have a name. The compiler allows entities (objects and functions) in this namespace to be accessed within the defining translation unit, but not outside. Un-named namespaces in different translation units are considered independent (and different). There is no way of naming a member of an unnamed namespace from another translation unit; hence the members of that namespace cannot be accessed (making them, effectively, static).

This removes the need for C's static, a use which C++03 deprecated (and although C++11 subsequently removed the deprecation, the un-named namespace remains the preferred C++ mechanism for internal linkage).

 

Conclusion

Namespaces are a powerful organisational tool in your design. They are a compile-time construct so have no run-time overhead. There’s no good reason not to use namespaces in your code. They will help you build more maintainable, more portable and more reusable code.

For a deeper (and even more exciting) exploration of this topic, have a look here:

https://www.gotw.ca/publications/mill02.htm

https://www.gotw.ca/publications/mill08.htm


Debunking priority

Before I start, a disclaimer:

For the purposes of this article I'm limiting the discussion to event-driven systems on priority-based, pre-emptive operating systems, on single processors.

I’m using the word task to mean ‘unit of execution’ or ‘unit of schedulability’, in preference to thread or process. I’m ignoring the different OS memory models.

There seems to be a fair amount of misunderstanding about the concept of priority in concurrent programming:

  • Priority means one task is more 'important' than another.
  • Priority allows one task to pre-empt another (True, but why is this important?)
  • Priority means one task runs more often than another.
  • Changing priorities will fix race conditions between tasks.

Here are some other viewpoints on the subject.

(https://www.codinghorror.com/blog/2006/08/thread-priorities-are-evil.html)

(https://stackoverflow.com/questions/95649/when-should-i-consider-changing-thread-priority)

The general consensus seems to be that arbitrarily adjusting a task’s priority is ‘bad’. However, there’s not a lot of useful concrete information on why you should adjust a task’s priority.

Task priority should be thought of as a measure of the 'determinism of latency of response' for that task. That is, the higher the priority of a task (relative to its peers), the more predictable (deterministic) its latency of response is.

To understand this, let's consider an example system with a number of tasks, each with a different priority. All of the tasks may pend on some shared (protected) resource or event.

In scenario 1, only the highest priority task is available to run. When it pends on the resource/event it gets it 'immediately' – that is, with the minimum possible latency (there will always be some processing overhead).

In scenario 2, only the lowest priority task is available to run. When it pends on the resource/event it also gains access with the smallest possible delay. In other words, in this scenario its latency of response is exactly the same as the highest priority task!

In scenario 3, however, things change. Let's have all our tasks available to run and attempting to access/respond to the resource/event. In this case, the highest priority task (by definition) gets access first. Its latency of response is the same (give or take) as when there are no other tasks running. That is, it has the most predictable latency (it's almost constant).

The lowest priority task, though, must wait until all other pending tasks have finished. Its latency is: minimum + task1 processing time + task2 processing time + task3 processing time + …

So, for the low priority task the latency is anywhere from the minimum up to some (possibly unpredictable) maximum. In fact, if we're not careful, our highest priority task may be ready to access again before the lowest priority task has even had its first access – so-called 'task starvation'.


A task’s priority will affect its worst-case latency – the higher the priority the more predictable the latency becomes.


If all your tasks run at the same priority you effectively have no priority. Most pre-emptive kernels will typically have algorithms such as time-slicing between equal-priority tasks to ensure every task gets a 'fair share'.

So, why might I want to adjust my tasks’ priorities? Let’s take a common embedded system example: a pipe-and-filter processing ‘chain’.

The basic premise has a task pending on input events/signals from the environment. These are passed through a chain of filter tasks, via buffer ‘pipes’. The pipes are there to cope with the differences of processing speed of each filter task and the (quite likely) ‘bursty’ nature of event arrival.

In a system with fast-arriving, or very transient, events we may wish to increase the priority of the front-end of the filter chain to avoid losing events.

Increasing the priority of the back-end of the filter chain favours throughput over event detection.

In each case the pipes must be sized to accommodate the amount of data being stored between filters. Ideally, we want to avoid the buffers becoming flooded (in which case the filter chain runs at the speed of the slowest filter).


Adjusting task priorities to achieve system performance requirements
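
As an illustration of this kind of tuning, here is a sketch using FreeRTOS's xTaskCreate (the filter functions, stack sizes and priority values are invented):

#include "FreeRTOS.h"
#include "task.h"

extern void frontEndFilter(void* params);    /* pends on incoming events */
extern void midFilter(void* params);
extern void backEndFilter(void* params);

void createFilterChain(void)
{
    /* Favour event capture: the front of the chain gets the highest priority. */
    xTaskCreate(frontEndFilter, "frontEnd", 256, NULL, tskIDLE_PRIORITY + 3, NULL);
    xTaskCreate(midFilter,      "mid",      256, NULL, tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(backEndFilter,  "backEnd",  256, NULL, tskIDLE_PRIORITY + 1, NULL);
}

Reversing the priority ordering would favour throughput instead, draining the pipes rather than racing to capture new events.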

However, all is not necessarily rosy. Your carefully-tuned system can be disrupted by introducing code with its own tasks (either explicitly, or through some third-party or library code you have no control over).


Introducing new code (explicitly or implicitly) can disrupt system performance

The introduction of (in this case) another medium-priority task may slew the latency predictability of our original medium-priority task. For example, what happens if the new task runs for a significant period of time? It cannot be pre-empted by our filter task. If we are unlucky (and we so often are!) this can cause our system to stop meeting its performance requirements – even though there is no change in the original code! (I’ll leave it as an exercise for the reader to consider the impact of this for reusable code…)

Finally, a huge caveat for multi-processor systems: priority only has meaning if the number of tasks exceeds the number of processors. Consider the extreme case where each task has its own processor. Each task, being the only task waiting to execute, will execute all the time; it is therefore always the highest priority task (on that processor).

If your design assigns multiple tasks to multiple processors then you must appreciate (and account for) the fact that priorities only have meaning on each individual processor. Priority is no longer a system-wide determinant.


Style vs. Substance in C programming

In an email from UBM Tech this week there was a link to an article titled "A Simple Style for C Programming by Mansi Research". It was actually authored back in May 2010 by Meetul Kinariwala but appeared this week under the 'what's hot' section, so I thought I'd take a look [advice to the reader: don't bother].

The problem with guides like this is that style is a very subjective area (as any parent will tell you, kids delight in pointing out your lack of style). Programming is no exception, and you could argue that C, being such a compact language, suffers more than many other languages.

One of the many good things about the MISRA-C guidelines is that they clearly separate out the issue of style vs. coding guidelines, i.e. [Guidelines for the Use of the C Language in Critical Systems, ISBN 978-1-906400-10-1, page 9]:

5.2.2 Process activities expected by MISRA C
It is recognized that a consistent style assists programmers in understanding code written by others. However, since style is a matter for individual organizations, MISRA C does not make any recommendations related purely to programming style. It is expected that local style guides will be developed and used as part of the software development process.

I couldn’t have put it better myself.

Clearly for larger teams a style guide is a useful and important part of the development process.

A whole host of style issues can be addressed with a “pretty printer” tool such as Artistic Style. These simply allow you to define a standard model for items such as ‘{‘ alignment, tab-to-space ratio and spacing within expressions [e.g. if (a&&b) vs. if ( a && b ), etc.].

However, there are many style issues that can't be addressed with automation, for example naming conventions. People of a certain age will have been unfortunate enough to have had to use Hungarian notation, which, at its root, had a good underlying principle (embedding type information in the name). That said, Hungarian notation is, in my opinion, an abomination.

One of those coding styles that always makes me want to spit feathers is putting a literal on the left of a comparison expression, e.g.
if (10 == var)
I know, you think it’s a great idea as it stops you accidentally writing:
if(var = 10)

Yes it does, but that also tells me you're not using any form of static analysis tool (e.g. PC-lint, Coverity, QAC, etc.), which means you've got much bigger problems than accidentally assigning 10 to the variable!

My major issue is that, for someone who ends up reviewing a lot of other people's code, it acts as a mental 'speed bump'; I wouldn't say "if ten is equal to var?", I'd say "if var is equal to 10?", so why write it that way? Surely we want to make code as readable as possible, and I'd argue (10 == var) is just 'bad' style.

Probably the biggest issue I regularly come across is that most company coding standards:

  • do not differentiate between rules that are there for safety/security reasons (e.g. functions shall not call themselves, either directly or indirectly) and rules that are purely about style (e.g. for pointer variables, place the * close to the variable name, not the pointer type)
  • do not automate rule checking; if it's not automated, it won't get enforced.

As I've already said, I'm not against coding style guidelines; quite the contrary: I think, when well done, they aid code readability across a project/company. But what's really needed is a coding meta-style guide (i.e. a guide to what a coding style guide should address).

For example, a coding style guide should consider the structure of a C file, e.g. ordering items within a file based on some defined criteria, such as:

  • Context then definition
  • External then Internal
  • Public then Private
  • Functional grouping
  • Type grouping
  • Alphabetic sorting

The meta-style guide tells you that you should consider file structure; the actual style guide tells you, for your project, how a C file should be structured.

Googling the web hasn't thrown up any meta-style guides, so here at Feabhas we're undertaking to develop an open, community-driven meta-style guide. We haven't defined the best model yet (GitHub, Google+, etc.), but as soon as we do I'll ensure it's published here.

In the meantime feedback/comments on the meta-guide would be welcome.

UPDATE
I have subsequently come across the following resource, which is a great meta-guide: C Style: Standards and Guidelines. I highly recommend a visit.


Test Driven Development (TDD) with the mbed

One of the most useful fallouts from the acceptance of Agile techniques is the use of Test-Driven Development (TDD) and the growth of associated test frameworks, such as GoogleTest and CppUTest.

I won't get into the details of TDD here as they are well covered elsewhere (I recommend James Grenning's book "Test Driven Development for Embedded C" for a good coverage of the subject area), but the principle is:

  1. Write a test
  2. Develop enough code to compile and build (but will fail the test)
  3. Write the application code to pass the test
  4. Repeat until done

Obviously that is massively simplifying the process, but that’s the gist. The key to it all is automation, in that you want to write a new test and then the build-deploy-test-report (BDTR) cycle is automated.
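
As a flavour of step 1, a first CppUTest test might look like this (the LedDriver unit under test is invented for illustration):

#include "CppUTest/TestHarness.h"
#include "CppUTest/CommandLineTestRunner.h"

#include "LedDriver.h"    // hypothetical unit under test

TEST_GROUP(LedDriver)
{
};

TEST(LedDriver, LedsAreOffAfterCreate)
{
    LedDriver_Create();                      // step 2: stub these so the build succeeds...
    CHECK_EQUAL(0, LedDriver_GetState());    // ...then step 3: write the code to pass
}

int main(int argc, char** argv)
{
    return CommandLineTestRunner::RunAllTests(argc, argv);
}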

To build a TDD environment with the mbed I needed to solve the following obstacles:

  1. Build – using a TDD framework and building the project
  2. Deploy – downloading to the mbed
  3. Test – auto-executing the test code on the mbed
  4. Report – getting test reports back from the mbed to the host

Continue reading


Python – The everyman’s language

Python is a very nice language in many respects: enforced white-spacing promotes readability, it is extensible, and its inbuilt Read-Eval-Print-Loop interpreter, combined with its introspection capabilities, provides a very easy way to learn and get to grips with the language.

But that can’t be all, can it? Why Python?

One of the reasons behind the success of our course has been customers wanting a good language for developing automated testing scripts, and Python fits the bill brilliantly – it's fast (enough), approachable and has great support for the embedded platforms of today and tomorrow (read: Linux :))

In the scripting ring we have a number of contenders – Bash, Perl, Ruby, Lua, JavaScript – but each lacks that certain je ne sais quoi that makes Python so good. Or maybe it's just that the others don't quite do what I want: Perl has a syntax that makes me want to scratch my eyes out, and Bash is great on the command line but has control structures and compatibility issues that make the baby Jesus cry. Some of the others, though, are worth a look.

Lua is nice; I'm honestly a fan of Lua and have used it in previous projects where Python was just too big to embed (adding in Lua is a 'tiny' 400KB). But that's the issue: Python is a general programming language. I can quickly bring in web services, advanced numerical libraries, GUIs and scientific libraries, as well as built-in facilities like networking and threading. Lua simply isn't designed for the vast range of contexts that Python fits, and that's deliberate – it's not a general scripting language.

JavaScript is the in-vogue scripting language of the moment; it's easy to test and develop in the web browser and it has a C-style syntax that can appeal. But I worry about any language where I can type in the following and not have it shout an error at me…

[nick@zeus ~]$ gjs
gjs> +((+!![]+[])+(!+[]+!![]))
12

12? Of course it is. Go home, JavaScript. You're drunk.

I am seeing more and more interest in using JavaScript in the embedded space, one recent example being the new BeagleBone Black, which allows you to interact with the hardware using JavaScript and a Node.js back-end. JavaScript, though, is still too tied to web technologies to serve as a general system scripting language.

Ruby… well, I simply haven't found a good resource for learning about Ruby in the embedded space – that one is on me, sorry – but maybe I was just scared by the famous Wat talk (here's looking at you too, JavaScript).

Problem?

One thing that does let Python down, in my opinion, is the lack of a good developer environment. I appreciate that Python is easy to use and the interactivity is a massive boon, but showing IDLE to someone who has used Visual Studio and all its spoon-feeding goodness does make me a little sad.

IDLE

Approachable huh?

Line numbers? Stability? A caret that will still let you type when you misclick? Why do you need those when you can have… Detachable Menus!

It's easy to make fun, but IDLE seems un-maintained and could do with some TLC; it is still useful, though, as a learning tool to bridge the gap between Visual Studio and the command line. On the plus side, the debugger does bring some good insight into the operation of the code for first-timers.

Summary

Whenever I need to script something, mock up an interface, test a design, develop some back-end code or create a full application, Python is always there for me.
Python's versatility, compatibility and 'kitchen sink' approach make it a fantastic choice for almost everyone, from non-programmers through to the physicists at CERN using it to create black holes. It truly is the everyman's (and woman's) language.

So why not learn something new?

[nick@zeus ~]$ python
Python 2.7.3 (default, Aug 9 2012, 17:23:58)
[GCC 4.7.1 20120720 (Red Hat 4.7.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import antigravity

Rehosting ARMCC for the mbed with CMSIS-DAP

In this posting I will look at porting the C standard library output (e.g. puts/printf) to use a UART rather than the default ARM/Keil semihosting.


In my last post, I looked at getting basic user I/O out from a native mbed via UART0 to a terminal emulator (e.g. Tera Term). This was driven by the fact that, currently, neither printf (via semihosting) nor ITM_SendChar functions on the mbed. Unfortunately, my solution uses a proprietary API (init_serial0, putchar0, etc.) rather than puts, printf, etc.

ITM_SendChar uses the ITM (Instrumentation Trace Macrocell) on the Cortex-M3 core, which in turn uses a Trace Port (either SWO or 4-pin) to send messages. To get output from a Trace Port you need a debug unit with trace capabilities, e.g. a ULINK or J-Link device. The current implementation of CMSIS-DAP does not support any trace capabilities; however, I am led to believe that ARM are planning to add some trace capabilities in future versions or variants of CMSIS-DAP (no timeframes).

To reference the ARM website:

Semihosting is implemented by a set of defined software instructions, for example, SVCs, that generate exceptions from program control. The application invokes the appropriate semihosting call and the debug agent then handles the exception. The debug agent provides the required communication with the host.

On a Cortex-M3 (ARMv7-M) you would typically see the "BKPT 0xAB" opcode instead of SVCs. For the same reason as with ITM_SendChar, semihosting is currently not supported on the mbed.

So, ideally, it would be nice to still be able to use puts/printf (the greatest debug tool of all) but redirect the output to our UART; i.e. rehosting.

Rehosting in the Keil environment is very easy, once you know how! It is easy to go down a couple of dead ends, which hopefully I’ll help you avoid.

First, in our main, where we’re using printf, we need to include the following pre-processor directive in main.c:

#pragma import(__use_no_semihosting_swi)
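
The rest of the post fills in the details, but the overall shape of the retargeting layer is roughly this (a sketch only, assuming the putchar0 UART routine from my previous post; not necessarily the final code):

#include <stdio.h>

extern void putchar0(char c);    // UART0 transmit from the previous post (signature assumed)

struct __FILE { int handle; };   // minimal FILE implementation
FILE __stdout;

int fputc(int ch, FILE* f)       // printf/puts funnel through here...
{
    putchar0((char)ch);          // ...and out via UART0
    return ch;
}

int ferror(FILE* f)
{
    return 0;                    // no error tracking in this sketch
}

void _ttywrch(int ch)            // also required once semihosting is disabled
{
    putchar0((char)ch);
}

void _sys_exit(int return_code)
{
    for (;;) { }                 // nowhere to exit to on the target
}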

Continue reading


User I/O from mbed with CMSIS-DAP

Following on from my last posting regarding using native C/C++ on the mbed, I have found that I currently cannot get output via the standard CMSIS ITM_SendChar function, as used in the Cortex-M hard fault handler (I am currently in dialogue with the guys at ARM trying to resolve this).


In the standard mbed environment, the mbed can communicate with a host PC through a "USB Virtual Serial Port" over the same USB cable that is used for programming. Output is via printf(), e.g.:

#include "mbed.h"

int main()
{
    printf("Hello World!\n");
}

To achieve the same output, an mbed Serial object can be defined, e.g.:

#include "mbed.h"

Serial pc(USBTX, USBRX); // tx, rx

int main()
{
    pc.printf("Hello World!\n");
}

Currently, with a native minimal project, the semihosting of printf is not supported. This can be overcome by "re-targeting" the project – I'll cover that in a future post – but for now there is a simple way of getting basic user I/O.

User I/O via UART0

Luckily for us, if we push characters out over the UART0 serial interface they are transmitted via the same channel that the mbed Serial object uses. To test this out I quickly (using the best agile techniques of course) put together a very basic UART driver, the transmit side of which is sketched below. Continue reading
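
For reference, the polled transmit path of such a driver is tiny (a sketch assuming the CMSIS LPC17xx register definitions for the mbed's LPC1768; pin-select and baud-rate initialisation are omitted, and this is not necessarily the post's exact code):

#include "LPC17xx.h"

void putchar0(char c)
{
    while ((LPC_UART0->LSR & (1u << 5)) == 0)    // wait for THRE: transmit holding register empty
    {
    }
    LPC_UART0->THR = c;
}

void puts0(const char* s)
{
    while (*s)
    {
        putchar0(*s++);
    }
}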
