29 Dec 2014

Generating Yocto Documents in openSUSE 13.2

If you want to keep up-to-date with the latest Yocto Project documentation, you should be cloning git://git.yoctoproject.org/yocto-docs.

Once cloned, you can build (for example) a PDF version of the Developer's Manual by:
$ pushd documentation
$ make pdf DOC=ref-manual
$ popd
However, for some reason when I try this on my openSUSE 13.2 system I get:
cd dev-manual; ../tools/poky-docbook-to-pdf dev-manual.xml ../template; cd ..
warning: failed to load external entity "http://docbook.sourceforge.net/release/xsl/current/template/titlepage.xsl"
cannot parse http://docbook.sourceforge.net/release/xsl/current/template/titlepage.xsl
A quick work-around for this is to (temporarily) uninstall the docbook-xsl-stylesheets package:
# rpm -e --nodeps docbook-xsl-stylesheets
Once done building the Yocto Project documentation, simply re-install the package:
# zypper install docbook-xsl-stylesheets

22 Dec 2014

The Yocto Project: Introducing devtool

Paul Eggleton, and a group of his co-workers at the Intel Open Source Technology Centre, have been working on a new tool to help developers and build engineers work better and more efficiently together. Integrating The Yocto Project into a developer's workflow has traditionally been a source of pain, and has held back some projects from adopting The Yocto Project and its tools.

I have written a quick tutorial on this new tool (called devtool) which you can find here. Enjoy!

7 Nov 2014

FSOSS 2014 Report - The 2020 Datacentre

Chris Tyler gave a very thought-provoking and amusing talk at this year's FSOSS entitled The 2020 Datacentre.

For this talk Chris pretended (I think) to be a time traveler who had just returned from the year 2020 and was giving this talk to enlighten us about what had happened to a typical datacentre (and computing in general) during the intervening years.

What made the talk so humorous was how both Chris and the audience (!) stayed "in character" and used the past, present, and future tenses as though Chris really were from 2020. Even when asking questions, audience members would phrase a question as "...at what point did <technology> become cheap..." instead of "...when do you think <technology> will become cheap...".

But for me all of this "staying in character" also made the talk very thought-provoking. It was very interesting to pretend as though someone did live in a time when (as one example) NVDIMMs are ubiquitous. If NVDIMMs are everywhere, do people still use (traditional) hard disks? If not, what do you see when you type "ls"? Do you see files, or memory? How does one see memory as the output of an "ls" command?

Thanks Chris, for a very interesting talk!

28 Oct 2014

FSOSS 2014 Report - Project Management for Open Source Development

I had the pleasure of being able to attend a couple sessions from this year's FSOSS 2014 Symposium. One such session was a presentation by David Zinman titled: Project Management for Open Source Development.

It may seem strange, for some, to put "project management" and "open source development" together in the same sentence. But as more and more companies adopt open source, there will be more interest in adopting traditional management techniques with respect to open source work (maybe not for the open source projects themselves, but certainly for a company's involvement with such projects). In other words: where there are schedules, so shall there be management :-)

David has been working for many years as a project manager at Linaro. Linaro is a company which employs engineers (who develop software in the open) and which has adopted an agile project management model with which to manage said engineers. As a consequence, David has much personal experience on which to draw for a presentation such as this.

It seemed to me as though the main point of David's talk was that the success of any project hinges on communication and collaboration, regardless of whether a project is closed or worked on in the open. In addition to presenting his slides, David shared his knowledge with the audience by way of anecdotes and even an exercise to emphasize the points he was making in his talk.

Overall David's talk went very well and there ended up being more questions from the inquisitive audience than could be answered during the session.

23 Jun 2014

Integrate valgrind with your Testing

When testing your development work (automatically, preferably), it would be nice if the case of "forgetting to free memory" (as detected by valgrind) could be reported as a failure. To be honest, it strikes me as odd that valgrind doesn't return some sort of error status by default when it detects memory which was not freed by the developer.

Given the following code (which has an obvious memory leak, but returns a good status):

/*
 * Copyright (C) 2014  Trevor Woerner <trevor.woerner@linaro.org>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
        char *ptr;

        ptr = malloc(sizeof(*ptr) * 50);
        if (ptr == NULL) {
                perror("malloc()");
                return 1;
        }
        strcpy(ptr, "hello");
        printf("ptr: %s\n", ptr);
        return 0;
}

Running valgrind against it:

$ valgrind ./memleak
==31562== Memcheck, a memory error detector
==31562== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==31562== Using Valgrind-3.10.0.SVN and LibVEX; rerun with -h for copyright info
==31562== Command: ./memleak
ptr: hello
==31562== HEAP SUMMARY:
==31562==     in use at exit: 50 bytes in 1 blocks
==31562==   total heap usage: 1 allocs, 0 frees, 50 bytes allocated
==31562== LEAK SUMMARY:
==31562==    definitely lost: 50 bytes in 1 blocks
==31562==    indirectly lost: 0 bytes in 0 blocks
==31562==      possibly lost: 0 bytes in 0 blocks
==31562==    still reachable: 0 bytes in 0 blocks
==31562==         suppressed: 0 bytes in 0 blocks
==31562== Rerun with --leak-check=full to see details of leaked memory
==31562== For counts of detected and suppressed errors, rerun with: -v
==31562== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
$ echo $?
0

By default valgrind always returns the return value of the application under test; if the app returns zero, valgrind will return zero. The first step toward getting valgrind to return something other than what the test application returns is to define what it should return if it detects an error:

--error-exitcode=<number>

By itself this is not enough. In addition you also need to explicitly ask valgrind to perform a leak check:

$ valgrind --error-exitcode=22 --leak-check=yes ./memleak
==32424== Memcheck, a memory error detector
==32424== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==32424== Using Valgrind-3.10.0.SVN and LibVEX; rerun with -h for copyright info
==32424== Command: ./memleak
ptr: hello
==32424== HEAP SUMMARY:
==32424==     in use at exit: 50 bytes in 1 blocks
==32424==   total heap usage: 1 allocs, 0 frees, 50 bytes allocated
==32424== 50 bytes in 1 blocks are definitely lost in loss record 1 of 1
==32424==    at 0x4C280CD: malloc (vg_replace_malloc.c:292)
==32424==    by 0x400651: main (in /home/trevor/devel/code/doodles/memleak/memleak)
==32424== LEAK SUMMARY:
==32424==    definitely lost: 50 bytes in 1 blocks
==32424==    indirectly lost: 0 bytes in 0 blocks
==32424==      possibly lost: 0 bytes in 0 blocks
==32424==    still reachable: 0 bytes in 0 blocks
==32424==         suppressed: 0 bytes in 0 blocks
==32424== For counts of detected and suppressed errors, rerun with: -v
==32424== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0) 
$ echo $?
22

With these options, valgrind can be integrated into an automated test framework to provide failures if someone forgets to explicitly free allocated memory. Being able to specify the value valgrind will return in this case makes it easy to differentiate between cases where the app fails and cases where valgrind detects an issue.
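
One sketch of such an integration (this wrapper and its names are my own, not part of valgrind or of any test framework) might look like:

```shell
#!/bin/sh
# Sketch: run a command under valgrind (if available), mapping valgrind's
# findings onto exit code 22 so a harness can distinguish "the app failed"
# from "valgrind found a problem".
run_leakchecked() {
        if command -v valgrind >/dev/null 2>&1; then
                valgrind --error-exitcode=22 --leak-check=yes -q "$@"
        else
                "$@"    # no valgrind on this host: just run the command
        fi
        status=$?
        if [ "$status" -eq 22 ]; then
                echo "LEAK: $*"
        elif [ "$status" -ne 0 ]; then
                echo "FAIL: $* (exit $status)"
        else
                echo "PASS: $*"
        fi
        return $status
}

run_leakchecked /bin/true
```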

2 May 2014

ELC2014 Report - OE/Qt

On Tuesday, at the Linux Foundation's Embedded Linux Conference in San Jose, Denys Dmytriyenko gave a fabulous talk entitled "Qt5 & Yocto - adding SDK and easy app migration from Qt4" where he talked about some work he has been doing and is hoping to present for inclusion into the OE mainline.

In essence, if you have a Qt application that you want to add to your OE build and you want it to work regardless of whether your build is adding Qt4 or Qt5 to your device, you will most definitely want to take a look at the work Denys is doing. Specifically have a look at meta-arago and the qt-provider.bbclass and qt-vars.bbclass.

He then discussed how someone would use the various SDK options to work on their Qt code in conjunction with OE. I won't repeat that information here since it's all readily available in the docs and various other places.

30 Apr 2014

ELC2014 Report - MTP

Yesterday afternoon/evening Linus Walleij gave an enjoyable talk entitled "Fear and Loathing in the Media Transfer Protocol" at the Linux Foundation's Embedded Linux Conference (ELC) 2014 in San Jose. Linus is a very good speaker and appears quite comfortable in front of a crowd; his talk was highly informative and often sprinkled with humorous anecdotes.

For me the best part of the talk was the fact I had never heard of the MTP, so it was a great opportunity to learn something new.

MTP is an extension to the Picture Transfer Protocol (PTP). You know when you're connecting your USB device to a Linux host and the first thing the instructions you're following say is "make sure you put the device in 'mass storage' mode and not 'PTP'"? This talk is about that other protocol.

The gist of the MTP (and PTP) was to design a transfer protocol robust enough to survive someone ripping out the USB cable in the middle of a transfer. Additionally, the MTP was designed to handle not only the media data itself (for example the video or the music) but also all the metadata associated with a particular object, back in a day when mobile electronics didn't support sophisticated (hierarchical) file systems. So, given a flat file layout, the protocol has to describe an album with tracks and provide titles, composers, performers, cover art, and other metadata associated with an object such that it can all be presented coherently to the user. The MTP also contains provisions for other operations too, such as a device telling a host what capabilities it has or what storage areas it contains.

From the sounds of it, implementing the MTP is the fine art of navigating a minefield while driving a large, half-working vehicle with a broken GPS. This work has been plagued mostly by a standard which came out late, and to which neither device implementors nor host operating systems have adhered, either before or after the specifications were drawn. As such, the code has many tricks and special cases as it tries to do the right thing in all situations. This is often the case when a device manufacturer only cares to get their product minimally working with a specific version of a specific OS. As such, Linus warned that your best bet is to try to get an old device working with older software, and a newer device working with newer software; trying to do the converse will either not work so well, or will fail altogether.

With the advent of Android, the MTP has been given a new lease on life; it isn't an old protocol anymore. There is much to do yet with MTP and help is appreciated. Linus mentioned this is entirely a hobby project for him, and that he would welcome new apprentices and/or co-maintainers.

5 Apr 2014

A day in the life...

This video is so perfect... I could literally write a book about it. If you're studying to be a software developer and are wondering what your future will hold, don't imagine your life as the next Zuckerberg; watch this video. Watch this video until it is no longer funny, because it is not funny; this will be your career:

What it’s like to be an engineer in a sales meeting

25 Mar 2014

Using bmaptool To Create A Memory Card

Here's the scenario: I have just used OE to build a core-image-minimal which I want to run on my Wandboard-dual. I insert my 4GB microSD card into my desktop, use dd to write the image to the card, insert the card into my board, boot, and get:

Size=62.0M Used=19.1M

But it's a 4GB card?! Where's the rest of my disk?

OE has no idea how big of a card you want to use, so by default it makes an image that is just a bit bigger than required (or 8MB, whichever is larger).

Writing this small image is quick:
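
The invocation is something along these lines (the image file name and device node here are placeholders for your actual image and memory card):

```
$ sudo dd if=core-image-minimal.sdcard of=/dev/sdX bs=4M
$ sync
```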


If I want to use (roughly) the entire 4GB card I simply ask OE to build an image of that size. Edit conf/local.conf and add/edit:
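One way to do this is via IMAGE_ROOTFS_SIZE (the value is in kilobytes; the number below is just my approximation of "most of a 4GB card"):

```
IMAGE_ROOTFS_SIZE = "3800000"
```
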
Now when I build my image, the output from OE will be roughly 3.7GB in size. Writing this image to a card will take much longer:


The funny thing is, the data hasn't changed; I'm still using the same amount of data on the card. What has changed is that I now have access to (roughly) the entire card, but at the cost of having it take ~160 times longer to write the image!

Size=3.4G Used=86.9M

In this case we're wasting lots of time (and flash write cycles) writing empty portions of the image to the disk. This is where bmaptool comes in. In essence, bmaptool looks at your image and determines which parts are important, and which are empty. When you go to actually write your image, only the non-empty parts are transferred -- saving you lots of write time (and flash cycles).

Using bmaptool is a two-step process:
  1. use bmaptool create to create a mapping file
  2. use bmaptool copy to write your image to a disk (with the help of the mapping file you just created)
Applying bmaptool to our 4GB image:
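
The two commands look roughly like this (the image file name and device node are placeholders):

```
$ bmaptool create core-image-minimal.sdcard > core-image-minimal.bmap
$ sudo bmaptool copy --bmap core-image-minimal.bmap core-image-minimal.sdcard /dev/sdX
```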


It's not the 18 seconds from above (i.e. dd'ing the 80MB image), but it's still better than the 49 minutes required to dd the 4GB image. The image written with bmaptool works:

Size=3.4G Used=86.9M

Note that if I use bmaptool on the first (80MB) image, there isn't much savings:


The real benefits are seen when trying to write an image such that most of the card is then available for use, and most of the image to be written is empty.

21 Jan 2014

OE/Yocto Bug Weekend - Jan 17 to 20, 2014 -- RESULTS

"Had a bug sprint week last week, resolved 26 bugs, better than we normally do."
-- https://www.mail-archive.com/yocto@yoctoproject.org/msg17312.html
Awesome!! Thanks to everyone who participated, especially on such short notice :-D

17 Jan 2014

OE/Yocto Bug Weekend - Jan 17 to 20, 2014

Starting today the OE and Yocto projects are having a bug squashing weekend! The purpose is to raise awareness of the ever-increasing open bug counts and to inspire people to take a look at the issues and see if they can work on fixing one or more of them.


Please take the opportunity to have a look at the open issues in the bugzilla database. If you don't have an account, please consider signing up. Play with the "Search" capability and see if there are issues to which you might want to contribute.

Obviously if you're a maintainer or some sort of a developer there are issues you could consider addressing. But there are also lots of issues an OE/Yocto "user" could investigate as well -- for example there are documentation issues, there are several issues in the "NEEDINFO" state, and sometimes just being able to reproduce a bug (or not) and confirm (or not) that an issue can be demonstrated on more than one host can be valuable information for the person who does eventually get assigned to solve the problem.

There are usually plenty of friendly, knowledgeable people around who can help. You can use the mailing lists, IRC channels, or bugzilla itself to communicate.

Thanks for your participation!

13 Jan 2014

Building gcc-arm-embedded on openSUSE 13.1

Ideally, if you were to start a Cortex-M-based development project today, you'd simply download the "latest and greatest" from GCC, build a cross-compiler targeting your device/CPU, and get coding. The reality, unfortunately, is that the latest improvements to GCC to support the latest CPUs tend to be found outside the latest GCC releases (as they await inclusion and/or the next release).

Over time, the "preferred" GCC compiler to use for Cortex-M development changes. Originally everyone used the compilers from CodeSourcery (now Mentor). Then the summon-arm-toolchain (SAT) became quite popular[1]. But even the people behind it have moved on to the current GCC compiler du jour: gcc-arm-embedded. The gcc-arm-embedded toolchain does seem to have good backing, as it is maintained by ARM employees.

In my experience (and according to the download statistics), most people prefer to download pre-built binaries and simply install them into their system. If you're like me, however, you prefer to compile the toolchain yourself... just for fun.

The sources for every release of gcc-arm-embedded are readily available. Unfortunately the tarballs of each of the components of the toolchain are themselves wrapped up in a mega-tarball. So if you download, for example, the source to the 4.8-2013-q4 release and unpack it, you'll end up with more tarballs (one for each of the components) and a set of home-brew bash build scripts. The problem with not making the sources for each component available separately is that it becomes harder to integrate these sources into existing embedded development frameworks (such as OE, crosstool-ng, buildroot, etc).

Surprisingly, the verified build environment is a 32-bit Ubuntu 8.10 host! In any case, using the provided "home-brew" bash build script works reasonably well for me on my 64-bit openSUSE 13.1 machine. Ironically, the only place where my build messes up is when it's trying to build the documentation. Building the documentation is fairly pointless, and wastes time and disk space.

Starting from a fresh, basic, default install of 64-bit openSUSE 13.1, the steps I use to build the 4.8-2013-q4 release of gcc-arm-embedded are as follows. Make sure, before starting, you have roughly 20GB of hard disk available.

$ sudo zypper -n install \
        autoconf \
        m4 \
        automake \
        libtool \
        patch \
        make \
        makeinfo \
        flex \
        bison \
        termcap \
        ncurses-devel \
        mpfr-devel \
        gmp-devel \
        mpc-devel

<enter password>
$ wget https://launchpad.net/gcc-arm-embedded/4.8/4.8-2013-q4-major/+download/gcc-arm-none-eabi-4_8-2013q4-20131204-src.tar.bz2
$ bzip2 -d < gcc-arm-none-eabi-4_8-2013q4-20131204-src.tar.bz2 | tar xfv -
$ cd gcc-arm-none-eabi-4_8-2013q4-20131204/src
$ find . -name "*tar*" -print | xargs -I% tar -xvf %
$ cd zlib-1.2.5
$ patch -p1 < ../zlib-1.2.5.patch
$ cd ../..
$ ./build-prerequisites.sh --skip_mingw32 2>&1 | tee LOG.prereq

At this point you need to apply the patch provided below to the build-toolchain.sh script. The point of this patch is to turn off the building of the documentation. Save the patch below to a file named build.patch, then carry on with the following instructions:

$ patch -p1 < build.patch
$ ./build-toolchain.sh --ppa --skip_mingw32 2>&1 | tee LOG.toolchain

The above should complete without issue. You'll find your results in the "install-native" folder. Be sure to add "~/gcc-arm-none-eabi-4_8-2013q4-20131204/install-native/bin" to your PATH so you can start using your freshly-built toolchain.
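
For example (assuming the tarball was unpacked in your home directory):

```shell
# put the freshly-built toolchain at the front of PATH
export PATH="$HOME/gcc-arm-none-eabi-4_8-2013q4-20131204/install-native/bin:$PATH"
```

After this, arm-none-eabi-gcc (and friends) can be invoked directly.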


--- old/build-toolchain.sh      2013-12-03 13:52:00.000000000 -0500
+++ new/build-toolchain.sh      2014-01-12 14:39:12.490232430 -0500
@@ -133,7 +133,7 @@
     make -j$JOBS

-make install install-html install-pdf
+make install

@@ -212,16 +212,6 @@

 make install

-make pdf
-cp $BUILDDIR_NATIVE/newlib/arm-none-eabi/newlib/libc/libc.pdf $INSTALLDIR_NATIVE_DOC/pdf/libc.pdf
-cp $BUILDDIR_NATIVE/newlib/arm-none-eabi/newlib/libm/libm.pdf $INSTALLDIR_NATIVE_DOC/pdf/libm.pdf
-make html
-copy_dir $BUILDDIR_NATIVE/newlib/arm-none-eabi/newlib/libc/libc.html $INSTALLDIR_NATIVE_DOC/html/libc
-copy_dir $BUILDDIR_NATIVE/newlib/arm-none-eabi/newlib/libm/libm.html $INSTALLDIR_NATIVE_DOC/html/libm

@@ -302,7 +292,7 @@

-make install install-html install-pdf
+make install

 rm -rf bin/arm-none-eabi-gccbug
@@ -400,7 +390,7 @@
     make -j$JOBS

-make install install-html install-pdf
+make install



[1] Note: the SAT isn't really a toolchain in the same way the others are toolchains; technically the SAT is just a home-brew bash script to create an ARM toolchain based on the Linaro toolchain releases.