The STM32 range of microcontrollers has a built-in bootloader that may be used to update the firmware running on the MCU. Depending upon the STM32 variant, this update may be done over a USART, USB, or SPI interface. The usual way of triggering the bootloader is via the BOOT1 and BOOT0 pins of the MCU:
Typically the BOOT1 pin is held high in hardware and bootloader mode is controlled by a jumper on BOOT0. These pins are sampled when the MCU is reset, so entering the bootloader requires access to a reset button (or a power-cycle) as well as the BOOT0 jumper.
For devices that are deeply embedded in a system, accessing a hardware jumper for BOOT0 control may not be practical. Indeed, on some systems where General Purpose Input/Output (GPIO) pins are at a premium, the designer may need the BOOT0 pin as a GPIO.
What is required in these situations is a means of forcing the MCU into system bootloader mode from the resident firmware. The following code snippet enables the system bootloader to be entered from the main firmware, triggered, for example, by a USB or USART command.
typedef void (*pFunction)(void);
/* Base address of system memory (the bootloader); device-specific, see ST's AN2606 for your variant */
const uint32_t ApplicationAddress = 0x1FFF0000;
register uint32_t JumpAddress = 0;
/* Initial stack pointer value for the bootloader; device-specific */
register uint32_t addr = 0x20018000;
static pFunction Jump_To_Application;
/* We start here: disable all interrupts and clear any pending flags */
uint32_t value = 0xFFFFFFFFu;
NVIC_ICER (0) = value;
NVIC_ICER (1) = value;
NVIC_ICER (2) = value;
NVIC_ICPR (0) = value;
NVIC_ICPR (1) = value;
NVIC_ICPR (2) = value;
/* Disable the SysTick timer */
STK_CSR = 0;
/* Reset the RCC clock configuration to the default reset state ------------*/
/* The reset value of 0x83 includes the HSION bit set */
RCC_CR |= (uint32_t) 0x00000082;
/* Reset CFGR register */
RCC_CFGR = 0x00000000;
/* Disable all interrupts */
RCC_CIR = 0x00000000;
FLASH_ACR = 0;
__asm volatile ("isb");
__asm volatile ("dsb");
/* Fetch the system bootloader's entry point from its vector table */
JumpAddress = *((uint32_t *) (ApplicationAddress + 4));
Jump_To_Application = (pFunction) JumpAddress;
/* Set up the stack pointer for the bootloader */
__asm__ ("mov sp,%[v]" : : [v]"r"(addr));
/* Jump to the system bootloader */
Jump_To_Application();
Once again I find that the best technology doesn’t always prevail. In the mid-1970s Sony launched the Betamax video format into the consumer marketplace and the “Videotape Wars” began; in spite of the superior quality of the video recordings made with the Betamax format, VHS prevailed and won the battle.
There have been many similar examples of products performing up to and beyond competing products and still failing to win a significant market share. Such is the case of Windows Phone; I have been a Windows Phone user for quite a few years, all the way up until this last week when I finally jumped to the "Dark Side" and purchased a Google Pixel phone with its Android operating system. Windows Phone is/was a great product; the almost seamless integration with the Windows desktop and tablet ecosystem reflected a well-conceived and well-designed system. In the latest version of Windows Phone (Windows 10) that ecosystem-wide integration is outstanding. So, why did it fail so badly to win a significant market share? The answer, as in all previous technology/product wars, is simple: marketing, the art of creating desire for one product over another, in the case of phones, through brand recognition. Apple did a superb job of creating a hip, vibrant image for their products. Google took another approach, ensuring that phone manufacturers had easy access to Android and allowing them the freedom to create products that reflected each company's image. Meanwhile, Microsoft had a weak, often confusing marketing approach that just didn't win the hearts and minds of the phone manufacturers or the consumer.
So, what does this have to do with embedded systems and software? It is a cautionary tale: even if you have the best product in its category, or establish a new, potentially exciting category, it will only succeed if people know about it. Technology and performance shortfalls do not appear to be a gating factor in new product successes; promotion is a significant factor in a product's success.
Next time you have that brilliant idea bear this in mind. Identify the demographic of your potential customers and prepare to market to them. Plus, in this age of social media, live streaming, podcasts, and blogs it is valuable to establish a reputation, one that may be transferred to your eventual product release.
As a part of my Grow with Google Nanodegree I submitted my latest project for review yesterday. Imagine my surprise this morning when the reviewer rejected it because it crashed on startup. How could that be? I mean, if it crashes on startup I wouldn’t have been able to test it at all.
In situations like this my experience tells me the issue must be related to the environment in which the app was being run. What was the difference between my setup and that of the reviewer? The only feedback from the reviewer was a stack trace of the crash. It looked like a null object exception; again, why wasn't I seeing it?
Based upon my experience I realized that the issue was probably due to the reviewer running my app for the first time on his computer, whereas it had been run many times on mine. For testing I use the Android Studio device emulator (AVD), so I used AVD to wipe all the data from the emulated device in order to get my setup as close as possible to that of the reviewer. Once I did that I was able to reproduce the exact problem reported. Not only that, I was able to fix the issue in a few minutes and re-submit the project for review.
The takeaway of this exercise is that even experienced programmers sometimes forget that the testing of apps needs to be done in a way that gets as close as possible to the end user's "first contact" with them. A small error on my part, introduced several revisions ago, turned out to be a "ticking bomb" when the app was installed on a device for the very first time.
The Black Magic Debug Probe (BMP) v2.1 was launched through a Kickstarter campaign. It is an Open Hardware, Open Source device that enables attachment of a source-level debug tool to an embedded Cortex-M MCU using either JTAG or SWD. The software for the device has a comprehensive build system that allows building for multiple platforms in addition to the Blacksphere v2.1 hardware. However, for a Windows-centric developer, building the code can be challenging. This article details the process for building the BMP software on a Windows development computer.
Installation of required tools.
Since the build system for BMP requires some Linux features, the first prerequisite is the Cygwin Linux-like environment for Windows. It may be downloaded from cygwin.com and installed by following the instructions on the web site. In addition to the default packages, also select and install:
make: The GNU version of the ‘make’ utility
BMP makes use of the Open Source Cortex-M3 hardware abstraction library libOpenCM3, a submodule of the BMP repository. This library has some dynamic file generation that requires Python, which may be downloaded and installed from the python.org web site. Note that it is recommended to use v2.7 of Python; there may be issues using v3.x. Once Python is installed, add the path to "python.exe" to your system path so that it is found when executed from the Cygwin terminal. To test this, open the Cygwin terminal and execute "python --version":
If you do not already have Git installed, go to the git web site and download and install it. When the installation is complete you may test it by entering the command "git --version":
Finally, the GNU ARM Embedded Toolchain should be installed from the developer.arm.com web site. When the installation-complete dialog appears, make sure to check the "Add path to environment variable" option. Once the installation is complete, start the Cygwin terminal and execute the command "arm-none-eabi-gcc --version" to check the installation:
Getting the source code.
I like to keep my project files under a single root folder to make backup a little easier, rather than having to back up multiple locations. This folder is named "C:\DataRoot\Projects" and I clone any GitHub repositories into it. This requires an extra step or two to make the project source available to the Cygwin terminal, as described below.
The source code for the BMP is hosted on GitHub. The recommended procedure to acquire the source is to follow these steps:
Clone the repository, or fork and clone. If you are using git on the command line it would be something like:
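Assuming the upstream Blacksphere repository on GitHub, the clone command would be:

```shell
git clone https://github.com/blacksphere/blackmagic.git
```

Use your own fork's URL instead if you forked first; a plain clone is sufficient here, since the submodule is initialized separately in the next step.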
Initialize the libOpenCM3 sub-module using the following steps:
Set your current working directory to the root of the BMP source, e.g. <path to the root>/blackmagic.
execute -> git submodule init
execute -> git submodule update
If you cloned the BMP repository into the Cygwin file structure you are ready to begin the build process. If, however, you have the source outside the Cygwin file structure, you will need to mount it for Cygwin to work with it. My BMP repository is cloned into "c:\DataRoot\Projects\blackmagic", and I use the following commands in Cygwin to mount the source:
Even within the Cygwin environment there is an issue that prevents building all of the various libOpenCM3 MCU libraries using the supplied Makefile. The following procedure is based upon information gleaned from the article "Install libopencm3 for Cygwin" on the CompuSilli blog. I would love to cite the author directly but so far have been unable to identify them.
For the build procedures described below I will use “<project root>” to indicate the folder into which the BMP repo was cloned. On my computer that would be “c:\DataRoot\Projects\blackmagic.”
Start by running the Cygwin command to open a Cygwin terminal window. In order to access the BMP repo it is necessary to mount it in the Cygwin environment; I chose to create a folder in the Cygwin home folder called "projects" and, inside that, a folder called "blackmagic."
Next, mount the actual "blackmagic" repo on the Cygwin "home/projects/blackmagic" folder just created:
mount <project root> ~/Projects/blackmagic
The main makefile of the BMP build runs some Python scripts that generate header files required for the libOpenCM3 builds. This presents a small catch-22, because the main makefile will not run to completion without the library files being built for the chosen platform. However, the main makefile runs the scripts first, so we can run it just to generate the required header files. So, even though it will end in an error, do the following next:
change to the root folder of the repository
In my case ~/projects/blackmagic
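Concretely, that expected-to-fail first run looks like this (the path reflects my mount point from above; the error when make reaches the missing library is normal):

```shell
cd ~/projects/blackmagic   # repository root
make                       # runs the header-generating scripts, then stops with an error
```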
Now we need to choose the library to build for the platform supporting the BMP. If you are building software for the BMP v2.1 hardware you will need to build the library for the STM32F1xx; use the following commands to build the library.
To build the “native” BMP software for the v2.1 hardware use the following commands:
To build for a different platform, for example the Nucleo STLinkV2 replace the last line above with:
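Pulling these steps together, the build commands look something like the following; exact target names can vary between BMP and libOpenCM3 releases, so treat this as a sketch:

```shell
# Build the libOpenCM3 library for the STM32F1xx (BMP v2.1 hardware)
cd ~/projects/blackmagic/libopencm3
make lib/stm32/f1

# Build the "native" BMP firmware for the v2.1 hardware
cd ../src
make clean
make

# For a different platform, e.g. an ST-Link on a Nucleo board:
make PROBE_HOST=stlink
```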
Flashing the new software onto the BMP
Updating the firmware on Windows requires a driver installation and the use of a tool called dfu-util.
The driver (on Windows 10) is installed using a utility from http://zadig.akeo.ie/. Black Magic Probe uses a driver called libusbK, a selection offered by the Zadig tool.
Place the BMP into DFU mode by cycling the power while holding down the small button on the probe. If the probe enters DFU mode all three LEDs will flash. Then in a command window enter the following command:
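As a sketch of that command: the USB IDs below are the BMP's application and DFU-bootloader IDs, 0x08002000 is the application's flash address above the bootloader, and the path to the built binary should be adjusted to match your build:

```shell
dfu-util -d 1d50:6018,:6017 -s 0x08002000:leave -D src/blackmagic.bin
```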
Currently, I am taking a Udacity Android Developer Nanodegree after receiving a Grow with Google Scholarship. My current project for the course is the first stage of an app enabling the user to browse The Movie Database (TMDb) for movies. This first stage shows a series of thumbnails filtered by either their popularity or user ratings. This is still a work in progress; my prototype main screen looks like this:
My next step was to clean up the look of this main screen a little by removing the white areas around the thumbnails and also to make the thumbnails a little larger. I made a few "tweaks" to the UI and tested them in Android Studio. When I came back to the project the next day and ran it, again from Android Studio, this is what I saw:
Ugh!!! What happened? I then spent a couple of hours trying to debug the issue, and it began to look like a problem getting the images from TMDb; however, the debugger in Android Studio revealed the images were being loaded correctly. By this point I was beginning to think I had inadvertently changed a project or SDK setting, but I failed to find anything. Time to shelve the project, eat, and sleep.
Next morning a new strategy came to mind. As I wrote in my previous blog, I use Git for source-code management and, taking my own advice, I had "committed often." So began a little source-code forensics: I fired up GitKraken, my preferred Git client. Here is a snippet of the repository for my project:
The main development branch is "develop"; the first step was to choose a previous commit on that branch and see if it worked. I was pretty sure I had not seen this issue in the commit named "Added basic toolbar and filter icons", so I created a branch there named "Test_1", checked it out, and ran it. It did not have the issue; therefore the issue was introduced between that commit and the current state of develop.
Next, I chose a commit between the "Added basic toolbar …" commit and develop, this time choosing the commit named "Filter Selection working," and created and checked out a branch there called "Test_2." Once again the app worked, so it looked like the commit named "Small UI teaks" should be my focus; I created and checked out a branch named "Test_3" off that commit. When run it showed the same issue as the develop branch: I had identified the set of changes that broke my application.
So, how to find the problem? Well, I knew the problem was introduced by the commit that is the basis of branch Test_3. GitKraken allows performing a "diff" operation on commits; I selected Test_2 and Test_3 and GitKraken showed me the files with changes between them:
In GitKraken, when one of the above files is selected, the differences between the two commits are displayed; for example, the file "FilterActivity.java" showed that I had removed a couple of log outputs:
It was a safe assumption that the above file was not the culprit, and the same was true for "MainActivity.java": just log changes. However, the two XML layout files had some changes worth investigating.
To test these changes I checked out Test_2 and manually applied each of the changes between Test_2 and Test_3. The culprit edit was not one I would have ever expected:
When I deleted lines #14 and #15 from Test_2 the app showed the issue; for some reason, having no spacing between rows/columns was messing up the GridView. I put the lines back, reduced the spacing to "1dp", and the app works as expected.
As you can see, following the guidance to "commit often" can provide an invaluable debugging tool. Go download GitKraken and follow the "golden rule"; then, when you have a problem that just refuses to yield to the debugger, fire up GitKraken and try some source-code forensics.
I am a long-time user of source code management tools, for many years I used Microsoft Visual SourceSafe (VSS). I tried several times to understand and use Git but found the learning curve rather steep and went back to my comfort zone of VSS. As the use of GitHub grew over recent years I came to the conclusion that the time was right to dive in and find out what Git was all about and start using it.
My first port of call when needing to learn some new technology over the last couple of years has been Udemy, and I was pleased to find the course "Git Complete: The definitive, step-by-step guide to Git." The course helped me get to grips with the fundamentals of Git, in particular the basic difference between VSS and Git. VSS works with the concept of a single master repository for the project source code. For me this meant hosting that repository on a local network-attached hard disk so that it could be accessed from any one of the several computers and virtual machines I use for my work. Git, on the other hand, does not have the same concept of a master repository. Git repositories can be hosted locally and/or on any number of remote computers. Within a repository there is a master branch, but no particular repository is considered the master. I encourage anyone wanting to know more about Git to take the course linked above.
Historically, Git is a command line tool, and I prefer to work in a UI most of the time. When I decided to start using Git for all my work I searched for a nice, stable UI client. Several are available, and initially I chose SourceTree from Atlassian. While it worked quite well there were some things I didn't like about it, so I switched to GitKraken from Axosoft, which I have been using exclusively for my projects for a couple of years. Axosoft is very active in bringing new features to GitKraken and offers excellent support for Pro version users. The Pro version allows commercial use and also adds some extra features. At the time of this writing the price is $49/year.
So, to address the title of this blog: why should you be using Git? Having good control of your project source code with a source code management system is the obvious answer; however, that is not the only reason. What I have found is that Git allows me to experiment with code changes and designs with greater freedom. I just don't get concerned about trying a radical update to my code for fear of breaking an already working project.
The reason for this is embodied in the best practice published on GitHub: "Commit early and often." With Git a "commit" is a snapshot of your code changes at that moment. More than that, however, Git marks each commit with a unique ID and provides the tools to manage these commits so that reverting to a previous commit is relatively easy. This means one always has the tools available to go back in time to before a code-breaking change was made. For me, being free to experiment with code in the secure knowledge that any wrong decision may be undone really makes a difference to my workflow. Plus, if you do make a code-breaking change that goes unnoticed for a while, Git allows comparison of file changes between any two commits. GitKraken offers this file compare in its UI; however, I prefer to use an external "diff" tool, Beyond Compare, which I have set up in GitKraken as my default diff tool.
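On the command line the equivalent comparison is a one-liner; the commit names and the file path here are placeholders, not from my actual repository:

```shell
# List which files changed between two commits
git diff --stat <good-commit> <bad-commit>

# Show the actual changes to one file of interest
git diff <good-commit> <bad-commit> -- app/src/main/res/layout/activity_main.xml
```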
If you would like to learn more about GitKraken and Git just visit the GitKraken channel on YouTube.
So, what are you waiting for? Sign up for the Udemy course, download GitKraken, and watch their tutorials. Your code productivity and creativity will both increase.
I just received the first two pre-production prototypes of the Wireless Debug Probe (WDBP) from MacroFab. After initial inspection and power-up I am so far very pleased with the quality of the work done. There are a few firmware issues to be addressed, and I will be publishing the results of the full testing and firmware updates in a future blog post.
For several decades I have designed, built, and tested my own PCBs for various products and projects. During all of that time I made use of traditional "through-hole" components. Beyond the schematic capture and PCB design software, no special tools were required for this process other than a good soldering iron and some good-quality wire snips to trim the component leads after soldering.
In recent years it has become increasingly difficult to source through-hole components, particularly for the modern microcontrollers that offer the kinds of features new designs require. The time was right to transition to Surface Mount Technology (SMT) components.
A while ago I began a new project: the design of a Wireless Debug Probe (WDBP) to connect a computer-based debugger to an embedded system using either JTAG or SWD. This design, which will be open-sourced upon release, is based upon the Blacksphere "Black Magic Probe" (BMP), an excellent open source debugging probe. The BMP itself is very, very small, using some of the smallest available SMT components.
The BMP is about 15mm x 35mm … REALLY small.
Since the WDBP is my first project using SMT components I decided not to use the smallest parts, so the PCB size is about 35mm x 90mm. While using larger components contributes to the size of the WDBP, an additional factor was the Wi-Fi module with its integrated antenna. Plus, I chose to add a footprint for the Tag-Connect TC-2050-IDC "bed-of-nails" connector for attaching my debug probe to the WDBP when debugging the firmware:
Here is a prototype of WDBP:
The WDBP prototypes have been built by a contact of mine who has been working with SMT components for a long time, so I avoided the need to purchase any specialized assembly tools. However, during testing there were a few issues that needed addressing, and it was at this point that I decided to investigate the minimum set of tools that would enable me to rework the prototypes.
The first issue when working with such small components is being able to visually inspect soldered joints. I tried a head-mounted magnifier, but the magnification was just not good enough to efficiently find any soldering issues. This led me to research and purchase a microscope. The one I chose is from AmScope, model number SW-3T24Z, their trinocular stereo microscope. I chose a trinocular 'scope so that at some point in the future I could add a camera to the setup. Also, after reading reviews of this microscope being used for SMT work, I purchased an additional lens to ensure the distance between the lens and the item under inspection was as great as possible, allowing tools to be used on the PCB.
Apart from some small tools like anti-static tweezers, the remaining tool investment was a hot-air rework station. This station permits a PCB to be held securely while hot air flows over it, allowing components to be removed, or simply reflowed to fix poor solder joints. Again, after some extensive Internet searches, I chose the Aoyue 866 rework station. It comes with not only the base rework unit but also a selection of nozzles for the hot air gun and a temperature-controlled soldering iron. It is a good starter kit for anyone setting out on the SMT journey.
So far, the above tools have worked out really well in testing my prototype PCBs. The next phase of the WDBP project is the pre-production prototypes, arriving in about 10 days. Having invested in this tool-set, I feel confident that any minor issues with the PCBs can be addressed.
In my previous post I wrote about interrupts and atomic operations in embedded systems. After getting bitten myself with just such an issue I thought I would write about the details of what I just experienced on my current project.
First, the background of the issue: the device I am working on connects to a wireless access point to provide an interface to some PC-based software. The issue was that when the PC software started up, the device would sometimes appear to drop messages or stop responding to them. The issue was hard to track down because if the device was run under debugger control the messages appeared to be received just fine. This led me to suspect that the code's logic was being corrupted by interrupts disrupting the management of messages out of the receive buffers.
When a message is received from the network it is placed into a circular buffer, the index of the next free location is updated, and the total number of characters in the buffer is updated. This all happens in the response to an interrupt from the wireless module. Since there was no related interrupt that could be disrupting this I turned my attention to the background task that unpacks the data in the circular buffer into packets for processing. However, what I saw there was my buffer manipulation being protected by the wireless module interrupt being turned off, just as I wrote about previously.
This deepened the mystery, since it appeared I was following good practice and keeping the buffer management code atomic, at least to the extent of protecting it from further interrupts from the wireless module, the only other code that manipulated the buffer variables.
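As a sketch of the arrangement just described; the names, buffer size, and interrupt stubs are illustrative, not the project's actual code:

```c
#include <stdint.h>

#define RX_BUF_SIZE 256u   /* illustrative size */

/* Shared between the wireless-module receive ISR (producer) and the
 * background task (consumer). */
static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint32_t rx_head  = 0;   /* next free location */
static volatile uint32_t rx_tail  = 0;   /* next byte to unpack */
static volatile uint32_t rx_count = 0;   /* bytes currently buffered */

/* Stand-ins for the real NVIC enable/disable calls on the target. */
static void wifi_irq_off(void) { /* NVIC disable on target */ }
static void wifi_irq_on(void)  { /* NVIC enable on target */ }

/* Called from the wireless module's receive interrupt: store the byte,
 * advance the free-location index, and bump the count. */
void rx_isr(uint8_t byte)
{
    if (rx_count < RX_BUF_SIZE) {
        rx_buf[rx_head] = byte;
        rx_head = (rx_head + 1u) % RX_BUF_SIZE;
        rx_count++;
    }
}

/* Background task: pull one byte out. The count/tail update is a
 * read-modify-write shared with the ISR, so the wireless interrupt
 * is held off around it. Returns 1 if a byte was retrieved. */
int rx_get(uint8_t *out)
{
    int got = 0;
    wifi_irq_off();
    if (rx_count > 0) {
        *out = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1u) % RX_BUF_SIZE;
        rx_count--;
        got = 1;
    }
    wifi_irq_on();
    return got;
}
```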
The next step was to instrument the two methods that enabled and disabled the wireless module interrupt. I did this to make sure that each disable call was matched by an enable call. The results of this test revealed that there were more enable operations than disable operations. After some thought I realized that the enable and disable methods themselves were not atomic; it was therefore possible for an interrupt to arrive during the execution of the disable call, before the interrupt had actually been disabled. What was needed was to track the enable and disable calls and only re-enable the interrupt when all callers had requested the re-enable.
To do this, a counter is incremented each time an interrupt disable is requested and decremented each time an enable is requested; the actual re-enable action is only taken when the counter decrements back to zero.
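A minimal sketch of that counting scheme, with the hardware enable/disable calls stubbed out for illustration (on a real Cortex-M target the counter update itself should also be protected, e.g. under a PRIMASK critical section, since the increment/decrement is a read-modify-write of its own):

```c
#include <stdint.h>

/* Illustrative stand-ins for the real NVIC calls; on the target these
 * would write the interrupt set-enable/clear-enable registers. */
static volatile int wifi_irq_enabled = 1;
static void hw_wifi_irq_enable(void)  { wifi_irq_enabled = 1; }
static void hw_wifi_irq_disable(void) { wifi_irq_enabled = 0; }

/* Depth of outstanding disable requests; zero means enabled. */
static volatile uint32_t disable_depth = 0;

void wifi_irq_disable(void)
{
    /* Disabling again while already disabled is harmless; just count it. */
    hw_wifi_irq_disable();
    disable_depth++;
}

void wifi_irq_enable(void)
{
    /* Only truly re-enable once every disable has been matched. */
    if (disable_depth > 0 && --disable_depth == 0)
        hw_wifi_irq_enable();
}
```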
This update appears to have resolved my lost message and non-received message issues.
So, if your system has what appear to be random failures, closely examine the data flow through it and the effects that interrupts may have on that flow. Also, double-check that the code used to manage the interrupts themselves is clean and does not make assumptions like mine did: when a caller requests that an interrupt be enabled, the code must ensure the action is only taken once every caller that requested a disable has also requested an enable.
I have been trying to track down a particularly tricky problem on my Wireless Debug Probe over the last few days. When embedded systems begin to perform in strange, apparently illogical ways, the culprit is usually related to interrupts disrupting the logic of the non-interrupt, or background, tasks.
While this turned out not to be the issue I was facing, the first things I checked were the potential impacts of interrupts. The first area of interest was whether there were any variables or code lines that the compiler may have optimized out. In general, 'C/C++' compilers these days have really good optimizers that remove unnecessary code or reduce it to a minimal number of instructions. The compiler does this by analyzing code in the context of the surrounding code. This can be problematic when, for example, a variable is examined in a background task and modified in an interrupt handler. A typical case is a boolean flag that gets asserted by an interrupt function and tested in a background function. As the compiler examines the code around the background test, it may find no other references to the variable and decide it can optimize the test away, or change it in some other way. The result is that the background task may never detect the boolean being asserted. Fortunately, modern 'C/C++' compilers have the "volatile" keyword, which informs the compiler that accesses to the variable must not be optimized away.
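As an illustrative fragment (the names are mine, not from the project), this is the classic shape of the problem volatile solves:

```c
#include <stdint.h>
#include <stdbool.h>

/* Flag shared between an interrupt handler and the background loop.
 * "volatile" tells the compiler the value can change outside the normal
 * flow of the background code, so the test below must not be optimized
 * away or cached in a register. */
static volatile bool data_ready = false;

/* Called from the interrupt handler when a transfer completes. */
void transfer_complete_isr(void)
{
    data_ready = true;
}

/* Polled from the background loop; returns true once per event. */
bool poll_data_ready(void)
{
    if (data_ready) {
        data_ready = false;   /* consume the event */
        return true;
    }
    return false;
}
```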
For a good discussion of the volatile keyword see this article.
A second potential issue is the reading or writing of a variable being interrupted mid-way through by a function that also modifies that variable. When a microcontroller needs to modify a memory-based variable it performs a read-modify-write sequence. This sequence may take several instructions, and an interrupt may occur at any point during it. Should the interrupt function modify the variable, the value partially read or written by the background function may be corrupted. To avoid such non-atomic operations, the background function should temporarily disable any interrupt that may affect the variable being manipulated, restoring the interrupt state once the operation is complete.
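A sketch of that pattern, with hypothetical mask/unmask functions standing in for the real NVIC calls on the target:

```c
#include <stdint.h>

/* Hypothetical interrupt mask hooks; on a Cortex-M these might map to
 * NVIC_DisableIRQ()/NVIC_EnableIRQ() for the peripheral's interrupt. */
static volatile int irq_masked = 0;
static void mask_wifi_irq(void)   { irq_masked = 1; }
static void unmask_wifi_irq(void) { irq_masked = 0; }

/* Shared state: total bytes waiting in the receive buffer. Incremented
 * by the ISR, decremented by the background task. */
static volatile uint32_t buffered_count = 0;

/* Background task: account for n consumed bytes. The decrement is a
 * read-modify-write, so the interrupt is held off for its duration. */
void consume_bytes(uint32_t n)
{
    mask_wifi_irq();        /* make the update atomic w.r.t. the ISR */
    buffered_count -= n;
    unmask_wifi_irq();
}
```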