[3mdeb blog]

Thoughts dereferenced from scratchpad noise

How to handle a DHT22 sensor using ARM mbed OS?


Recently I encountered the task of measuring temperature and humidity with a DHT22 sensor. I was developing driver source code in the ARM mbed OS SDK, specifically on the STM32 NUCLEO L432KC platform. A thorough analysis of the DHT22 documentation led me to the following questions:

  • Is it possible to accurately measure voltage-level durations during the read process?
  • What duration values should be considered a timeout and/or an error?
  • Should I relax the time restrictions so that random delays in voltage level transitions are not treated as failures?

For ARM mbed OS documentation please refer to mbed API documentation

Configuration

Let’s start with a little bit of configuration and statistics.

The STM32 NUCLEO L432KC is clocked at up to 80 MHz, which gives a period of 12.5 nanoseconds, so 80 periods sum to 1 microsecond.

The 1-Wire pin is configured as a DigitalInOut; it has to operate in both directions because of the communication protocol defined in the DHT22 datasheet. A timer is also enabled to measure voltage level durations.

The DHT22 sensor is connected as proposed in section 5 of the DHT22 documentation, but I used a 4.7 kOhm pull-up resistor between the data line and VDD, because the 10 kOhm resistor was producing too much noise. I also added a 100 nF capacitor between GND and VDD for filtering.

Read process

Each read operation can be divided into 2 main steps:

  1. Host start signal and sensor response
  2. Pure data transfer

Start signal and sensor response

Initially the data line should be in the high state (high voltage), in this particular case 3.3V. A high state on the data line is considered idle. To begin a transmission the host must pull the data line down for at least 1 millisecond; this is called a start signal. Then the host should pull it up and wait for the sensor response. The response should arrive after 20-40 microseconds.

The procedure described above can be carried out like this:

/* Define the data line pin and a timer first */
DigitalInOut dht_data(DATA_PIN);
Timer timer;

dht_data.output();    // set the pin to output mode
dht_data.write(0);    // start signal: pull the line down
wait_ms(2);           // for at least 1 ms (2 ms gives some margin)
timer.reset();
dht_data.write(1);    // release the line back to the high state
dht_data.input();     // switch to input mode so the sensor can drive the line
timer.start();

Important: notice that the timer is reset to 0 before the line is pulled up, and only then started.

Now it is time for the sensor's response. After 20-40 microseconds the sensor should pull the line down for 80 microseconds and then pull it up again for 80 microseconds. To detect this, a do-while loop can be used:

do {
    n = timer.read_us();
    if(n > TIMEOUT) {
        timer.stop();
        return DHT_RESPONSE_TIMEOUT;
    }
    // measure the voltage level duration as long 
    // as data line's state does not change
} while(dht_data.read() == 1);

// reset the timer as soon as data line changes state
// to ensure continuity and validity of voltage level measurement 
timer.reset();
// check
if((n < 20) || (n > 40)) {
    timer.stop();
    return DHT_RESPONSE_ERROR;
}

do {
    n = timer.read_us();
    if(n > TIMEOUT) {
        timer.stop();
        return DHT_RESPONSE_TIMEOUT;
    }
} while(dht_data.read() == 0);

timer.reset();
if(n != 80) {
    timer.stop();
    return DHT_RESPONSE_ERROR;
}

do {
    n = timer.read_us();
    if(n > TIMEOUT) {
        timer.stop();
        return DHT_RESPONSE_TIMEOUT;
    }
} while(dht_data.read() == 1);

timer.reset();
if(n != 80) {
    timer.stop();
    return DHT_RESPONSE_ERROR;
}

At this point we can deliberate about the TIMEOUT value and the time restrictions provided in the if expressions. 100 microseconds seems to be a reasonable value for TIMEOUT, because no voltage level duration longer than 80 microseconds is defined by the protocol.
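To make the constants explicit, here is a minimal sketch of the definitions used in the snippets above (the error code values are only illustrative, pick whatever fits your driver):

#define TIMEOUT               100  // microseconds, no protocol phase is longer than 80 us
#define DHT_RESPONSE_TIMEOUT  -1   // illustrative error codes
#define DHT_RESPONSE_ERROR    -2
#define DHT_READ_BIT_TIMEOUT  -3
#define DHT_READ_BIT_ERROR    -4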

Running this code will almost certainly lead to returning DHT_RESPONSE_ERROR. Why? The time restrictions provided in the if expressions are too strict. Tests I conducted showed that the timer does not read the same number of elapsed microseconds each time I run this code. The values fluctuated between 70 and even 90 microseconds. This dispersion is unacceptable considering the 12.5 nanosecond clock period of the STM32 NUCLEO L432KC. It inspired me to investigate the hardware layer for possible faults. I used a logic analyzer to monitor the sensor's data line. The result turned out to be a little surprising. The data line waveform I captured is shown below. The sampling frequency was set to 12 MHz.

The sensor pulled the line down after ca. 22 microseconds, which is appropriate. But then the voltage level durations differ slightly; they are about 1.5 microseconds away from 80. In a few cases I also observed a 90 microsecond long low voltage level. To confirm the reliability of this measurement I additionally connected the data line to an oscilloscope. The results were the same as on the logic analyzer.

These measurements were taken with two different wiring lengths. With the shorter wiring, voltage level durations were much more repeatable and closer to the sensor's read protocol. So the time restrictions should be provided as follows:

if((n < 70) || (n > 100)) {
    timer.stop();
    return DHT_RESPONSE_ERROR;
}

Data transfer

After sending its response, the sensor transmits 40 bits of data containing the measured temperature, humidity and a checksum. Each bit transfer begins with a 50 microsecond long low voltage level. Then the data line goes high for either 26-28 microseconds or 70 microseconds. The duration of the high voltage level determines the bit value:

  • 26-28 microseconds – logic ‘0’
  • 70 microseconds – logic ‘1’

The most significant bit goes first. Reading and storing the entire data can be done like this:

for(int i = 0; i < N_BYTES; i++) {
    for (int b = 0; b < N_BITS; b++) {
        do {
            n = timer.read_us();
            if(n > TIMEOUT) {
                timer.stop();
                return DHT_READ_BIT_TIMEOUT;
            }
        } while(dht_data.read() == 0);

        timer.reset();
        if(n != 50) {
            timer.stop();
            return DHT_READ_BIT_ERROR;
        }

        do {
            n = timer.read_us();
            if(n > TIMEOUT) {
                timer.stop();
                return DHT_READ_BIT_TIMEOUT;
            }
        } while(dht_data.read() == 1);
        timer.reset();

        if((n >= 26) && (n <= 28)) {
            /* Received '0' */
            buffer[i] <<= 1;
        } else if (n == 70) {
            /* Received '1' */
            buffer[i] = ((buffer[i] << 1) | 1);
        }
    }
}

As expected, the time restrictions provided in the if expressions lead to returning DHT_READ_BIT_ERROR. The reason is the same as mentioned previously in Start signal and sensor response. Checking the data line waveform leads to the following results:

The picture above shows a fragment of the data bit transfer. The voltage level durations clearly go beyond the acceptable scope. For example, the 64.42 and 73.58 microsecond long voltage level durations correspond to a logic ‘1’ sent by the sensor, where they should be 50 and 70 microseconds. The next bit sequence is 53.92 and 73.58 microseconds, which is a little more accurate, but still far from the sensor's specification. Only the logic ‘0’ high voltage duration lasts as long as it should – 26 microseconds. To ensure reading all bytes without returning an error, I adjusted the time restrictions in the if expressions with the following values:

if((n < 45) || (n > 70)) {
    timer.stop();
    return DHT_READ_BIT_ERROR;
}

// ...

if((n > 15) && (n < 35)) {
    /* Received '0' */
    buffer[i] <<= 1;
} else if ((n > 65) && (n < 80)) {
    /* Received '1' */
    buffer[i] = ((buffer[i] << 1) | 1);
}

I have also accounted for possible timer inaccuracy to ensure the transmission completes without returning an error.

After these changes I was finally able to gather correct measurements repeatedly without failure. To end the transmission, the data line should be pulled up by the host to leave it in the idle state.

timer.stop();
dht_data.output(); // switch back to output mode
dht_data.write(1);
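
Once all 40 bits are stored, the payload still has to be validated and converted. The DHT22 sends 2 bytes of humidity, 2 bytes of temperature (with the sign in the most significant bit) and 1 checksum byte equal to the low byte of the sum of the first four bytes. A minimal sketch of this step, assuming the buffer used above (DHT_CHECKSUM_ERROR is an illustrative error code):

// verify the checksum: low byte of the sum of the first 4 bytes
uint8_t sum = buffer[0] + buffer[1] + buffer[2] + buffer[3];
if(sum != buffer[4]) {
    return DHT_CHECKSUM_ERROR;
}

// humidity and temperature are transmitted multiplied by 10
float humidity = ((buffer[0] << 8) | buffer[1]) / 10.0f;
float temperature = (((buffer[2] & 0x7F) << 8) | buffer[3]) / 10.0f;
if(buffer[2] & 0x80) {
    temperature = -temperature;  // MSB of the temperature field marks a negative value
}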

It is also necessary to implement an interval mechanism to prevent polling the sensor more often than once every 2 seconds.
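A minimal sketch of such a guard, assuming the whole read procedure described above is wrapped in a function called dht_read() (all names here are illustrative):

Timer poll_interval;

int dht_read_guarded() {
    // the DHT22 must not be polled more often than once every 2 seconds
    if(poll_interval.read_ms() > 0 && poll_interval.read_ms() < 2000) {
        return DHT_NOT_READY;  // illustrative error code
    }
    poll_interval.reset();
    poll_interval.start();
    return dht_read();
}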

Summary

The above steps gave me the ability to take temperature and humidity measurements with the DHT22. This particular sensor has its pros as well as cons.

Its advantage is certainly the simplicity of the hardware design. Only a pull-up resistor, a filtering capacitor and 3 wire connections are needed.

Its disadvantage is the custom-designed 1-Wire bus communication. It prevents the usage of the common Maxim/Dallas 1-Wire bus standard and forces the developer to implement a software driver for handling data reads. Although that is not very difficult, many other problems can occur. As I showed, timing issues are a source of accidental misinterpretation of data and developer confusion.

It is recommended to use wiring as short as possible. The tests I conducted showed that longer wires have a significant influence on the data line waveform and voltage transition timings. The reason for the STM32 L432KC timer inaccuracy still remains unknown to me. Despite the very high clock frequency (80 MHz), the counted microseconds were different each time I was debugging the code.

Taking into consideration the relatively low price and fairly high popularity, the DHT22 is undoubtedly a good choice, but I have found many versions of its documentation, which made me a little confused. A standardized one should be provided to eliminate different approaches to handling this sensor.

A better datasheet with acceptable timing deviations ought to be published to avoid the problems mentioned above. It took me a significant amount of time to investigate these issues. I hope that this article will help other developers implementing DHT22 sensors in their projects.

We are always open to discussion about issues you have faced during embedded system development, so please do not hesitate to leave a comment below. The DHT22 is just one case where the do-not-trust-the-datasheet and read-between-the-lines rules apply. Malfunctions and unexpected behaviour are bread and butter for us. A small problem in a prototype may be huge on the mass market. If you think we provide valuable information please share this post, and if you are interested in commercial-level support we are always open to new challenges. There are many ways to contact us, but the easiest would be to drop us an email at contact<at>3mdeb<dot>com.

How to use Ansible via Python



Ansible is designed around the way people work and the way people work together

What is Ansible

Ansible is a simple IT automation engine, designed to manage many systems rather than just one at a time. Ansible automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT operations. It is easy to deploy because it uses no agents and no additional custom security infrastructure. We can define our own configuration management in the simple YAML language, in a file called an ansible-playbook. YAML is easier for humans to read and write than other common data formats like XML or JSON. Furthermore, most programming languages have libraries for working with YAML.

Inventory

Ansible works on many systems in your infrastructure in parallel, so it is important to specify a roster of hosts. This list is called the inventory and can be in one of many formats. For this example, the format is INI-like and it is saved in /etc/ansible/hosts.

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

Ansible Playbook

A playbook is the way to direct systems; it is particularly powerful, compact, and easy to read and write. As we said, multi-system configuration management is formatted in the YAML language. An Ansible playbook consists of plays, which contain the hosts we would like to manage and the tasks we want to perform.

---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

Ansible for Embedded Linux

Note: This paragraph is relevant to the Yocto build system

You may need to build an image of a custom Linux-based embedded system that includes Ansible, using Yocto – a complete development environment with tools, metadata and documentation. In addition, we would like to run ansible-playbook via Python. It may seem hard to implement, but nothing could be simpler! You just need to add a recipe with Ansible from the Python Ansible package to the image.
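As a rough illustration (the exact recipe name depends on the layers available in your build, e.g. meta-openembedded, so treat this as an assumption rather than a verified recipe name), the image recipe could simply pull the package in:

IMAGE_INSTALL_append = " python-ansible"

After rebuilding the image, ansible and ansible-playbook are available on the target and can be driven from Python as described later in this post.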

Additional information

For more information go to Ansible Documentation

Python API

The Python API is very powerful: we can manage and run ansible-playbook from the Python level, control nodes, write various plugins, extend Ansible to respond to various Python events, and plug in inventory data from external data sources.

Note: There is a common structure for a Python program which runs Ansible commands:

First of all we have to import some modules needed to run Ansible in Python.

  • Let’s describe some of them:

    • the json module to convert output to JSON format
    • ansible modules to manage e.g. the inventory or plays
    • TaskQueueManager is responsible for loading the play strategy plugin, which dispatches the Play’s tasks to hosts
    • the CallbackBase module – the base Ansible callback class, which does nothing here, but new callbacks that inherit from the CallbackBase class and override its methods can execute custom actions
import json
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
  • ResultCallback, which inherits from CallbackBase and manages the output of Ansible. We can create and modify our own methods to regulate the behaviour of Ansible in the Python controller.
class ResultCallback(CallbackBase):

  def v2_runner_on_ok(self, result, **kwargs):
    host = result._host
    print json.dumps({host.name: result._result}, indent=4)

Note: we can override more methods. The full specification can be found in CallbackBase

  • The next step is to initialize the needed objects. The Options namedtuple replaces the Ansible OptParser. Since we’re not calling Ansible via the CLI, we need something to provide the options.
Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'become', 'become_method', 'become_user', 'check'])
variable_manager = VariableManager()
loader = DataLoader()
options = Options(connection='local', module_path='/path/to/mymodules', forks=100, become=None, become_method=None, become_user=None, check=False)
passwords = dict(vault_pass='secret')
  • Instantiate our ResultCallback for handling results as they come in
results_callback = ResultCallback()
  • Then the script creates a VariableManager object, which is responsible for adding in all variables from the various sources and keeping variable precedence consistent. Then create a play with tasks – the basic jobs we want Ansible to handle.
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='localhost')
variable_manager.set_inventory(inventory)
play_source =  dict(
        name = "Ansible Play",
        hosts = 'localhost',
        gather_facts = 'no',
        tasks = [
            dict(action=dict(module='shell', args='ls'), register='shell_out'),
            dict(action=dict(module='debug', args=dict(msg='')))
         ]
    )
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
  • Finally, actually run it, using the TaskQueueManager object to collect the needed data and execute the play. The actual execution is in its run method, so we can call it when we need to. This should run your tasks against your hosts! It will still output the usual data to stderr/stdout.
tqm = None
try:
    tqm = TaskQueueManager(
              inventory=inventory,
              variable_manager=variable_manager,
              loader=loader,
              options=options,
              passwords=passwords,
              stdout_callback=results_callback,  # Use our custom callback instead of the ``default`` callback plugin
          )
    result = tqm.run(play)
finally:
    if tqm is not None:
        tqm.cleanup()

Conclusion

Ansible delivers IT automation that ends repetitive tasks and frees up DevOps teams for more strategic work. Managing Ansible via the Python API is easy, and it can be applied to manage configuration on many systems at a time, using only a simple Python program.

Summary

We hope you enjoyed this post. If you have any comments please leave them below, and if you think this post provides valuable information please share it with interested parties.

We are always open to leveraging Ansible and Python in IoT and embedded environments. If you have a project that can benefit from this kind of IT automation, do not hesitate to drop us an email at contact<at>3mdeb.com.

SWUpdate for feature-rich IoT applications


When you work with embedded systems long enough, sooner or later you realize that some sort of update mechanism is required. This is especially true when more complex systems, running with an operating system, are taken into account. Nowadays Linux is increasingly being picked as the operating system for embedded IoT devices. In the following post we will focus on those in particular.

In fact, from my experience an update mechanism is a vital part of many embedded applications. When a project is meant to be maintained in the long run, it is one of the first features being developed.

Update IoT device vs update on desktop

On standard Linux machines updates are generally performed using one of the package managers. This approach may seem tempting, but for embedded devices it usually leads to more issues than advantages. When the number of possible packages reaches hundreds or thousands, it becomes impossible to test application stability with various revisions of those packages. The approach where we release one thoroughly tested rootfs image is both more reliable and less time consuming in the long term.

Our vision of update system

In most of our projects where software is concerned, we are heading towards the double copy approach. The main idea is to have two separate rootfs partitions, which always leaves us with at least one copy of correct software. The core of the developed update systems is usually similar to the one presented on the graph below.

What is SWUpdate?

SWUpdate is an application designed for updating embedded Linux devices. It is strongly focused on the reliability of each update. Every update should be consistent and atomic. A major goal is to make it completely power-cut safe. A power-off in any phase of an update should not brick the device, and we should always end up with a fully functional system.

Purpose of this post

My goal is not to rewrite the SWUpdate documentation here. Instead, I plan to point out its interesting features and present the way it is being used at 3mdeb. This is why I will often leave a link to the related chapter in the SWUpdate documentation for more information.

In the end I will give a short example of an implementation of such an update system used at 3mdeb.

SWUpdate example features

SWU image

A *.swu image is a cpio container which holds all files needed during the update process (images, scripts, single files and so on). In addition it requires a sw-description file to be present. This file describes the .swu image content and allows planning various update scenarios by setting appropriate flags in each section.

Software collections

SWUpdate supports the dual image approach by providing software collections in the sw-description file. A simple collection can be written as:

software =
{
        version = "1.0.0";
        stable:
        {
                mmcblk0p2:
                {
                        images: (
                        {
                                filename = "example-rootfs-image.ext4";
                                device = "/dev/mmcblk0p2";
                        }
                        );
                };
                mmcblk0p3:
                {
                        images: (
                        {
                                filename = "example-rootfs-image.ext4";
                                device = "/dev/mmcblk0p3";
                        }
                        );
                };
        };
}

As you can see above, there are two software modes to choose from:

  • stable,mmcblk0p2 will install the rootfs image into the /dev/mmcblk0p2 partition
  • stable,mmcblk0p3 will install the rootfs image into the /dev/mmcblk0p3 partition

The selection of a given mode is made using the -e command line switch, e.g.:

swupdate -e "stable,mmcblk0p2" -i example.swu-image.swu

In the double copy approach we are using software collections mainly to point to the target partition on which the update will be performed. The file (image) name is usually the same in both.

Hardware compatibility

It can be used to exclude the risk of installing software on the wrong platform. sw-description should contain a list of compatible hardware revisions:

hardware-compatibility = [ "1.0.1", "1.0.0", "1.0.2" ];

The hardware revision is saved in a file (by default /etc/hwrevision) using the following format:

board_name board_revision

When I last checked this, only the board_revision string was taken into account when checking for image compatibility. So in these terms, the boards board1 revA and board2 revA would be compatible.

The first string (board_name) was only used for board-specific settings.

As for the hwrevision file – when using Yocto, I usually ship it through a swupdate bbappend file, specific for each target machine.
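For illustration, a hypothetical /etc/hwrevision matching the compatibility list used later in this post could contain:

Hummingboard som-v1.5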

Image downloading

The basic usage of SWUpdate involves executing it from the command line, passing several arguments. In this scenario the image can either be downloaded from a given URL, or obtained from a local file shipped on a USB stick for example. For obvious reasons, in the case of multiple IoT devices we are rather interested in downloading images.

Download support is provided by the curl library. The current SWUpdate implementation supports fetching over http(s) or ftp. However curl supports many other protocols. In fact, at the moment we are using SWUpdate to fetch from an sftp server with no source code modification. In this case, the private key (id_dsa) must be located in the $HOME/.ssh path as explained in the curl documentation regarding CURLOPT_SSH_PUBLIC_KEYFILE. This behavior could be documented in the SWUpdate documentation, or even another command line parameter could be added for the key location. This could be in scope of further contribution to the project.

To download an image from a given URL, the following command line parameters should be passed:

swupdate -d "-u http://example.com/mysoftware.swu"

Note that there was a syntax change a while ago. In previous releases (for example in the one present in the Yocto krogoth release, which is still in use) it was just: swupdate -d http://example.com/mysoftware.swu

Compressed images

One of the concerns while using the whole rootfs image update approach may be the size of a single update image. SWUpdate offers handling of gzip compressed images as well. From my experience, the size of such compressed images is not greater than 50 – 100 MB, depending on the complexity of the given application. With today's network speed that is not much, as long as there are no serious connection restrictions. When delivering a compressed image, the compressed flag must be set in the corresponding sw-description section. It may look like below:

images: (
{
        filename = "rootfs-image-name.img.gz";
        device = "/dev/sda3";
        compressed = TRUE;
}
);

I always use this feature, as it drastically decreases the update image size. The thing to remember is that you need to compress the rootfs image itself (not the whole SWU image). Also, it requires gz compression, so use the gzip application.
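For example, assuming the build produced example-rootfs-image.ext4, it could be compressed like this before packing it into the SWU container:

gzip -9 -k example-rootfs-image.ext4
# produces example-rootfs-image.ext4.gz, the name referenced in sw-description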

Streaming images

SWUpdate offers a streaming feature that allows streaming the downloaded image directly onto the second partition, without a temporary copy in /tmp. This might be especially desired when the amount of RAM is not enough to store the whole rootfs image. It can be enabled by setting the installed-directly flag in the given image section. In this case it would look like this:

images: (
{
        filename = "rootfs-image-name.img.gz";
        device = "/dev/sda3";
        compressed = TRUE;
        installed-directly = TRUE;
}
);

By default, a temporary copy is made by SWUpdate to check for image correctness. I feel that with the dual copy approach it is not really necessary, as if anything goes wrong we are always left with working software and ready to perform the update again. This is why we tend to use this feature pretty often.

GRUB support

When developing an application for an embedded system there can be a problem with not having enough hardware platforms for testing. Testing on the host can also be faster and more efficient. When using VirtualBox, even the update system can be tested. The issue was that it usually uses GRUB as a bootloader, and SWUpdate supported U-Boot only. With little effort we managed to add basic support for GRUB environment updates to the SWUpdate project, and this feature has recently been upstreamed.

Complete example

I will try to present an example setup that allows experiencing the mentioned SWUpdate features. It can be a base (and usually it is in our projects) for an actual update system.

The example below fits any embedded device running Linux with U-Boot as the bootloader. In my case a Hummingboard from SolidRun was used.

Rootfs image

Of course you need a rootfs image to perform an update with. It can be prepared in many ways. For test purposes, you can even use the dd command to obtain a raw image from an SD card. An example command would be:

sudo dd if=/dev/mmcblk0 of=rootfs-image.img bs=16M

However, the preferred method would be to use the Yocto build system. Along with meta-swupdate it allows for automated building of the rootfs image, as well as the .swu container image, in one run. In this case, the krogoth revision of Yocto was used.

U-Boot boot script

In the dual image approach the goal is to pass information to the bootloader after an update has finished successfully. In the case of U-Boot we can tell it which partition to use as rootfs when booting. With the script below we will boot into the newly updated software once.

# if fback is not defined yet, boot from partition 2 as default
if test x${fback} != x; then
    boot_part=${fback}
else
    boot_part=2
fi

# Boot once into new system after update
if test x${next_entry} != x; then
    boot_part=${next_entry}
    setenv next_entry
fi

saveenv

setenv bootargs "console=ttymxc0,115200n8 rootfstype=ext4 rootwait panic=10 root=/dev/mmcblk0p${boot_part}"
ext4load mmc 0:${boot_part} 0x13000000 boot/${fdtfile}
ext4load mmc 0:${boot_part} 0x10800000 boot/zImage
bootz 0x10800000 - 0x13000000

When booted into the newly updated partition, some sort of sanity checks can be made. If they pass, the new software is marked as default by setting the fback environment variable to point to this partition. We can modify the bootloader environment using a SWU image containing just the sw-description file. Below is an example of such a file:

software =
{
        version = "1.0.0";

        hardware-compatibility = [ "Hummingboard", "som-v1.5" ];
        confirm =
        {
                mmcblk0p3:
                {
                        uboot: (
                        {
                                name = "fback";
                                value = "3";
                        }
                        );
                };
                mmcblk0p2:
                {
                        uboot: (
                        {
                                name = "fback";
                                value = "2";
                        }
                        );
                };
        }
}

Prepare sw-description file

Below is an example sw-description file including features mentioned above:

software =
{
        version = "1.0.0";
        hardware-compatibility = [ "Hummingboard", "som-v1.5" ];
        stable:
        {
                mmcblk0p2:
                {
                        images: (
                        {
                                filename = "example-rootfs-image.ext4.gz";
                                device = "/dev/mmcblk0p2";
                                installed-directly = TRUE;
                                compressed = TRUE;
                        }
                        );
                        uboot: (
                        {
                                name = "next_entry";
                                value = "2";
                        },
                        {
                                name = "fback";
                                value = "3";
                        }
                        );
                };
                mmcblk0p3:
                {
                        images: (
                        {
                                filename = "example-rootfs-image.ext4.gz";
                                device = "/dev/mmcblk0p3";
                                installed-directly = TRUE;
                                compressed = TRUE;
                        }
                        );
                        uboot: (
                        {
                                name = "next_entry";
                                value = "3";
                        },
                        {
                                name = "fback";
                                value = "2";
                        }
                        );
                };
        };
}

Creation of SWU image

Yocto based

  • Follow the setup from the building with Yocto section
  • Create a recipe for the SWU image, e.g. recipes-extended/images/test-swu-image.bb. It could be based on the bbb-swu-image recipe from the meta-swupdate repository. The sw-description file should end up in the images/test-swu-image directory. If other files (such as scripts) should also be part of the compound SWU image, they should go there as well. Assuming that hummingboard is our machine name in Yocto, such a recipe could look like below:
# Copyright (C) 2015 Unknown User <unknow@user.org>
# Released under the MIT license (see COPYING.MIT for the terms)

DESCRIPTION = "Example Compound image for Hummingboard"
SECTION = ""

# Note: sw-description is mandatory
SRC_URI_hummingboard= "file://sw-description \
           "
inherit swupdate

LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=4d92cd373abda3937c2bc47fbc49d690 \
                    file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"

# IMAGE_DEPENDS: list of Yocto images that contains a root filesystem
# it will be ensured they are built before creating swupdate image
IMAGE_DEPENDS = ""

# SWUPDATE_IMAGES: list of images that will be part of the compound image
# the list can have any binaries - images must be in the DEPLOY directory
SWUPDATE_IMAGES = " \
                core-image-full-cmdline \
                "

# Images can have multiple formats - define which image must be
# taken to be put in the compound image
SWUPDATE_IMAGES_FSTYPES[core-image-full-cmdline] = ".ext4"

COMPATIBLE = "hummingboard"
  • The SWU image can be built with the following command:
bitbake test-swu-image

It can be found in the standard directory for built images:

tmp/deploy/images/${MACHINE}.

Manual

Refer to the building a single image section of the SWUpdate documentation.
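In short, the documented approach boils down to packing the files with cpio in the crc format, with sw-description as the first entry. A minimal sketch (the file names are just examples):

FILES="sw-description example-rootfs-image.ext4.gz"
for i in $FILES; do
        echo $i
done | cpio -ov -H crc > test-swu-image.swu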

Perform update

Assuming the SWU image is already uploaded and the current partition is /dev/mmcblk0p2:

swupdate -d http://example.com/mysoftware.swu -e "stable,mmcblk0p3"

Conclusion

I have only briefly described the features that we commonly use. These are of course not all that are available; there are definitely more worth mentioning. You can find out more in the list of supported features.

SWUpdate provides a really powerful and reliable update mechanism. Its job is only to download and reliably perform an update according to the metadata written into the sw-description file. The rest, such as picking the right software from a collection, getting the current and available partitions, preparing bootloader scripts etc., is up to the user. It may be overwhelming at first, but it is the reason why SWUpdate can be so flexible. You can pick features from the (still growing) list to design an update system that is perfect for your needs. SWUpdate will only assure that it is safe and reliable.

Summary

We hope that the content of this blog post was entertaining and useful for you. If you have any comments or questions do not hesitate to drop us a note below. If you feel this blog post contains something useful please share it with others. At 3mdeb we are always ready to give you professional support. Just let us know by sending an email to contact@3mdeb.com.

Installing OpenWRT on APU3 platform


This guide should be considered a simple walk-through for using the APU3 platform in some generic use-cases. I’m trying to explain how to work with the device and use it in a generic manner. There is a part about the coreboot firmware, which could be used as a reference for how to start customizing it for your own purposes.

Configuring the hardware

At first, let’s figure out some basic requirements for our new device:

  1. It will be a wireless router with some advanced functionality provided by OpenWRT.
  2. In order for it to be wireless, we need to add WiFi network adapters.
  3. I want it to support dual-band simultaneous connections, so we will need 2 separate WiFi adapters.
  4. The operating system will be placed on a µSD card.
  5. There will be additional storage in the form of an mSATA disk.

The APU3 has 3 mPCIe slots. Unfortunately it supports PCI Express only on slot mPCIe 1, so the WiFi card has to use it. For the second WiFi card, we could use the mPCIe 2 slot, but we would need a USB-only type, which is rare. Instead I’m using a cheap Ralink RT5370 based USB dongle WiFi adapter. The mPCIe 3 slot will be used for the mSATA drive.

For the OS drive, I’ll use a generic µSD card with an adapter.

The mPCIe 2 slot could be used in the future for a GSM modem or some other kind of USB device in the form of an mPCIe card.

Getting the sources

We will use the latest stable version, which is Chaos Calmer, in order to be compatible with upstream packages. Thanks to that we can just use opkg to download new versions of packages from the main OpenWRT repositories.

The sources we need are located on github.

Let’s clone the needed version:

$ git clone -b chaos_calmer https://github.com/openwrt/openwrt.git
Cloning into 'openwrt'...
remote: Counting objects: 360802, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 360802 (delta 4), reused 2 (delta 2), pack-reused 360795
Receiving objects: 100% (360802/360802), 132.94 MiB | 8.24 MiB/s, done.
Resolving deltas: 100% (241401/241401), done.

Building

To build our first image we first need to configure OpenWRT:

$ cd openwrt
$ make menuconfig

Our target is the APU system, which has an AMD x86_64 CPU. So let's use generic settings:

  • Target System > x86
  • Subtarget > x86_64

… and then Exit and make.

After compilation our image is in the bin/x86 dir. We need an SD card to burn the image and boot the system on the target platform. On my host system, the card is present under the device file /dev/sde.

Warning! Carefully check which device the card appears as on your system. This is a potentially dangerous operation and can lead to data loss when the wrong device is used!

cd bin/x86
sudo dd if=openwrt-x86-64-combined-ext4.img of=/dev/sde bs=4M

First boot

The default username after first boot is root, with no password. A password should be set using passwd.

To perform the first boot we need some kind of serial adapter (USB to RS232) and a null-modem cable. There is an RS232 port on the back of the APU board. We need to connect there.

To make the connection, I’m using screen, but another tool could be used (e.g. minicom). The default parameters for the COM port are 115200 8N1. This is the command I’m using:

screen /dev/ttyUSB0 115200

Immediately after powering the device, the coreboot welcome string should be seen and one can enter a simple boot menu. The default configuration should be OK and the SD card will have priority over other devices (this can be changed).

The first OpenWRT boot will most probably hang on this string:

...
[    2.424534] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
[    2.437154] 8021q: 802.1Q VLAN Support v1.8
[    2.441432] NET: Registered protocol family 40
[    2.447418] rtc_cmos 00:01: setting system clock to 2016-07-25 00:04:49 UTC (1469405089)
[    2.455798] Waiting for root device PARTUUID=6c097903-02...
[    2.659998] usb 3-1: new high-speed USB device number 2 using ehci-pci
[    2.666595] usb 4-1: new high-speed USB device number 2 using ehci-pci
[    2.820863] hub 3-1:1.0: USB hub found
[    2.824725] hub 4-1:1.0: USB hub found
[    2.828501] hub 3-1:1.0: 4 ports detected
[    2.832586] hub 4-1:1.0: 4 ports detected
[    2.950313] Switched to clocksource tsc

The problem lies here: [ 2.455798] Waiting for root device PARTUUID=6c097903-02...

SDHCI controller issue

After a short investigation it appears that we don't have support for the SDHCI controller on the APU board, so we need to enable it. We need to modify the kernel configuration, so we use this command:

make kernel_menuconfig

In the config we need to select those drivers:

  • Device Drivers > MMC/SD/SDIO card support:
    • MMC block device driver
    • Secure Digital Host Controller Interface support
    • SDHCI support on PCI bus

Now the system should boot without problems.

Network problems

After booting the system it appears we don't have any connectivity (neither ethernet nor WiFi). When trying ifconfig -a we can see only the lo interface.

Let’s install some additional packages, which should help us investigate:

  • Base system > busybox > Customize busybox options > Linux System Utilities:
    • lspci
    • lsusb
  • Base system > wireless-tools

When the image is built and the system is booted on the target, we can use lspci -k to check which devices have kernel modules assigned to them and which don't. This lspci flavour is pretty poor compared to the usual one supplied with main Linux distributions, but it should be enough for our needs.

Among others, we can find these devices (VID:DID), which look interesting:

01:00.0 Class 0200: 8086:1539
02:00.0 Class 0200: 8086:1539
03:00.0 Class 0200: 8086:1539
04:00.0 Class 0280: 168c:003c

According to this page we’re looking for these devices:

  • 8086:1539 – this is Intel Ethernet controller (I211 Gigabit Network Connection)
  • 168c:003c – this is Atheros QCA986x/988x 802.11ac Wireless Network Adapter

We need to find drivers for those. It seems that Intel is using the CONFIG_IGB kernel option for its driver. The module for the Atheros card is in OpenWRT. Let's deal with the ethernet controllers first:

make kernel_menuconfig

We need to mark this driver:

  • Device Drivers > Network device support > Ethernet driver support:
    • Intel® 82575/82576 PCI-Express Gigabit Ethernet support

As for the rest:

make menuconfig

First let’s mark the driver for our wireless card:

  • Kernel modules > Wireless Drivers:
    • kmod-ath10k

And also some packages we’ll need to set up the access point:

  • Network:
    • hostapd
    • wpa_supplicant

Unfortunately, during my build I got an error. After rerunning make V=s it appeared that the kernel didn't get the full configuration it wants. I managed to get past this problem by checking this option in make kernel_menuconfig:

  • Power management and ACPI options:
    • ACPI (Advanced Configuration and Power Interface) Support

After a successful build and boot, I got:

root@OpenWrt:/# ifconfig -a
br-lan    Link encap:Ethernet  HWaddr 00:0D:B9:44:11:B8
          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fd0e:a001:d70e::1/60 Scope:Global
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 00:0D:B9:44:11:B8
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Memory:f7900000-f791ffff

eth1      Link encap:Ethernet  HWaddr 00:0D:B9:44:11:B9
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Memory:f7a00000-f7a1ffff

eth2      Link encap:Ethernet  HWaddr 00:0D:B9:44:11:BA
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Memory:f7b00000-f7b1ffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:240 errors:0 dropped:0 overruns:0 frame:0
          TX packets:240 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16320 (15.9 KiB)  TX bytes:16320 (15.9 KiB)

wlan0     Link encap:Ethernet  HWaddr 04:F0:21:1B:5E:28
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

So I’ve got the network.

Basic configuration

It’s a good idea to set the password using the passwd utility. Thanks to this we can log in over SSH.

If you want to add your public key to the authorized_keys file, which is usually placed under the .ssh dir in the user's home dir, it has to be placed in the /etc/dropbear/ dir instead or it will be ignored. Make sure that the permissions of the file are also set right (600).
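For example, assuming the router uses the address set below and your public key is ~/.ssh/id_rsa.pub on the host, it could be done like this:

scp ~/.ssh/id_rsa.pub root@192.168.1.254:/tmp/
ssh root@192.168.1.254
cat /tmp/id_rsa.pub >> /etc/dropbear/authorized_keys
chmod 600 /etc/dropbear/authorized_keys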

We can also set a different IP address for the ethernet connection, if our host computer already occupies it or is using another mask. In my case, my host computer has a static address, which happens to be the same as the default one in OpenWRT.

Here’s a short example of how to change it:

root@OpenWrt:/# uci show network
network.loopback=interface
network.loopback.ifname='lo'
network.loopback.proto='static'
network.loopback.ipaddr='127.0.0.1'
network.loopback.netmask='255.0.0.0'
network.lan=interface
network.lan.ifname='eth0'
network.lan.type='bridge'
network.lan.proto='static'
network.lan.ipaddr='192.168.1.1'
network.lan.netmask='255.255.255.0'
network.lan.ip6assign='60'
network.wan=interface
network.wan.ifname='eth1'
network.wan.proto='dhcp'
network.wan6=interface
network.wan6.ifname='eth1'
network.wan6.proto='dhcpv6'
network.globals=globals
network.globals.ula_prefix='fd0e:a001:d70e::/48'
root@OpenWrt:/# uci set network.lan.ipaddr=192.168.1.254
root@OpenWrt:/# uci commit
root@OpenWrt:/# /etc/init.d/network restart

We also want to enable the AP using the WiFi adapter:

root@OpenWrt:~# uci set wireless.radio0.disabled=0
root@OpenWrt:~# uci set wireless.@wifi-iface[0].encryption='psk2+aes'
root@OpenWrt:~# uci set wireless.@wifi-iface[0].key='key123'
root@OpenWrt:~# uci set wireless.@wifi-iface[0].ssid='YourSSID'
root@OpenWrt:~# uci commit
root@OpenWrt:~# wifi

After a while you can establish a connection with SSID YourSSID and password key123.

Second wireless interface

The second adapter is connected to the USB port on the back of the device. It's a cheap Ralink RT5370 based chip, which is popular and comes in a nice form factor (small footprint and removable antenna).

Using lsusb, it's detected like this:

Bus 001 Device 002: ID 148f:5370

In order to enable it, we need an additional kernel module, which is available in OpenWRT:

  • Kernel modules > Wireless Drivers > kmod-rt2800-usb

After building and booting the new image, the interface should be visible when checking ifconfig -a.

Unfortunately we don’t yet have the new interface in OpenWRT's configuration system. Right now the /etc/config/wireless file looks like this:

config wifi-device 'radio0'
  option type 'mac80211'
  option hwmode '11a'
  option path 'pci0000:00/0000:00:02.5/0000:04:00.0'
  option htmode 'VHT80'
  option disabled '0'
  option channel '36'

config wifi-iface
  option device 'radio0'
  option network 'lan'
  option mode 'ap'
  option ssid 'YourSSID'
  option encryption 'psk2+aes'
  option key 'key123'

In order to add the new device, I found that it's easiest to generate a generic configuration with all interfaces detected and add the new one to the file. We can do it this way:

root@OpenWrt:~# wifi detect
config wifi-device  radio0
  option type     mac80211
  option channel  36
  option hwmode   11a
  option path     'pci0000:00/0000:00:02.5/0000:04:00.0'
  option htmode   VHT80
  option disabled 1

config wifi-iface
  option device   radio0
  option network  lan
  option mode     ap
  option ssid     OpenWrt
  option encryption none

config wifi-device  radio1
  option type     mac80211
  option channel  11
  option hwmode   11g
  option path     'pci0000:00/0000:00:10.0/usb1/1-2/1-2:1.0'
  option htmode   HT20
  option disabled 1

config wifi-iface
  option device   radio1
  option network  lan
  option mode     ap
  option ssid     OpenWrt
  option encryption none

There is an additional section with the new adapter (radio1 and a wifi-iface for radio1). We can copy this section to /etc/config/wireless and change the options we need, as shown below. After that, we can run the wifi command to apply the settings and enable all radios.
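For illustration, after copying the radio1 section and adjusting it the same way as radio0 (the SSID, key and channel here are just example values), the part added to /etc/config/wireless could look like this:

config wifi-device 'radio1'
  option type 'mac80211'
  option channel '11'
  option hwmode '11g'
  option path 'pci0000:00/0000:00:10.0/usb1/1-2/1-2:1.0'
  option htmode 'HT20'
  option disabled '0'

config wifi-iface
  option device 'radio1'
  option network 'lan'
  option mode 'ap'
  option ssid 'YourSSID-2G'
  option encryption 'psk2+aes'
  option key 'key123'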

Some bandwidth results

Here are some results I got when running tests using iperf3.

802.11a (the WLE600VX card)

VHT80 mode

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.121, port 60530
[  5] local 192.168.1.254 port 5201 connected to 192.168.1.121 port 60532
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  17.4 MBytes   146 Mbits/sec
[  5]   1.00-2.00   sec  22.6 MBytes   189 Mbits/sec
[  5]   2.00-3.00   sec  24.2 MBytes   203 Mbits/sec
[  5]   3.00-4.00   sec  25.1 MBytes   211 Mbits/sec
[  5]   4.00-5.00   sec  25.7 MBytes   215 Mbits/sec
[  5]   5.00-6.00   sec  25.2 MBytes   212 Mbits/sec
[  5]   6.00-7.00   sec  25.4 MBytes   213 Mbits/sec
[  5]   7.00-8.00   sec  25.4 MBytes   213 Mbits/sec
[  5]   8.00-9.00   sec  27.6 MBytes   232 Mbits/sec
[  5]   9.00-10.00  sec  31.0 MBytes   260 Mbits/sec
[  5]  10.00-10.02  sec   663 KBytes   243 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.02  sec   251 MBytes   210 Mbits/sec    0             sender
[  5]   0.00-10.02  sec   250 MBytes   209 Mbits/sec                  receiver

HT40 mode

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.121, port 60220
[  5] local 192.168.1.254 port 5201 connected to 192.168.1.121 port 60222
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  8.90 MBytes  74.7 Mbits/sec
[  5]   1.00-2.00   sec  10.1 MBytes  85.1 Mbits/sec
[  5]   2.00-3.00   sec  10.9 MBytes  91.3 Mbits/sec
[  5]   3.00-4.00   sec  11.0 MBytes  92.3 Mbits/sec
[  5]   4.00-5.00   sec  11.1 MBytes  93.3 Mbits/sec
[  5]   5.00-6.00   sec  11.2 MBytes  93.7 Mbits/sec
[  5]   6.00-7.00   sec  13.7 MBytes   115 Mbits/sec
[  5]   7.00-8.00   sec  13.8 MBytes   116 Mbits/sec
[  5]   8.00-9.00   sec  13.6 MBytes   114 Mbits/sec
[  5]   9.00-10.00  sec  13.6 MBytes   114 Mbits/sec
[  5]  10.00-10.02  sec   307 KBytes   112 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.02  sec   118 MBytes  99.1 Mbits/sec    0             sender
[  5]   0.00-10.02  sec   118 MBytes  99.0 Mbits/sec                  receiver

802.11n (the Ralink’s RT5370 adapter)

HT20 mode

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.121, port 60032
[  5] local 192.168.1.254 port 5201 connected to 192.168.1.121 port 60034
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  2.72 MBytes  22.8 Mbits/sec
[  5]   1.00-2.00   sec  2.68 MBytes  22.5 Mbits/sec
[  5]   2.00-3.00   sec  2.64 MBytes  22.1 Mbits/sec
[  5]   3.00-4.00   sec  2.78 MBytes  23.3 Mbits/sec
[  5]   4.00-5.00   sec  2.76 MBytes  23.1 Mbits/sec
[  5]   5.00-6.00   sec  2.72 MBytes  22.8 Mbits/sec
[  5]   6.00-7.00   sec  2.71 MBytes  22.7 Mbits/sec
[  5]   7.00-8.00   sec  2.78 MBytes  23.3 Mbits/sec
[  5]   8.00-9.00   sec  2.78 MBytes  23.4 Mbits/sec
[  5]   9.00-10.00  sec  2.73 MBytes  22.9 Mbits/sec
[  5]  10.00-10.02  sec  52.3 KBytes  18.7 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  5]   0.00-10.02  sec  27.4 MBytes  22.9 Mbits/sec    0             sender
[  5]   0.00-10.02  sec  27.3 MBytes  22.9 Mbits/sec                  receiver

Completed setup

Summary

We hope you have enjoyed reading this article. If you have faced problems installing your system of choice on any PC Engines platform, please let us know by using the comments below or by social media channels. We would be glad to help you solve your issues. If you are in need of professional support, we are always open to new challenges, so do not hesitate to drop us an email at contact@3mdeb.com

OpenOCD and development environment for Zephyr on NXP FRDM-K64F


In this post I would like to describe the process of setting up an NXP FRDM-K64F development environment under Linux and starting Zephyr development with it.

Why the NXP FRDM-K64F? I chose this platform mostly because of the ready-to-use guide about 802.15.4 communication with an attached TI CC2520, which was presented here.

A typical wireless stack starts with 802.15.4, then the 6LoWPAN adaptation layer and then IPv6, which carries the application protocols. 6LoWPAN compresses IPv6 so it can fit BLE and 802.15.4 frames, and it is dedicated to embedded systems with a very limited stack. Using IPv6 is very important for the IoT market because of scalability, security and simplified application implementation in comparison to a custom stack; it can also provide well-known protocols like UDP on the transport layer.

I wanted to evaluate the Zephyr networking stack for further use in customer applications. But even the greatest idea for a project requires a development environment and the ability to debug the target platform, and that's why I wrote this tutorial.

NXP FRDM-K64F setup

I started with an initial check of whether my NXP FRDM-K64F board works:

git clone https://gerrit.zephyrproject.org/r/zephyr && cd zephyr && git checkout tags/v1.7.0
cd zephyr
git checkout net
source zephyr-env.sh
cd $ZEPHYR_BASE/samples/hello_world/
make BOARD=frdm_k64f
cp outdir/frdm_k64f/zephyr.bin /media/pietrushnic/MBED/

On /dev/ttyACM0 I get:

**** BOOTING ZEPHYR OS v1.7.99 - BUILD: Mar 18 2017 14:14:37 *****
Hello World! arm   

So it works great out of the box. Unfortunately it is not possible to flash using the typical Zephyr OS command:

[15:16:13] pietrushnic:hello_world git:(k64f-ethernet) $ make BOARD=frdm_k64f flash
make[1]: Entering directory '/home/pietrushnic/storage/wdc/projects/2017/acme/src/zephyr'
make[2]: Entering directory '/home/pietrushnic/storage/wdc/projects/2017/acme/src/zephyr/samples/hello_world/outdir/frdm_k64f'
  Using /home/pietrushnic/storage/wdc/projects/2017/acme/src/zephyr as source for kernel
  GEN     ./Makefile
  CHK     include/generated/version.h
  CHK     misc/generated/configs.c
  CHK     include/generated/generated_dts_board.h
  CHK     include/generated/offsets.h
make[3]: 'isr_tables.o' is up to date.
Flashing frdm_k64f
Flashing Target Device
Open On-Chip Debugger 0.9.0-dirty (2016-08-02-16:04)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'swd'
Info : add flash_bank kinetis k60.flash
adapter speed: 1000 kHz
none separate
cortex_m reset_config sysresetreq
Error: unable to find CMSIS-DAP device

Done flashing

The NXP FRDM-K64F has problems with its debugger firmware, and that's why OpenOCD refuses to cooperate. A recent CMSIS-DAP firmware can be installed by following this guide, although it has speed and debugging limitations about which you can read here.

I followed this post to build a recent version of OpenOCD from source.

A custom OpenOCD can be provided to the Zephyr make system by using the OPENOCD variable, so:

OPENOCD=/usr/local/bin/openocd make BOARD=frdm_k64f flash
OPENOCD=/usr/local/bin/openocd make BOARD=frdm_k64f debug

Both worked fine for me. I realized that I was using the 0.8.2 Zephyr SDK, so this could have been the issue, but it turned out not to be. Neither the OpenOCD provided in the 0.8.2 nor in the 0.9 Zephyr SDK worked for me.

Zephyr SDK upgrade

For those curious how to upgrade the Zephyr SDK, the commands below should help.

To get the location of the SDK:

$ source zephyr-env.sh 
$ echo $ZEPHYR_SDK_INSTALL_DIR
/home/pietrushnic/projects/2016/acme/zephyr_support/src/sdk

To upgrade:

$ wget https://nexus.zephyrproject.org/content/repositories/releases/org/zephyrproject/zephyr-sdk/0.9/zephyr-sdk-0.9-setup.run
$ chmod +x zephyr-sdk-0.9-setup.run 
$ ./zephyr-sdk-0.9-setup.run 
Verifying archive integrity... All good.
Uncompressing SDK for Zephyr  100% 
Enter target directory for SDK (default: /opt/zephyr-sdk/): /home/pietrushnic/projects/2016/acme/zephyr_support/src/sdk
Installing SDK to /home/pietrushnic/projects/2016/acme/zephyr_support/src/sdk
The directory /home/pietrushnic/projects/2016/acme/zephyr_support/src/sdk/sysroots will be removed!
Do you want to continue (y/n)?
y
 [*] Installing x86 tools...
 [*] Installing arm tools...
 [*] Installing arc tools...
 [*] Installing iamcu tools...
 [*] Installing nios2 tools...
 [*] Installing xtensa tools...
 [*] Installing riscv32 tools...
 [*] Installing additional host tools...
Success installing SDK. SDK is ready to be used.

Flashing sample Zephyr application

Because the SDK-provided OpenOCD didn't work for me, I started to use the one I compiled myself.

zperf is a network traffic generator included in Zephyr's sample applications. It supports the K64F, so it was a great place to start with networking.

1
2
cd $ZEPHYR_BASE/samples/net/zperf
OPENOCD=/usr/local/bin/openocd make BOARD=frdm_k64f flash

On the terminal I saw:

1
2
3
4
5
zperf>
[zperf_init] Setting IP address 2001:db8::1
[zperf_init] Setting destination IP address 2001:db8::2
[zperf_init] Setting IP address 192.0.2.1
[zperf_init] Setting destination IP address 192.0.2.2

Testing scenarios are described here. Unfortunately, the basic test hangs, which could be a great starting point for those who want to help with Zephyr development. I tried to debug that problem.

Debugging problems

To debug the zperf application I used the tui mode of gdb:

1
OPENOCD=/usr/local/bin/openocd TUI="--tui" make BOARD=frdm_k64f debug

Please note that before debugging you have to flash the application to your target.

Unfortunately, debugging didn't work for me out of the box. I struggled with various problems while trying different configurations. My main goal was to have a pure OpenOCD+GDB environment. It turned out to be very problematic, with breakpoints triggering exception handlers and GDB initially stopping in weird locations (i.e. the idle thread).

I asked a question on the mailing list about narrowing down this issue. Moving forward with limited debugging functionality would be harder, but not impossible – print is your friend.

The replies from NXP employees on the mailing list were far from satisfying. The main suggestion was to use the KDS IDE.

Digging in OpenOCD

In general there were two issues I faced:

1
2
3
Error: 123323 44739 target.c:2898 target_wait_state(): timed out (>40000) while waiting for target halted
(...)
Error: 123917 44934 armv7m.c:723 armv7m_checksum_memory(): error executing cortex_m crc algorithm (retval=-302)

The timeout and retval values were added for debugging purposes. The first conclusion was that increasing the timeout doesn't help, and that the CRC failure could be caused by problems with issuing halt, so it sounds like both problems were connected. On the other hand, those errors had no visible effect on the flashed application.

DAPLink

Recently DAPLink was introduced, and on the previously mentioned mbed site it replaced the older CMSIS-DAP firmware, but there is no clear information about support in OpenOCD, except that pyOCD should be able to debug a target with this firmware. Unfortunately, the DAPLink firmware provided by NXP for the FRDM-K64F didn't work for me out of the box, which I tried to resolve by asking a question here.

It looks like more people have problems with debugging. The proposed solutions are KDS, or using Segger and P&E firmware instead of CMSIS-DAP.

Kinetis Design Studio

This was suggested as a solution by NXP, and I got to the point where I had to give it a try. It is obvious that each vendor will push its own solution.

I don't like the idea of bloated Eclipse-based IDEs forced on us by the big guys. It looks like all the semiconductor vendors go that way – TI, STM, NXP – and this is terrible for the industry. We are losing flexibility, features start to be hidden in hundreds of menus, and a lot of Linux enthusiasts have to deal with memory-consuming blobs. Not to mention Atmel, which is even worse, going the Visual Studio path and making the whole ecosystem terrible to work with.

Of course there is no way to validate such a big ecosystem, so it has to be buggy.

I know they want to attract junior developers with a "simple" and good-looking interface, but the number of hidden options and the quality of the documentation lead experts to rebel against this choice. Teaching junior developers how a custom vendor Eclipse works does not build the skill set that is truly needed. It makes people learn where options are in the menus, but not how those options really work and what is necessary to enable them. We wrap everything to make it simple, but it turns us into users that don't really know how the system works, and if anything happens differently than usual we will have problems figuring out a way forward.

The portability of projects created in Eclipse-based IDEs is far from useful. Tracking configuration files to hand a working development environment to other team members is also practically impossible. Finally, each developer has a different configuration, and if something doesn't work there is no easy way to figure out what is going on. Support is slow and the configuration is completely non-portable.

The best choice for me would be a well-working command line tool and build system. All those components should be wrapped in portable containers. We were successful in building such a development environment for embedded Linux using either Poky or Buildroot. Why not go the mbed CLI way?

Luckily KDS is available as a DEB package, but it couldn't be smaller than 691MB. I have to allow this big, buggy environment to hook into my system, and I'm really unhappy with that.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
[1:31:54] pietrushnic:Downloads $ sudo dpkg -i  kinetis-design-studio_3.2.0-1_amd64.deb 
[sudo] password for pietrushnic:
Selecting previously unselected package kinetis-design-studio.
(Reading database ... 405039 files and directories currently installed.)
Preparing to unpack kinetis-design-studio_3.2.0-1_amd64.deb ...
Unpacking kinetis-design-studio (3.2.0) ...
Setting up kinetis-design-studio (3.2.0) ...

**********************************************************************
* Warning: This package includes the GCC ARM Embedded toolchain,     *
*          which is built for 32-bit hosts. If you are using a       *
*          64-bit system, you may need to install additional         *
*          packages before building software with these tools.       *
*                                                                    *
*          For more details see:                                     *
*          - KDS_Users_Guide.pdf:"Installing Kinetis Design Studio". *
*          - The Kinetis Design Studio release notes.                *
**********************************************************************
Processing triggers for gnome-menus (3.13.3-9) ...
Processing triggers for desktop-file-utils (0.23-1) ...
Processing triggers for mime-support (3.60) ...

Then this:

It was very clear information. Maybe adding the log path would also be useful? In the end, the problem was a lack of disk space.

KDS OpenOCD

Interestingly, the OpenOCD in KDS behaves a little bit differently than upstream. There were still problems with halt and CRC errors. Unfortunately, flashing is terribly slow (0.900 KiB/s). NXP seems to use an old OpenOCD: Open On-Chip Debugger 0.8.0-dev (2015-01-09-16:23). It doesn't seem that OpenOCD and CMSIS-DAP can provide a reasonable experience for an embedded systems developer.

What works?

After all the above tests, it turned out that the only solution that seems to work without weird errors is the Segger J-Link V2 firmware with the Segger software provided in KDS.

To get a working configuration you need the correct firmware, which can be downloaded from the OpenSDA bootloader and application website. After updating the firmware you can follow the further steps.
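
For reference, updating the OpenSDA firmware usually boils down to entering the bootloader and copying the new binary over USB mass storage (a rough sketch; the firmware file name and mount point are examples, not the exact ones):

# hold the reset button while plugging in the OpenSDA USB cable;
# the board should enumerate as a mass storage device (BOOTLOADER or MAINTENANCE)
cp JLink_OpenSDA_V2.bin /media/$USER/BOOTLOADER/
# unplug and re-plug the board so it boots the new debugger firmware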

Flashing with Segger

To flash, you can use JLinkExe from inside the Zephyr application directory:

1
/opt/Freescale/KDS_v3/segger/JLinkExe -if swd -device MK64FN1M0VLL12 -speed 1000 -CommanderScript ~/tmp/zephyr.jlink

Where ~/tmp/zephyr.jlink contains:

1
2
3
h
loadbin outdir/frdm_k64f/zephyr.bin 0x0
q

Debugging with Segger

Then you can use JLinkGDBServer for debugging purposes:

1
2
3
/opt/Freescale/KDS_v3/segger/JLinkGDBServer -if swd -device MK64FN1M0VLL12 \
-endian little -speed 1000 -port 2331 -swoport 2332 -telnetport 2333 -vd \
-ir -localhostonly 1 -singlerun -strict -timeout 0

The output should look like this:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
SEGGER J-Link GDB Server V5.10n Command Line Version

JLinkARM.dll V5.10n (DLL compiled Feb 19 2016 18:45:10)

-----GDB Server start settings-----
GDBInit file:                  none
GDB Server Listening port:     2331
SWO raw output listening port: 2332
Terminal I/O port:             2333
Accept remote connection:      localhost only
Generate logfile:              off
Verify download:               on
Init regs on start:            on
Silent mode:                   off
Single run mode:               on
Target connection timeout:     0 ms
------J-Link related settings------
J-Link Host interface:         USB
J-Link script:                 none
J-Link settings file:          none
------Target related settings------
Target device:                 MK64FN1M0VLL12
Target interface:              SWD
Target interface speed:        1000kHz
Target endian:                 little

Connecting to J-Link...
J-Link is connected.
Firmware: J-Link OpenSDA 2 compiled Feb 28 2017 19:27:22
Hardware: V1.00
S/N: 621000000
Checking target voltage...
Target voltage: 3.30 V
Listening on TCP/IP port 2331
Connecting to target...Connected to target
Waiting for GDB connection...

To debug the application you can use the debugger provided with the Zephyr SDK that you used to compile the application.

1
2
cgdb -d $ZEPHYR_SDK_INSTALL_DIR/sysroots/x86_64-pokysdk-linux/usr/bin/arm-zephyr-eabi/arm-zephyr-eabi-gdb \
outdir/frdm_k64f/zephyr.elf

Then you have to connect to JLinkGDBServer:

1
2
target remote :2331
load

For the same zperf application, the output should look like this:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
GNU gdb (GDB) 7.11.0.20160511-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "--host=x86_64-pokysdk-linux --target=arm-zephyr-eabi".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from outdir/frdm_k64f/zephyr.elf...done.
(gdb) target remote :2331
Remote debugging using :2331
__k_mem_pool_quad_block_size_define () at /home/pietrushnic/storage/wdc/projects/2017/acme/src/zephyr/include/kernel.h:3146
(gdb) load
Loading section text, size 0xaa7e lma 0x0
Loading section devconfig, size 0xe4 lma 0xaa80
Loading section net_l2, size 0x10 lma 0xab64
Loading section rodata, size 0xc40 lma 0xab74
Loading section datas, size 0xf68 lma 0xb7b4
Loading section initlevel, size 0xe4 lma 0xc71c
Loading section _k_sem_area, size 0x14 lma 0xc800
Loading section net_if, size 0x400 lma 0xc814
Loading section net_if_event, size 0x18 lma 0xcc14
Loading section net_l2_data, size 0x8 lma 0xcc2c
Start address 0x970c, load size 52274
Transfer rate: 25524 KB/sec, 2613 bytes/write.
(gdb) bt
#0  __start () at /home/pietrushnic/storage/wdc/projects/2017/acme/src/zephyr/arch/arm/core/cortex_m/reset.S:64

If you need to reset the remote side, use:

1
monitor reset

It turns out that the load step was also the part missing for CMSIS-DAP. This command gives GDB access to the program symbols when using remote debugging.
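
The connect-and-load commands can also be kept in a small command file, so you don't retype them every session (a sketch; the file name is arbitrary and gdb's -x option loads it):

cat > gdbinit-jlink << EOF
target remote :2331
load
EOF
$ZEPHYR_SDK_INSTALL_DIR/sysroots/x86_64-pokysdk-linux/usr/bin/arm-zephyr-eabi/arm-zephyr-eabi-gdb \
-x gdbinit-jlink outdir/frdm_k64f/zephyr.elf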

Summary

In terms of speed there is no comparison between Segger and CMSIS-DAP. The first gave me a speed of ~50MB/s, the second ~2MB/s. Unfortunately, Segger has to be installed externally, either with KDS or from binaries provided by Segger. Zephyr would also require some modification to support that solution. CMSIS-DAP produces a lot of weird errors, which can confuse the user. There is no information whether those errors affect the firmware in any way, but professional developers don't want to wonder if their tools work correctly, because there are plenty of other tasks to worry about. CMSIS-DAP is very slow – the OpenOCD from KDS is 20x slower than upstream OpenOCD – but the advantage is that it works out of the box with Zephyr, which can be good for people starting out.

If you struggle with development on the FRDM-K64F or have some issues with Zephyr, we would be glad to help. You can easily contact us via social media or through email: contact<at>3mdeb<dot>com. Please share this post if you feel it has valuable information.

Nerves project triage on BeagleBone Black

| Comments

Recently one of my customers brought Nerves to my attention. It aims to simplify the use of Elixir (a functional language leveraging the Erlang VM) in embedded systems. This system has a couple of interesting features that are worth research and a blog post.

The first is booting directly into the application, which runs in BEAM (the Erlang VM). The Nerves project replaces the systemd process with a programming language virtual machine running the application code. The concept is very interesting and I wonder if someone has tried to use it with other VMs, e.g. the JVM.

Second, Nerves seems to utilize a dual-image update procedure. In my opinion, the development of any modern embedded system should start with the update system. Anything that you can add to your system update arsenal will be useful.

Third, Nerves uses Buildroot as the build system, which I'm familiar with. Using a popular build system means simplified support for a huge set of platforms (at the point of writing this article, Buildroot has 142 config files).
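
If you want to verify that number yourself, counting defconfigs in a Buildroot checkout is a one-liner (a quick sketch, assuming you are in the Buildroot source tree):

ls configs/*_defconfig | wc -l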

Let’s start with documentation

If you don’t want to go through all installation steps and you use Debian testing, you can run:

1
2
sudo apt-get install erlang elixir ssh-askpass squashfs-tools \
git g++ libssl-dev libncurses5-dev bc m4 make unzip cmake

Erlang

Checking the exact Erlang version, for non-Erlang developers, can be done like this:

1
2
3
4
$ erl -eval '{ok, Version} = file:read_file(filename:join([code:root_dir(), \
"releases", erlang:system_info(otp_release), "OTP_VERSION"])), \
io:fwrite(Version), halt().' -noshell
19.2.1

Elixir

Checking Elixir version:

1
2
3
4
$ elixir --version
Erlang/OTP 19 [erts-8.2.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Elixir 1.3.3

Unfortunately, the Nerves Project requires at least Elixir 1.4.0, which can be solved by:

1
2
3
4
5
6
7
sudo apt-get remove elixir
wget https://packages.erlang-solutions.com/erlang/elixir/FLAVOUR_2_download/elixir_1.4.1-1\~debian\~jessie_all.deb
sudo dpkg -i elixir_1.4.1-1~debian~jessie_all.deb
$ elixir --version
Erlang/OTP 19 [erts-8.2.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Elixir 1.4.1

fwup

fwup has to be installed from a deb package:

1
2
wget https://github.com/fhunleth/fwup/releases/download/v0.13.0/fwup_0.13.0_amd64.deb
sudo dpkg -i fwup_0.13.0_amd64.deb

I don't understand why the Nerves Project used fwup when software like swupdate from Denx is available. I don't see a difference in feature set and would say that swupdate is more flexible and covers more use cases. It looks like the Nerves Project is the main user of fwup.

Maybe it would be worth considering a comparison of fwup and swupdate?

nerves_bootstrap

1
2
3
mix local.hex
mix local.rebar
mix archive.install https://github.com/nerves-project/archives/raw/master/nerves_bootstrap.ez

hello_nerves for BeagleBone Black

1
2
3
4
5
mix nerves.new hello_nerves
export MIX_TARGET=bbb
cd hello_nerves
mix deps.get
mix firmware

Flashing to SD card

1
mix firmware.burn -d /dev/sdX
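
Before running the command above it is worth double-checking which block device really is the SD card, because firmware.burn will overwrite whatever you point it at (a minimal sketch; the lsblk columns are just one way to look):

lsblk -o NAME,SIZE,MODEL
# pick the device matching the SD card capacity and pass it as /dev/sdX above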

booting

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
U-Boot SPL 2016.03 (Mar 07 2017 - 18:34:42)
Trying to boot from MMC
reading args
spl_load_image_fat_os: error reading image args, err - -1
reading u-boot.img
reading u-boot.img


U-Boot 2016.03 (Mar 07 2017 - 18:34:42 +0000)

       Watchdog enabled
I2C:   ready
DRAM:  512 MiB
Reset Source: Power-on reset has occurred.
MMC:   OMAP SD/MMC: 0, OMAP SD/MMC: 1
Using default environment

Net:   <ethaddr> not set. Validating first E-fuse MAC
cpsw, usb_ether
Press SPACE to abort autoboot in -2 seconds
switch to partitions #0, OK
mmc0 is current device
Scanning mmc 0:1...
Found U-Boot script /boot.scr
reading /boot.scr
2308 bytes read in 5 ms (450.2 KiB/s)
## Executing script at 80000000
Running Nerves U-Boot script
reading uEnv.txt
** Unable to read file uEnv.txt **
reading zImage
4350536 bytes read in 243 ms (17.1 MiB/s)
reading am335x-boneblack.dtb
55541 bytes read in 9 ms (5.9 MiB/s)
Kernel image @ 0x82000000 [ 0x000000 - 0x426248 ]
## Flattened Device Tree blob at 88000000
   Booting using the fdt blob at 0x88000000
   Loading Device Tree to 8ffef000, end 8ffff8f4 ... OK

Starting kernel ...

[    0.000508] clocksource_probe: no matching clocksources found
[    0.377452] wkup_m3_ipc 44e11324.wkup_m3_ipc: could not get rproc handle
[    0.587493] omap_voltage_late_init: Voltage driver support not added
[    0.691687] bone_capemgr bone_capemgr: slot #0: No cape found
[    0.735661] bone_capemgr bone_capemgr: slot #1: No cape found
[    0.779680] bone_capemgr bone_capemgr: slot #2: No cape found
[    0.823659] bone_capemgr bone_capemgr: slot #3: No cape found
Erlang/OTP 19 [erts-8.2] [source] [async-threads:10] [kernel-poll:false]

Interactive Elixir (1.4.1) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)>

It looks like things work out of the box – Elixir started and the D2 LED blinks continuously.

Nerves booting

It looks like the developers configured the Linux kernel bootargs used by U-Boot to run erlinit as the init process. erlinit is a relatively simple application that can parse a configuration file and do some basic system initialization. Depending on your needs this may be considered a quite weird approach. Of course, adding systemd is not the best approach for all solutions. For sure, having a custom init binary removes the need for a complex init system and makes updates much smaller. Also, this solution targets dedicated embedded systems whose whole purpose is running an Elixir application.
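
To illustrate, a kernel command line pointing at a custom init could look roughly like this from the U-Boot prompt (a hypothetical example; the console, root device and erlinit path are made up for illustration):

setenv bootargs "console=ttyO0,115200 root=/dev/mmcblk0p2 rootfstype=squashfs init=/sbin/erlinit"
boot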

Using a custom init binary also limits the attack vector to a small amount of code. In a typical build from Buildroot or Yocto, the final image contains quite a lot of processes running by default. Nerves limits that to the one process that is needed for a very specific use case and that can be fully handled by the Elixir application. Of course, some hardware setup is still needed. In that case only the Linux kernel or the Elixir application can be attacked.

As one of my associates mentioned, this is a very similar approach to BusyBox, although here we replace the shell with the Elixir interpreter; the idea of having one application that is the entry point to the system is the same.

From a performance perspective this is also a good solution, since there are no daemons working in the background consuming resources. The lack of additional processes means that all server-type work has to be written in Elixir.

It would be very interesting to see how this approach can work for other VMs and whether there are real-world use cases for that.

erlinit & erlexec

erlinit is an MIT-licensed /sbin/init replacement. In general it:

  • sets up pseudo-filesystems like /dev, /proc and /sys
  • sets up the serial console
  • registers signal handlers (SIGPWR, SIGUSR1, SIGTERM, SIGUSR2)
  • forks into a cleanup process and a new process that starts erlexec

erlexec is a mix of C++ and Erlang that aims to control OS processes from an Erlang application.

The source code can be found on GitHub: erlinit and erlexec.

Note about building natively

Recently I have become a huge fan of containers and the way this technology can be utilized by embedded software developers. Installing all dependencies in your host environment is painful and can cause problems if you do not pay attention. Containers give you the ability to separate tools for each project. That way you create one Dockerfile for the whole development environment and then share it with your peers. I believe the Nerves Project should share containers for building system images instead of maintaining documentation explaining how to set up development for a lot of various environments.
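
As an illustration, the whole dependency installation could live in a throwaway container instead of on the host (a rough sketch, assuming Docker is installed; the image tag and paths are arbitrary):

docker run --rm -it -v "$PWD":/work -w /work debian:stretch bash
# inside the container: install the packages listed earlier, then run mix as usual
# apt-get update && apt-get install -y erlang elixir ssh-askpass squashfs-tools \
#   git g++ libssl-dev libncurses5-dev bc m4 make unzip cmake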

For example, the steps for Debian required more jumping between pages and googling than it was worth, since the correct set of packages solves the issue.

Summary

Do you plan to use Nerves in your next embedded systems project? Maybe you struggle with adapting a similar approach for a different VM? Feel free to share your ideas and issues in the comments. If you find this content valuable, please share it – that helps us provide more content on our blog.

nRF51822 programming with OpenOCD under Linux

| Comments

Some time ago we bought the BLE400 from Waveshare as probably one of the cheapest options to enter the nRF51822 market. As our readers know, we prefer to use the Linux environment for embedded systems development. Because of that, we're following the guide for using the Waveshare nRF51822 Eval Kit: icarus-sensors. Kudos for the great post that helped us enter nRF51822 and mbed OS land under Linux.

The BLE400 is pretty cheap because it hasn't got an integrated debugger/programmer. The key is to realize that you can use the BLE400 eval kit together with an STM32 development board, e.g. a Discovery or any Nucleo (only for its integrated ST-LINK debugger/programmer), which are also cheap. Of course, other boards or a standalone ST-Link could be used.

Hardware connections

On the Nucleo board, both jumpers should be removed from the CN2 connector. Thanks to this, the ST-LINK can be used in stand-alone mode.

Connection should be made this way:

1
2
3
4
5
6
7
8
9
Nucleo CN2 connector             BLE400 SWD connector
-----------------+               +------------------
VCC     (pin 1)  |-x             | .
SWD CLK (pin 2)  |---------------| (pin 9) SWD CLK
GND     (pin 3)  |---------------| (pin 4) GND
SWD IO  (pin 4)  |---------------| (pin 7) SWD IO
RST     (pin 5)  |-x             | .
SWO     (pin 6)  |-x             | .
-----------------+               +------------------

images

Both boards should be connected to the host's USB ports. The USB port on the BLE400 is used for power supply and the debug UART connection (a cp210x converter should be detected and a ttyUSBx device exposed).

OpenOCD basic test

No stlink tools are needed. Only OpenOCD.

The OpenOCD version we're using:

1
2
3
4
5
$ openocd -v
Open On-Chip Debugger 0.9.0 (2016-04-27-23:18)
Licensed under GNU GPL v2
For bug reports, read
  http://openocd.org/doc/doxygen/bugs.html

Enable user access to Debugger

First we need to check that our debugger is detected. There should be a line like this:

1
2
3
4
$ lsusb
...
Bus 003 Device 015: ID 0483:3748 STMicroelectronics ST-LINK/V2
...

Note the IDs: 0483:3748. Create a rule in /etc/udev/rules.d (as root):

1
2
3
$ cat > /etc/udev/rules.d/95-usb-stlink-v2.rules << EOF
SUBSYSTEM=="usb", ATTR{idVendor}=="0483", ATTR{idProduct}=="3748", GROUP="users", MODE="0666"
EOF

Reload udev rules (as root):

1
2
$ udevadm control --reload
$ udevadm trigger

Reconnect the ST-Link. After that, the debugger should be accessible by a regular user.

Test the OpenOCD connection

Run this command to connect the debugger to the target system (example output attached). The location of the cfg files depends on your setup; if you compiled OpenOCD from source, those files should be in /usr/local/share/openocd/scripts:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
$ openocd -f interface/stlink-v2.cfg  -f target/nrf51.cfg
Open On-Chip Debugger 0.9.0 (2016-04-27-23:18)
Licensed under GNU GPL v2
For bug reports, read
  http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v14 API v2 SWIM v0 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 2.935549
Info : nrf51.cpu: hardware has 4 breakpoints, 2 watchpoints

If you see an error like this:

1
2
3
4
5
6
7
8
9
10
11
12
censed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Error: open failed
in procedure 'init'
in procedure 'ocd_bouncer'

This means you may have an ST-Link v2.1, so your command should look like this:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
$ openocd -f interface/stlink-v2-1.cfg  -f target/nrf51.cfg
Open On-Chip Debugger 0.10.0-dev-00395-g674141e8a7a6 (2016-10-20-15:01)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 1000 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : Unable to match requested speed 1000 kHz, using 950 kHz
Info : clock speed 950 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 0.000000
Error: target voltage may be too low for reliable debugging
Info : nrf51.cpu: hardware has 4 breakpoints, 2 watchpoints

After that, OpenOCD waits for incoming telnet connections on port 4444. Here is a sample session to check that everything is OK:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
$ telnet 127.0.0.1 4444
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x61000000 pc: 0x00011434 msp: 0x200022a8
> reg
===== arm v7m registers
(0) r0 (/32): 0x20000093
(1) r1 (/32): 0x0000003B
(2) r2 (/32): 0xE000E200
(3) r3 (/32): 0x0000003B
(4) r4 (/32): 0x0001CEB8
(5) r5 (/32): 0x00000001
(6) r6 (/32): 0x0001CEB8
(7) r7 (/32): 0xFFFFFFFF
(8) r8 (/32): 0xFFFFFFFF
(9) r9 (/32): 0xFFFFFFFF
(10) r10 (/32): 0xFFFFFFFF
(11) r11 (/32): 0xFFFFFFFF
(12) r12 (/32): 0xFFFFFFFF
(13) sp (/32): 0x200022A8
(14) lr (/32): 0x0000114F
(15) pc (/32): 0x00011434
(16) xPSR (/32): 0x61000000
(17) msp (/32): 0x200022A8
(18) psp (/32): 0xFFFFFFFC
(19) primask (/1): 0x00
(20) basepri (/8): 0x00
(21) faultmask (/1): 0x00
(22) control (/2): 0x00
===== Cortex-M DWT registers
(23) dwt_ctrl (/32)
(24) dwt_cyccnt (/32)
(25) dwt_0_comp (/32)
(26) dwt_0_mask (/4)
(27) dwt_0_function (/32)
(28) dwt_1_comp (/32)
(29) dwt_1_mask (/4)
(30) dwt_1_function (/32)
> reset
> exit
Connection closed by foreign host.

Testing the example program

First we need the proper SDK for our device. The ICs that we tested were revision 2 and 3 (QFAA and QFAC codes, see the print on the nRF chip). You can check the revision table and compatibility matrix to determine the SDK version. We used SDK v12.1.0 for the rev3 chip.

After downloading and uncompressing the SDK, we can find the blinky example in examples/peripheral/blinky/hex/blinky_pca10028.hex. Now we can try to program it:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
$ telnet 127.0.0.1 4444
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target state: halted
target halted due to debug-request, current mode: Handler HardFault
xPSR: 0xc1000003 pc: 0xfffffffe msp: 0xffffffd8
> program /home/mek/work/nrf51/sdk/examples/peripheral/blinky/hex/blinky_pca10028.hex
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0xfffffffe msp: 0xfffffffc
** Programming Started **
auto erase enabled
using fast async flash loader. This is currently supported
only with ST-Link and CMSIS-DAP. If you have issues, add
"set WORKAREASIZE 0" before sourcing nrf51.cfg to disable it
target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x2000001e msp: 0xfffffffc
wrote 2048 bytes from file /path/to/nrf51/sdk/examples/peripheral/blinky/hex/blinky_pca10028.hex in 0.114289s (17.499 KiB/s)
** Programming Finished **
> reset
> exit
Connection closed by foreign host.

During that procedure you may face this problem:

1
2
3
4
5
6
7
8
9
10
11
12
> program /path/to/work/nrf51/sdk/examples/peripheral/blinky/hex/blinky_pca10028.hex
nrf51.cpu: target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0xc1000000 pc: 0x00012b98 msp: 0x20001c48
** Programming Started **
auto erase enabled
Cannot erase protected sector at 0x0
failed erasing sectors 0 to 1
embedded:startup.tcl:454: Error: ** Programming Failed **
in procedure 'program'
in procedure 'program_error' called at file "embedded:startup.tcl", line 510
at file "embedded:startup.tcl", line 454

To solve that, please issue nrf51 mass_erase and retry the program command. This has to be done only once.
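
For reference, the recovery sequence in the telnet session looks roughly like this (a sketch based on the commands used above):

> halt
> nrf51 mass_erase
> program /path/to/blinky_pca10028.hex
> reset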

After that, LED3 and LED4 should start blinking on the target board.

Sample script for flashing

I’ve created this script to simplify the flashing operation:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
#!/bin/bash

if [ $# -lt 1 ]; then
    echo "Usage: $0 BINARY_HEX"
    exit 0
fi

if [ ! -f $1 ]; then
    echo "$1: file not found"
    exit 1
fi

openocd -f interface/stlink-v2.cfg -f target/nrf51.cfg \
-c "init" \
-c "halt" \
-c "nrf51 mass_erase" \
-c "program $1" \
-c "reset" \
-c "exit"

Note: openocd does not accept filenames containing spaces in the path.
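
Example usage, assuming the script above was saved as flash_nrf51.sh (the name is arbitrary):

chmod +x flash_nrf51.sh
./flash_nrf51.sh /path/to/sdk/examples/peripheral/blinky/hex/blinky_pca10028.hex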

Summary

As you can see, it's possible to work with the nRF51822 under Linux using only OpenOCD. The whole workflow can be scripted to match your needs. With this knowledge, we can start to deploy mbed OS and Zephyr, which both have great support for Linux through a command line interface.

Zephyr initial triage on Nucleo-64 STM32F411RE

| Comments

As I mentioned in a previous post, Zephyr RTOS is an interesting initiative started by Intel, NXP and a couple of other strong organizations. With such a well-founded background, the future of this RTOS should look bright, and I think it will quickly become an important player in the IoT arena.

Because of that, it is worth digging a little bit deeper into this RTOS and seeing what problems we face when trying to develop for a well-known development board. I chose the STM32 F411RE mainly because it started to gather dust and some customers asked about it recently. As always, I will present the perspective of a Linux enthusiast trying to use Debian Linux and the command line for development, as I did for mbed OS.

Let’s start

To not repeat the documentation here, please first follow the Getting Started Guide.

After setting up the environment and running the Hello World example, we are good to go with trying the Nucleo-64 STM32F411RE. This is a pretty new thing, so you will need the recent arm branch:

1
2
git fetch origin arm
git checkout arm

Then make help should show f411re:

1
2
$ make help|grep f411
  make BOARD=nucleo_f411re            - Build for nucleo_f411re

Let's try to compile that (please note that I'm still in the hello_world project):

1
2
3
4
5
6
$ make BOARD=nucleo_f411re
(...)
  AR      libzephyr.a
  LINK    zephyr.lnk
  HEX     zephyr.hex
  BIN     zephyr.bin

OpenOCD and flashing

To flash binaries, OpenOCD was needed:

1
2
3
4
5
6
git clone git://git.code.sf.net/p/openocd/code openocd-code
cd openocd-code
./bootstrap
./configure
make -j$(nproc)
sudo make -j$(nproc) install

It would be great to have the mbed way of flashing the Nucleo-64 board.
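
By the mbed way I mean simply copying the binary to the mass storage device exposed by the on-board ST-LINK (a sketch; the NODE_F411RE volume label is an assumption based on how Nucleo boards usually enumerate):

cp outdir/nucleo_f411re/zephyr.bin /media/$USER/NODE_F411RE/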

Using OpenOCD I got a libusb access error:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
$ make BOARD=nucleo_f411re flash
make[1]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project'
make[2]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re'
  Using /home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project as source for kernel
  GEN     ./Makefile
  CHK     include/generated/version.h
  CHK     misc/generated/configs.c
  CHK     include/generated/offsets.h
  CHK     misc/generated/sysgen/prj.mdef
Flashing nucleo_f411re
Flashing Target Device
Open On-Chip Debugger 0.9.0-dirty (2016-08-02-16:04)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
none separate
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : clock speed 1800 kHz
Error: libusb_open() failed with LIBUSB_ERROR_ACCESS
Error: open failed
in procedure 'init'
in procedure 'ocd_bouncer'

Done flashing

I added the additional udev rules from the OpenOCD project:

1
sudo cp contrib/99-openocd.rules /etc/udev/rules.d

And added my username to the plugdev group:

1
sudo usermod -aG plugdev $USER

The result was:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
[0:39:48] pietrushnic:hello_world git:(arm) $ make BOARD=nucleo_f411re flash
make[1]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project'
make[2]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re'
  Using /home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project as source for kernel
  GEN     ./Makefile
  CHK     include/generated/version.h
  CHK     misc/generated/configs.c
  CHK     include/generated/offsets.h
  CHK     misc/generated/sysgen/prj.mdef
Flashing nucleo_f411re
Flashing Target Device
Open On-Chip Debugger 0.9.0-dirty (2016-08-02-16:04)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
none separate
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : clock speed 1800 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.234714
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State
--  ------------------ ---------- ------ ------------------ ------------
 0* stm32f4x.cpu       hla_target little stm32f4x.cpu       running
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x0800203c msp: 0x20000750
auto erase enabled
Info : device id = 0x10006431
Info : flash size = 512kbytes
target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x20000042 msp: 0x20000750
wrote 16384 bytes from file /home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re/zephyr.bin in 0.727563s (21.991 KiB/s)
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x0800203c msp: 0x20000750
verified 12876 bytes in 0.118510s (106.103 KiB/s)
shutdown command invoked
Done flashing
make[2]: Leaving directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re'
make[1]: Leaving directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project'

Hello world verification

Unfortunately, I was not able to verify that the hello_world example works at the first attempt. I posted my experience on the mailing list and after a couple of days I received information that there was a bug in clock initialisation and a fix was pushed to Gerrit.

So I tried one more time:

1
2
3
4
5
6
7
git checkout master
git fetch origin
git branch -D arm
git checkout arm
source zephyr-env.sh
cd samples/hello_world
make BOARD=nucleo_f411re

Unfortunately, the arm branch seems to be rebased or changed in a non-linear manner, so just pulling it causes a lot of conflicts.

After building it correctly, I flashed the binary to the board:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
make[1]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project'
make[2]: Entering directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re'
  Using /home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project as source for kernel
  GEN     ./Makefile
  CHK     include/generated/version.h
  CHK     misc/generated/configs.c
  CHK     include/generated/offsets.h
Flashing nucleo_f411re
Flashing Target Device
Open On-Chip Debugger 0.9.0-dirty (2016-08-02-16:04)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 2000 kHz
adapter_nsrst_delay: 100
none separate
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : clock speed 1800 kHz
Info : STLINK v2 JTAG v27 API v2 SWIM v15 VID 0x0483 PID 0x374B
Info : using stlink api v2
Info : Target voltage: 3.232105
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
    TargetName         Type       Endian TapName            State
--  ------------------ ---------- ------ ------------------ ------------
 0* stm32f4x.cpu       hla_target little stm32f4x.cpu       running
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x080020a0 msp: 0x20000750
auto erase enabled
Info : device id = 0x10006431
Info : flash size = 512kbytes
target state: halted
target halted due to breakpoint, current mode: Thread
xPSR: 0x61000000 pc: 0x20000042 msp: 0x20000750
wrote 16384 bytes from file /home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re/zephyr.bin in 0.663081s (24.130 KiB/s)
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08001c84 msp: 0x20000750
verified 11900 bytes in 0.109678s (105.956 KiB/s)
shutdown command invoked
Done flashing
make[2]: Leaving directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project/samples/hello_world/outdir/nucleo_f411re'
make[1]: Leaving directory '/home/pietrushnic/storage/wdc/projects/2016/acme/zephyr_support/src/zephyr-project'

The log looks the same as previously, but this time on /dev/ttyACM0 I found some output using minicom:

1
minicom -b 115200 -o -D /dev/ttyACM0

The result was:

1
2
***** BOOTING ZEPHYR OS v1.6.99 - BUILD: Jan 14 2017 22:03:14 *****
Hello World! arm

The same method worked with the basic/blinky example.

Summary

This was a short introduction, which took a couple of weeks to publish. I will continue the Zephyr research, and as an initial project I chose to add an I2C driver for the F411RE development board.

Overall, Zephyr looks very promising, with a lot of documentation. The community could be more responsive, because at this point I think it is pushed more by corporation-related developers than by deeply engaged enthusiasts.

An important thing to analyze for Zephyr is cross-platform verification at the application level. By that I mean exercising the proposed abstraction model to see if, for example, I can run the same application in emulation and on the target platform. Having that ability would be a huge plus.

Also, it would be interesting to see some general approach to application validation. This could shift verification from target hardware to an emulated environment, which would be very interesting for future embedded developers.

Failure of ECC508A crypto coprocessor initial triage with SAM G55 Xplained Pro Evaluation Kit

| Comments

Some time ago (around August 2016), embedded community media were hit with hype around a simplified flow for AWS IoT provisioning (1, 2, 3). I'm personally very interested in all the categories related to this news:

  • IoT – this is 3mdeb's core business, and despite this term being largely abused these days, we just love to build connected embedded devices. Building this kind of device is inherently related to firmware deployment, provisioning and update problems.

  • AWS – truly, it is hard to find a similar level of quality and feature-richness, and because I was lucky to invest my time and work with the grandfather of AWS IoT (namely 2lemetry ThingFabric), I naturally try to follow this trend and make sure 3mdeb customers use the best-in-class product in the IoT cloud segment. To provide that service we try to stay on track with all news related to AWS IoT.

  • Security – there will not be much work for Embedded System Consultants if IoT is rejected because of security issues. I'm sure I don't have to convince anyone about the importance of security. The key is to see the typical flow that we face in technology (especially in the security area):

1
2
3
4
5
mathematics -> 
proof of concept software -> 
mature software -> 
hardware acceleration -> 
hardware implementation

AWS IoT cryptography is not trivial and doing it right is even more complex. Using crypto chips like the ECC508A should simplify the whole workflow.

The initial idea for this blog post was to triage the ECC508A with some Linux or mbed OS enabled platform. The Atmel SAM G55 seems to have support in mbed OS here, but diving into CryptoAuthentication with a development stack that I'm not sure works fine is not the best choice. That's why I had to try things on Windows 10 first, and then, after understanding things better, move to something more convenient.

I mostly relied on ATECC508A Node Authentication Example Using Asymmetric PKI Application Note.

What we need to start is:

Atmel Studio

Welcome to the world of M$ Windows. I wonder who got the idea of excluding Mac and Linux users from the Atmel SAM developer community, but this decision was really wrong. Of course, there are options like ASF, but this requires much more setup work and is probably not feasible for an initial triage post. Unfortunately, the number of examples in ASF is limited, and I can't find anything related to crypto or I2C.

Atmel Studio is obviously inspired by, or even built on, the Visual Studio engine.

CryptoAuthentication Node Basic Example Solution

To make things simple, CryptoAuthentication Node Basic Example Solution.zip, which can be downloaded here, is 15MB and contains almost 2k files. Download and unpack the archive.

After starting Atmel Studio choose Open Project..., navigate to the CryptoAuthentication example and choose node-auth-basic. You should get a funny pop-up that tells you to watch out for malicious Atmel Studio projects:

images

Then you get a window with the info Please select your project, so choose node-auth-basic, then try Build -> Rebuild Solution; of course, this doesn't work out of the box.

One of the problems that I faced was described here – it is just an incorrect OPTIMIZE_HIGH macro. After fixing that, both examples compile fine.

I realized that Atmel Studio uses an older ASF (3.28.1) than what is available (3.32.0), but upgrading ASF leads to upgrading the whole project and takes time. After the upgrade you get a report on whether everything went fine for your 2k files.

The problem with node-auth-basic is that it is not prepared for the SAM G55. All the code in the AT88CKECC-AWS-XSTK documents targets the SAM D21. So you have to change the target device, and this is possible only after the update. To change the device, enter the node-auth-basic project properties and go to the Device tab, then use Change Device, find the SAMG family and select SAMG55J19. Please note that SAM G55 devices are not visible unless you change Show devices to All Parts. The result should look like this:

images

I can only imagine how outdated this post will be with the next version of Atmel Studio.

Now we get more compilation errors:

1
2
Error       sam/sleepmgr.h: No such file or directory   node-auth-basic \
C:\(...)\cryptoauth-node-auth-basic\node-auth-basic\src\ASF\common\services\sleepmgr\sleepmgr.h 53

With the above problem I started to think I was gaining really useless expertise. The issue was pretty clear – we compile for SAMG, not for SAMD, and we need a different header.

ASF installation madness

Moreover, when I tried to reinstall ASF I had to register on the Atmel page, which complained about LastPass and identified my location as the Russian Federation (despite me being in Poland). Of course, Atmel Studio opened Edge to log me into their website. This whole IDE sucks and does a lot of damage to Atmel – how can I recommend them after all that hassle? Then, after going through the password/login, Windows 10 detected that something was wrong with Atmel Studio and decided that it had to be restarted. When I finally started the installation I got this:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
2016-11-26 23:46:10 - Microsoft VSIX Installer
2016-11-26 23:46:10 - -------------------------------------------
2016-11-26 23:46:10 - Initializing Install...
2016-11-26 23:46:10 - Extension Details...
2016-11-26 23:46:10 -   Identifier      : 4CE20911-D794-4550-8B94-6C66A93228B8
2016-11-26 23:46:10 -   Name            : Atmel Software Framework
2016-11-26 23:46:10 -   Author          : Atmel
2016-11-26 23:46:10 -   Version         : 3.33.0.640
2016-11-26 23:46:10 -   Description     : Provides software drivers and libraries to build applications for Atmel devices. The minimum supported ASF version is 3.24.2.
2016-11-26 23:46:10 -   Locale          : en-US
2016-11-26 23:46:10 -   MoreInfoURL     : http://asf.atmel.com/docs/latest/
2016-11-26 23:46:10 -   InstalledByMSI  : False
2016-11-26 23:46:10 -   SupportedFrameworkVersionRange : [4.0,4.5]
2016-11-26 23:46:10 - 
2016-11-26 23:46:10 -   Supported Products : 
2016-11-26 23:46:10 -           AtmelStudio
2016-11-26 23:46:10 -                   Version : [7.0]
2016-11-26 23:46:10 - 
2016-11-26 23:46:10 -   References      : 
2016-11-26 23:46:10 - 
2016-11-26 23:46:14 - The extension with ID '4CE20911-D794-4550-8B94-6C66A93228B8' is not installed to AtmelStudio.
2016-11-26 23:46:28 - The following target products have been selected...
2016-11-26 23:46:28 -   AtmelStudio
2016-11-26 23:46:28 - 
2016-11-26 23:46:28 - Beginning to install extension to AtmelStudio...
2016-11-26 23:46:29 - Install Error : System.IO.IOException: There is not enough space on the disk.

   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.FileStream.Write(Byte[] array, Int32 offset, Int32 count)
   at Microsoft.VisualStudio.ExtensionManager.ExtensionManagerService.WriteFilesToInstallDirectory(InstallableExtensionImpl extension, String installPath, ZipPackage vsixPackage, IDictionary`2 extensionsInstalledSoFar, AsyncOperation asyncOp, UInt64 totalBytesToWrite, UInt64& totalBytesWritten)
   at Microsoft.VisualStudio.ExtensionManager.ExtensionManagerService.InstallInternal(InstallableExtensionImpl extension, Boolean perMachine, Boolean isNestedExtension, IDictionary`2 extensionsInstalledSoFar, List`1 extensionsUninstalledSoFar, IInstalledExtensionList modifiedInstalledExtensionsList, AsyncOperation asyncOp, UInt64 totalBytesToWrite, UInt64& totalBytesWritten)
   at Microsoft.VisualStudio.ExtensionManager.ExtensionManagerService.BeginInstall(IInstallableExtension installableExtension, Boolean perMachine, AsyncOperation asyncOp)
   at Microsoft.VisualStudio.ExtensionManager.ExtensionManagerService.InstallWorker(IInstallableExtension extension, Boolean perMachine, AsyncOperation asyncOp)

This should be enough to throw it away. Of course, I have ~500MB free on disk, but this is not enough. I assume that the MS way of providing information to the user in Windows 10 is throwing exceptions, or this was Atmel Studio's method of handling the lack of free space.

I gave up

A couple more things that I found:

  • There is no easy way to convert the ECC508A examples to make them work with the SAMG55, as those examples are mostly created for the SAMD21. Clearly Atmel makes a lot of noise about a 250USD kit for which you don't have examples.
  • The CryptoAuthentication library doesn't have a HAL for the SAMG55
  • Atmel's engagement in supporting the community is poor, as can be seen here 1,2
  • The full datasheet is available only under NDA

Summary

I wasted a lot of time to figure out that evaluating a well-advertised product is terribly difficult. I'm sure that my lack of knowledge of the Atmel ecosystem probably added to my problems. I also didn't bother to contact the community, so it is not entirely fair for me to judge.

The key idea behind this triage was to check the ECC508A in the environment suggested by the manufacturer. It happens that the manufacturer didn't prepare the infrastructure and documentation needed to evaluate the product in the advertised way. The initial triage was needed for an implementation in a more complex system with Embedded Linux on board. Luckily, during this whole process I found the cryptoauth-openssl-engine GitHub repository, which I will evaluate in future posts.

If you have struggled with similar problems and got past some of those mentioned above, or you successfully triaged the ECC508A on the AT88CKECC-AWS-XSTK, please let me know. Other comments are, as always, welcome.

Starting with Nucleo-F411RE and mbed OS for command line enthusiasts

| Comments

When I first read about mbed OS I was really sceptical; especially the idea of having a web browser as my IDE and a compiler in the cloud seemed very scary to me. ARM engineers have proven to deliver high quality products, but this was not enough for me. Then I heard very good words about the mbed OS IDE from Jack Ganssle; this was still not enough. Finally, customers started to ask about this RTOS and I had to look deeper.

There are other well known OSes, but most of them have issues:

  • FreeRTOS – probably the most popular, GPL license with exceptions and restrictions, doesn't provide drivers – this gap is mostly filled by the MCU vendor in its SDK. This can lead to problems, e.g. the lack of a well-supported DTLS library or a specific communication protocol. It often happens that MCU vendors don't maintain a community, so the code base grows internally and is not revealed.

  • RIoT – well known and popular, LGPL 2.1 license, which is typically problematic when your work affects the system core. Contains a lot of features, but the number of supported platforms is limited. Targeted at academics and hobbyists.

  • Zephyr – a great initiative backed by Linaro, the Linux Foundation, Qualcomm/NXP/Freescale and Intel. Apache 2.0 license, which IMO is much better for embedded than (L)GPL. Unluckily, this is brand new and support is very limited. For sure porting a new platform to Zephyr can be great fun and the principles are very good, but support is very limited and it will take time to make it mature enough to seriously consider in a commercial product.

  • mbed OS – this one looks really great. Apache 2.0. Tons of drivers, a clean environment, and huge, good-looking and well-written documentation. A wide range of hardware is already supported, and it comes from the designers of the most popular core in the world. The community is big, but it is still not as vibrant as e.g. RIoT's.

Below I want to present the Linux user experience from my first contact with mbed OS on the Nucleo-F411RE platform.

images

First contact

I have to say that at first glance the whole system is very well documented with a great look and feel. The main site requires 2 clicks to be in the correct place for an Embedded Systems Engineer. In general, we have 3 main paths when choosing developer tools: Online IDE, mbed CLI and 3rd party. The last covers a blasting variety of IDEs, including Makefile and Eclipse CDT based GCC support.

Things that are annoying during first contact with the web page:

  • the way to contribute documentation is not clear
  • there is no description of how to render the documentation locally
  • you can't upload an avatar on the forum – no information on what format and resolution are supported

But those are less interesting things. Going back to the development environment, for me 2 options were interesting: mbed CLI and plain Makefile.

mbed CLI

I already have a virtualenv set up for Python 2.7:

1
pip install mbed-cli

The first thing to like about mbed-cli is that it was implemented in Python. Of course, this is very subjective, since I'm familiar with Python, but it is good to know that I can hack anything that doesn't work for me. It is Open Source.

I also like the idea of mimicking git subcommands. More information about mbed CLI can be found in the documentation.

It is also great that mbed CLI tries to manage the whole program's dependencies in a structured way, so there is no more hassle with external library versioning and trying to keep your sanity when you have to clone your development workspace. Of course, this has to be checked on the battlefield, since a documentation promise may not be enough.

So the first thing that hit me when trying to move forward was this message:

1
2
3
4
5
6
7
8
9
10
11
$ mbed new mbed-os-program                                                  
[mbed] Creating new program "mbed-os-program" (git)
[mbed] Adding library "mbed-os" from "https://github.com/ARMmbed/mbed-os" at branch latest
[mbed] Updating reference "mbed-os" -> "https://github.com/ARMmbed/mbed-os/#d5de476f74dd4de27012eb74ede078f6330dfc3f"
[mbed] Auto-installing missing Python modules...
[mbed] WARNING: Unable to auto-install required Python modules.
---
[mbed] WARNING: -----------------------------------------------------------------
[mbed] WARNING: The mbed OS tools in this program require the following Python modules: prettytable, intelhex, junit_xml, pyyaml, mbed_ls, mbed_host_tests, mbed_greentea, beautifulsoup4, fuzzywuzzy
[mbed] WARNING: You can install all missing modules by running "pip install -r requirements.txt" in "/home/pietrushnic/tmp/mbed-os-program/mbed-os"
[mbed] WARNING: On Posix systems (Linux, Mac, etc) you might have to switch to superuser account or use "sudo"

This appeared to be some problem with my distro:

1
2
3
4
5
6
(...)
    ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
     #include "Python.h"
                        ^
    compilation terminated.
(...)

This indicates a lack of the python2.7-dev package, so:

1
2
sudo aptitude update && sudo aptitude dist-upgrade
sudo aptitude install python2.7-dev
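
If mbed new still complains about missing Python modules after that, they can be installed manually the way the warning message suggests:

cd mbed-os-program/mbed-os
pip install -r requirements.txt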

After verifying that you can create a program, let's try to get the well-known hello world for embedded:

1
mbed import https://github.com/ARMmbed/mbed-os-example-blinky

Toolchain

To compile the example we need a toolchain. The easiest way would be to get the distro package:

1
sudo apt-get install gcc-arm-none-eabi

Now you should set the toolchain configuration; if you don't, an error like this may pop up:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
$ mbed compile -t GCC_ARM -m NUCLEO_F411RE
Building project mbed-os-example-blinky (NUCLEO_F411RE, GCC_ARM)
Scan: .
Scan: FEATURE_BLE
Scan: FEATURE_UVISOR
Scan: FEATURE_LWIP
Scan: FEATURE_COMMON_PAL
Scan: FEATURE_THREAD_BORDER_ROUTER
Scan: FEATURE_LOWPAN_ROUTER
Scan: FEATURE_LOWPAN_BORDER_ROUTER
Scan: FEATURE_NANOSTACK
Scan: FEATURE_THREAD_END_DEVICE
Scan: FEATURE_NANOSTACK_FULL
Scan: FEATURE_THREAD_ROUTER
Scan: FEATURE_LOWPAN_HOST
Scan: FEATURE_STORAGE
Scan: mbed
Scan: env
Compile [  0.4%]: AnalogIn.cpp
[ERROR] In file included from ./mbed-os/drivers/AnalogIn.h:19:0,
                 from ./mbed-os/drivers/AnalogIn.cpp:17:
./mbed-os/platform/platform.h:22:19: fatal error: cstddef: No such file or directory
compilation terminated.

[mbed] ERROR: "python" returned error code 1.
[mbed] ERROR: Command "python -u /home/pietrushnic/tmp/mbed-os-example-blinky/mbed-os/tools/make.py -t GCC_ARM -m NUCLEO_F411RE --source . --build ./BUILD/NUCLEO_F411RE/GCC_ARM" in "/home/pietrushnic/tmp/mbed-os-example-blinky"
---

Toolchain configuration is needed:

1
mbed config --global GCC_ARM_PATH "/usr/bin"

But then we get another problem:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
$ mbed compile -t GCC_ARM -m NUCLEO_F411RE
Building project mbed-os-example-blinky (NUCLEO_F411RE, GCC_ARM)
Scan: .
Scan: FEATURE_BLE
Scan: FEATURE_UVISOR
Scan: FEATURE_LWIP
Scan: FEATURE_COMMON_PAL
Scan: FEATURE_THREAD_BORDER_ROUTER
Scan: FEATURE_LOWPAN_ROUTER
Scan: FEATURE_LOWPAN_BORDER_ROUTER
Scan: FEATURE_NANOSTACK
Scan: FEATURE_THREAD_END_DEVICE
Scan: FEATURE_NANOSTACK_FULL
Scan: FEATURE_THREAD_ROUTER
Scan: FEATURE_LOWPAN_HOST
Scan: FEATURE_STORAGE
Scan: mbed
Scan: env
Compile [  1.9%]: main.cpp
[ERROR] In file included from ./mbed-os/rtos/Thread.h:27:0,
                 from ./mbed-os/rtos/rtos.h:28,
                 from ./mbed-os/mbed.h:22,
                 from ./main.cpp:1:
./mbed-os/platform/Callback.h:21:15: fatal error: new: No such file or directory
compilation terminated.

[mbed] ERROR: "python" returned error code 1.
[mbed] ERROR: Command "python -u /home/pietrushnic/tmp/mbed-os-example-blinky/mbed-os/tools/make.py -t GCC_ARM -m NUCLEO_F411RE --source . --build ./BUILD/NUCLEO_F411RE/GCC_ARM" in "/home/pietrushnic/tmp/mbed-os-example-blinky"
---

I’m not sure of the exact reason, but I suspect a missing g++-arm-none-eabi, which is not provided by Debian at this point. So it’s time to switch to the toolchain downloaded directly from the GNU ARM Embedded Toolchain page.

wget https://launchpadlibrarian.net/287101520/gcc-arm-none-eabi-5_4-2016q3-20160926-linux.tar.bz2
tar xvf gcc-arm-none-eabi-5_4-2016q3-20160926-linux.tar.bz2

Then change your global mbed configuration:

mbed config --global GCC_ARM_PATH "/path/to/gcc-arm-none-eabi-5_4-2016q3/bin"

Now compilation works without problems:

$ mbed compile -t GCC_ARM -m NUCLEO_F411RE
Building project mbed-os-example-blinky (NUCLEO_F411RE, GCC_ARM)
Scan: .
Scan: FEATURE_BLE
Scan: FEATURE_UVISOR
Scan: FEATURE_LWIP
Scan: FEATURE_COMMON_PAL
Scan: FEATURE_THREAD_BORDER_ROUTER
Scan: FEATURE_LOWPAN_ROUTER
Scan: FEATURE_LOWPAN_BORDER_ROUTER
Scan: FEATURE_NANOSTACK
Scan: FEATURE_THREAD_END_DEVICE
Scan: FEATURE_NANOSTACK_FULL
Scan: FEATURE_THREAD_ROUTER
Scan: FEATURE_LOWPAN_HOST
Scan: FEATURE_STORAGE
Scan: mbed
Scan: env
Compile [  1.9%]: BusIn.cpp
Compile [  2.3%]: AnalogIn.cpp
Compile [  2.7%]: BusInOut.cpp
(...)
Compile [ 99.2%]: serial_api.c
[Warning] serial_api.c@333,35: unused variable 'tmpval' [-Wunused-variable]
[Warning] serial_api.c@821,27: unused variable 'tmpval' [-Wunused-variable]
[Warning] serial_api.c@823,27: unused variable 'tmpval' [-Wunused-variable]
[Warning] serial_api.c@825,27: unused variable 'tmpval' [-Wunused-variable]
[Warning] serial_api.c@827,27: unused variable 'tmpval' [-Wunused-variable]
[Warning] serial_api.c@954,23: unused variable 'tmpval' [-Wunused-variable]
Compile [ 99.6%]: stm_spi_api.c
Compile [100.0%]: test_env.cpp
Link: mbed-os-example-blinky
Elf2Bin: mbed-os-example-blinky
+--------------------+-------+-------+------+
| Module             | .text | .data | .bss |
+--------------------+-------+-------+------+
| Fill               |   130 |     4 |    5 |
| Misc               | 21471 |  2492 |  100 |
| drivers            |   118 |     4 |  100 |
| hal                |   536 |     0 |    8 |
| platform           |  1162 |     4 |  269 |
| rtos               |    38 |     4 |    4 |
| rtos/rtx           |  5903 |    20 | 6870 |
| targets/TARGET_STM |  5950 |     4 |  724 |
| Subtotals          | 35308 |  2532 | 8080 |
+--------------------+-------+-------+------+
Allocated Heap: unknown
Allocated Stack: unknown
Total Static RAM memory (data + bss): 10612 bytes
Total RAM memory (data + bss + heap + stack): 10612 bytes
Total Flash memory (text + data + misc): 37840 bytes

Object file test_env.o is not unique! It could be made from: ./mbed-os/features/frameworks/greentea-client/source/test_env.cpp /home/pietrushnic/tmp/mbed-os-example-blinky/mbed-os/features/unsupported/tests/mbed/env/test_env.cpp
Image: ./BUILD/NUCLEO_F411RE/GCC_ARM/mbed-os-example-blinky.bin

So we have a binary; now we would like to deploy it to the target.

Testing on real hardware

To test the built binary on the Nucleo-F411RE, the only thing to do is to connect the board through mini USB and copy the build result to the mounted directory. In my case it was something like this:

cp BUILD/NUCLEO_F411RE/GCC_ARM/mbed-os-example-blinky.bin /media/pietrushnic/NODE_F411RE/

This is a pretty weird interface for programming the board, but it is simplified to the maximum.

Serial console example

Modify your main.cpp with something like:

#include "mbed.h"

DigitalOut led1(LED1);
Serial pc(USBTX, USBRX);

// main() runs in its own thread in the OS
// (note the calls to Thread::wait below for delays)
int main() {
    int i = 0;

    while (true) {
        pc.printf("%d\r\n", i);
        i++;
        led1 = !led1;
        Thread::wait(1000);
    }
}
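One detail worth knowing: the mbed Serial object defaults to 9600 baud, which is why minicom is started with -b 9600 below. If you want a faster console, a minimal sketch (assuming you also adjust the -b argument of minicom to match) could set the rate explicitly:

#include "mbed.h"

DigitalOut led1(LED1);
Serial pc(USBTX, USBRX);

int main() {
    pc.baud(115200);            // override the 9600 default
    int i = 0;

    while (true) {
        pc.printf("%d\r\n", i++);
        led1 = !led1;
        Thread::wait(1000);     // one second between prints
    }
}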

Recompile and copy the result as described above. To find the device to connect to, check your dmesg:

$ dmesg|grep tty
[    0.000000] console [tty0] enabled
[    0.935792] 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    3.219884] systemd[1]: Created slice system-getty.slice.
[    4.058666] usb 3-1: FTDI USB Serial Device converter now attached to ttyUSB0
[10721.756835] ftdi_sio ttyUSB0: FTDI USB Serial Device converter now disconnected from ttyUSB0
[10727.552536] cdc_acm 3-1:1.2: ttyACM0: USB ACM device

This means that your Nucleo registered as the /dev/ttyACM0 device; to connect, you can use minicom:

minicom -b 9600 -o -D /dev/ttyACM0 

Summary

I hope this tutorial adds something or helps resolve an issue that you may be struggling with. As you can see, mbed is not perfect, but it looks like it may serve as a great replacement for the previous environments, i.e. custom IDEs from various vendors. What would certainly be useful to verify is OpenOCD with ST-Link, to see whether the whole development stack is ready to use under Linux. In the next post I will try to start working with the Atmel SAM G55 and mbed OS.