What is Dasharo Tools Suite?

DTS is a Linux distribution built upon Yocto Project technologies with
Dasharo/meta-dts as a core layer, and
Dasharo/dts-scripts as a core code and logic repository.
Apart from this, DTS uses other layers and a separate
repository for metadata. The DTS documentation can be found at
docs.dasharo.com.
Dasharo Tools Suite and Dasharo Universe

Dasharo Tools Suite (i.e., DTS) was initially designed for two purposes:

- Support end-users while deploying Dasharo firmware (the `DTS Prod` on the image above).
- Support Dasharo firmware developers during firmware development (the `Dasharo Tools Suite (DTS) dev` on the image above).
Hence, DTS is an integral part of Dasharo Universe, and to achieve these goals, it provides, among others, the following functionalities:
- Dasharo Zero Touch Initial Deployment, that is, a list of automated
workflows:
- Initial deployment for Dasharo firmware.
- Update for Dasharo firmware.
- Transition for Dasharo firmware.
- Dasharo Hardware Compatibility List Report (i.e., Dasharo HCL or DTS HCL; you can find more about it here).
- Fusing workflow for some Dasharo firmware (for more information about fusing, check Dasharo documentation).
- Firmware recovery workflow.
Furthermore, future DTS releases will add even more functionalities:
- Some platforms will get a Dasharo firmware update (check milestones for more information). Future releases will also include support for server platforms. To catch up with the Dasharo and Zarhus teams' latest progress in that field, check the following posts:
- Porting Gigabyte MZ33-AR1 server board with AMD Turin CPU to coreboot.
- AMD PSP blob analysis on Gigabyte MZ33-AR1 Turin system.
- Mapping and initializing USB and SATA ports on Gigabyte MZ33-AR1.
- Gigabyte MZ33-AR1 Porting Update: PCIe Init, BMC KVM Validation, and HCL Improvements.
- Gigabyte MZ33-AR1 Porting Update: ACPI and bugfixes.
- Full migration of platform metadata from the DTS code to Dasharo/dts-configs, which will reduce costs per DTS release and increase the issue resolution rate.
- The DTS test results will soon be viewable on the OSFV Dashboard results repository (including the DTS E2E test results).
- As the DTS codebase clean-up continues, some of its code will be shared with other Dasharo and Zarhus projects. Right now, the first one in the queue is the DTS UI (inspired by the ChromeOS Device Firmware Utility UI), which will be shared with the Zarhus Provisioning Box.
- Integration of fwupd.
- Further integration with Zarhus Provisioning Box for Root of Trust and Chain of Trust provisioning and verification.
- Check out Zarhus Team talk “Stop dreading NIS2: Unlock your firmware digital sovereignty with Zarhus” at ZDM#3 for more information about Zarhus Provisioning Box.
- Or Qubes OS Summit talk “Qubes Air: Opinionated Value Proposition for Security-Conscious Technical Professionals” for more information about isolation and management of security artifacts.
- Attestation of Dasharo-supported platforms via procedures and attestation
infrastructure.
- Check out the opening of ZDM#3 for more information about attestation.
And the list of features, along with the codebase, keeps growing. Let me explain how we hold all this together.
The challenges
There are two main facts about DTS that cause most of the challenges:
- It is software that operates on hardware (that is, flashing firmware, reading firmware state, reading hardware state, etc.).
- It has a monolithic architecture.
The first fact results from the DTS goals described before: it was developed for Dasharo firmware, which is in turn developed for specific hardware. While hardware can be a problem, for example by adding setup overhead during testing, the challenges it brings can be at least partially solved via mocking mechanisms and emulation. In DTS, this was solved by designing an automated testing framework that uses the automation features of Robot Framework, the DTS hardware and firmware state mocking infrastructure, and the emulation powers of QEMU.
The second fact is caused by a popular development flow that starts by developing a monolithic script and then trying to scale it. The general consequences of monolithic software design are well known. But the main point that causes problems during DTS development is the poorly controlled software execution flow. Let me explain this with a diagram.

As you can see, the DTS code can be divided into groups responsible for
different functionalities: remote access, signature and hash verification,
etc. The problem is that `Non-firmware/hardware-specific code` is mixed with
`Firmware/hardware-specific code`, causing several problems:
- Non-linear execution flow.
- `Firmware/hardware-specific code` is mixed with `Non-firmware/hardware-specific code`; therefore, it is hard to reuse generic code.
- The amount of `Non-firmware/hardware-specific code` grows together with the number of platforms supported by DTS, which is caused by the mixed logic.
All this led to a scalability headache, because the entire codebase had a dependency on the number of supported platforms:

But as the software develops, the key issues of a monolithic architecture arise: “How to scale the software?” and “How to make sure there are no regressions?”. This is especially important for DTS because, as a key component of the Dasharo Universe responsible for deploying Dasharo firmware, it must be stable and secure. Hence, the goal right now is to switch from a monolith to a microservices-like architecture to:
- Decrease the amount of surplus code and improve code reusability by separating `Firmware/hardware-specific code` and `Non-firmware/hardware-specific code`, which should generally improve the scalability of DTS and reduce delays in feature implementation and bug fixing.
- Linearise the execution flow, fixing the stability problems.
- Separate distinctive pieces of the codebase to make adding unit testing possible, further increasing stability and scalability.
- Reuse some pieces of the codebase in other Dasharo and Zarhus projects (e.g., the DTS UI shared with Zarhus Provisioning Box), so other projects will invest in the evolution of the DTS source code.

Ideally, the DTS codebase should look like this:

So we can design and validate the `Non-firmware/hardware-specific code` and
`Firmware/hardware-specific code` separately:

How to achieve this? The key is to develop a proper testing methodology before making any changes. Why? Currently, DTS has a huge list of workflows per platform, and any change in code without proper automated regression testing is a problem. A proper testing methodology will both decrease the costs of development by saving the time needed for testing and help keep the codebase stable during global changes.
List of DTS workflows per platform for the curious ones.
Launched according to DTS OSFV documentation.
Testing
To continue developing DTS without constantly facing regressions, we have developed a testing methodology called End-to-End (i.e., E2E) testing. The goals of this methodology are:
- Cover already existing functionalities in DTS without the need to adjust them to the testing methodology.
- Let developers introduce internal DTS architecture changes without testing methodology restrictions.
The entire methodology relies on two core concepts: black box testing (i.e., specification-based testing) and use case testing. In short, black box testing is based on three things: the format of the system's input parameters, the format of the system's output results, and the relations between sets of input parameters and output parameters. For DTS, there are three input parameters:
- User input (e.g., which workflow the user chooses, what data the user provides, etc.).
- Hardware state at the beginning of a DTS workflow (e.g., used CPU or RAM, etc.).
- Firmware state at the beginning of a DTS workflow (e.g., firmware provider, firmware version, `SMMSTORE` presence, etc.).
And there are two output parameters:
- Output for user (e.g., warnings, errors, questions, etc.).
- Firmware state modifications (e.g., what parts of firmware were written, whether `SMMSTORE` was migrated or not, etc.).
By manipulating the input parameters and monitoring the output parameters, the DTS can be tested without bothering about what is going on inside, as long as the format of these parameters stays the same, that is:

And the second concept behind the DTS E2E testing methodology: use case testing.
Use case testing means that the entire set of DTS execution flows triggered by
input parameters is divided into two distinct groups:
- Success paths - these are the execution flows triggered by specific combinations of the input parameters, producing specific sets of output parameters, and leaving the firmware and hardware states expected and correct after the DTS workflow finishes.
- Error paths - these are the execution flows triggered by specific combinations of the input parameters, producing specific sets of output parameters, and resulting either in a DTS workflow failure or in unexpected and/or incorrect firmware and hardware states after the DTS workflow finishes.
The definitions could be visualized by the following diagram, where 🙂 outlines the success paths and 💀 outlines the error paths:

The overall goals are to maintain the success paths and make sure the error paths are properly handled (e.g., terminated and communicated to the user). But enough theory, let’s get to the tech and implementation details.
Testing infrastructure
Currently, we have three DTS testing architectures:

Where:
- The `Testing on real hardware` is covered by OSFV/dasharo-compatibility or done manually.
- The `Testing on QEMU` and `Testing in CI/CD workflows` are covered by OSFV/dts. These testing architectures are available thanks to the DTS E2E methodology, as its development triggered the development of several testing technologies.
- OSFV stands for Open Source Firmware Validation: it is a testing framework developed as a part of the Dasharo Universe and based on Robot Framework. For more information, check the Dasharo/open-source-firmware-validation repository.
Two different testing workflows apply to these architectures. For `Testing on real hardware`, the following general workflow applies:

For `Testing on QEMU` and `Testing in CI/CD workflows`, the following
workflow applies:

Every testing flow and architecture has its own advantages and disadvantages:

- The `Testing on real hardware` advantage is that it is the closest reflection of a real user experience; hence, it is the most trusted architecture.
- The `Testing on real hardware` disadvantage is that it depends on hardware. Not only does a developer or a tester need to prepare the hardware before testing (a step that, even if done once, costs around 90% of the time spent on actual testing), but if the hardware causes false positives or false negatives, the entire testing, including the `Prepare hardware` step, must be redone. And I am not even mentioning the delays caused by bricked hardware, which sometimes force software developers to wait for the hardware team's help.
- The `Testing on QEMU` and `Testing in CI/CD workflows` advantages are:
    - They can be done entirely automatically (e.g., in GitHub Actions) whenever a developer wants to test something.
    - They do not depend on hardware, hence there is no `Prepare hardware` step overhead and no hardware-caused false negatives (e.g., a bad hardware connection that causes a test to fail). Therefore, they optimize the developer's inner loop by reducing the time needed for testing.
- The `Testing on QEMU` and `Testing in CI/CD workflows` disadvantage is that the test results obtained from testing on mocked hardware must be proven trustworthy.
By connecting the testing infrastructure and OSFV with the black box concept of the DTS E2E testing methodology and the inputs/outputs described at the beginning of the Testing chapter, I can provide some examples:
- The OSFV controls the `User input` and `Output for user` parameters by
  communicating with the DTS UI. Example OSFV keyword for reading and writing
  to the DTS UI:

  ```robot
  Wait For Either Checkpoint And Write
      [Documentation]    Keywords waits for any of the ${checkpoints} key and if
      ...    it matches then writes value of this element to the console
      [Arguments]    ${bare}=${FALSE}    &{checkpoints}
      Log    Waiting for either checkpoint: ${checkpoints}
      ${out}=    Wait For Either Checkpoint    @{checkpoints}
      # Find which checkpoint we found
      Sleep    1s
      FOR    ${checkpoint}    ${write}    IN    &{checkpoints}
          IF    """${checkpoint}""" in """${out}"""
              IF    ${bare}
                  Write Bare Into Terminal    ${checkpoints}[${checkpoint}]
              ELSE
                  Write Into Terminal    ${checkpoints}[${checkpoint}]
              END
              Log    Waited for """${checkpoint}""" and written "${checkpoints}[${checkpoint}]"
              RETURN    ${out}
          END
      END
      # We shouldn't ever get here
      Fail    Couldn't find checkpoint in returned output
  ```

  That uses checkpoints from dts-lib.robot. An example:

  ```robot
  # 2) Check out all warnings. Decline Heads if asked
  ${checkpoint}=    Wait For Either Checkpoint And Write
  ...    ${DTS_SPECIFICATION_WARN}=Y
  ...    ${DTS_HEADS_SWITCH_QUESTION}=N
  IF    """${DTS_HEADS_SWITCH_QUESTION}""" in """${checkpoint}"""
      Wait For Checkpoint And Write    ${DTS_SPECIFICATION_WARN}    Y
  END
  ```
- The `Hardware state` and `Firmware state` inputs are set:
    - For `Testing on real hardware` - by the `Prepare hardware` step (which actually can be done by the OSFV).
    - For `Testing on QEMU` or `Testing in CI/CD workflows` - by the `Mock hardware` step (which is done by the OSFV).
- The `Firmware state` output is verified:
    - For `Testing on real hardware` - by the tester.
    - For `Testing on QEMU` or `Testing in CI/CD workflows` - by the OSFV.
But how does the mocking work, and how does the OSFV verify the `Firmware state`
output?
Mocking on QEMU
The DTS mocking system is quite an important piece of the DTS E2E methodology:
it provides the mocked, configurable `Hardware state` and `Firmware state`
inputs on QEMU and plays a key role in proving the trustworthiness of
the test results obtained from testing on QEMU.
As was explained before, DTS has the `Firmware/hardware-specific code` that
consists of calls to firmware/hardware-specific tools that do two things:

- Get data from hardware or firmware (that is, the way the DTS code acquires the data from the `Hardware state` and `Firmware state` inputs). An example tool call:

  ```bash
  flashrom -p "$PROGRAMMER_BIOS" ${FLASH_CHIP_SELECT} \
      -r "$tmp_rom" >>"$FLASH_INFO_FILE" 2>>"$ERR_LOG_FILE"
  ```

  That dumps the currently-flashed firmware from the platform for further parsing and analysis.
- Modify the firmware state (that is, the way the DTS code manipulates the `Firmware state` output). An example tool call:

  ```bash
  flashrom -p "$PROGRAMMER_EC" ${FLASH_CHIP_SELECT} \
      -w "$EC_UPDATE_FILE" >>$FLASHROM_LOG_FILE 2>>$ERR_LOG_FILE
  ```

  That writes the EC (i.e., Embedded Controller) firmware.
The idea is to separate the calls in the DTS code from the actual tool calls by a wrapper that is a part of the DTS HAL and defines most of its rules.
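To illustrate the idea, here is a minimal, hypothetical sketch of such a wrapper (the dispatch logic and the `FLASHROM` variable definition below are assumptions for illustration; the real `tool_wrapper()` lives in Dasharo/dts-scripts and is more elaborate, e.g., it also records every call into the DTS profile):

```bash
tool_wrapper() {
  local _tool="$1"
  shift

  # A call may name a dedicated mock right after the tool, e.g.:
  #   $FLASHROM flashrom_read_firm_mock -p internal -r /tmp/dump.rom
  local _mock="common_mock"
  case "$1" in
    *_mock)
      _mock="$1"
      shift
      ;;
  esac

  if [ -n "$DTS_TESTING" ]; then
    # Testing: execute a mock instead of the real tool.
    if [ "$_mock" = "common_mock" ]; then
      "$_mock" "$_tool"    # common_mock only needs the tool name
    else
      "$_mock" "$@"        # dedicated mocks parse the real arguments
    fi
  else
    # Production: execute the real tool with the original arguments.
    "$_tool" "$@"
  fi
}

# DTS code then calls the tools through variables pointing at the wrapper:
FLASHROM="tool_wrapper flashrom"
```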
The following flow diagram could explain the general execution flow of the wrapper:

And with the `tool_wrapper`, the aforementioned flashrom calls change to:

- For the get data call:

  ```bash
  $FLASHROM flashrom_read_firm_mock -p "$PROGRAMMER_BIOS" \
      ${FLASH_CHIP_SELECT} -r "$tmp_rom" >>"$FLASH_INFO_FILE" 2>>"$ERR_LOG_FILE"
  ```

- For the modify the firmware state call:

  ```bash
  $FLASHROM -p "$PROGRAMMER_EC" ${FLASH_CHIP_SELECT} \
      -w "$EC_UPDATE_FILE" >>$FLASHROM_LOG_FILE 2>>$ERR_LOG_FILE
  ```
Because of the flashrom tool wrapping, the following mocks will be executed for these calls if `DTS_TESTING` has some value assigned:
- For the get data call:

  ```bash
  flashrom_read_firm_mock() {
    # Emulating dumping of the firmware the platform currently uses. Currently it is
    # writing into text file, that should be changed to binary instead (TODO).
    # For -r check flashrom man page:
    local _file_to_write_into

    flashrom_verify_internal_chip "$@" || return 1

    _file_to_write_into=$(parse_for_arg_return_next "-r" "$@")
    echo "Test flashrom read." >"$_file_to_write_into"

    return 0
  }
  ```

- For the modify the firmware state call:

  ```bash
  common_mock() {
    # This mocking function is being called for all cases where mocking is needed,
    # but the result of mocking function execution is not important.
    local _tool="$1"

    echo "${FUNCNAME[0]}: using ${_tool}..."

    return 0
  }
  ```
One could ask: “But every platform gets different data via the `Hardware state`
and `Firmware state` inputs; do you write a separate mocking function for every
combination of input data?”. The answer is no: every mocking function can be
configured via, though not named so officially, the DTS HAL mocking API, that
is, a set of Bash variables (whose names begin with `TEST_`) that are set either
by a tester or by a testing automation tool. Hence, the `Hardware state` and
`Firmware state` DTS inputs for mocked hardware are controlled via the DTS HAL
mocking API.
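Here is a hypothetical sketch of such a configurable mocking function (the `TEST_FLASH_CHIP_SIZE` variable and the function body are assumed for illustration, not actual dts-scripts code):

```bash
flashrom_flash_chip_size_mock() {
  # Report whatever flash chip size the tester configured via the
  # DTS HAL mocking API, falling back to a default:
  echo "${TEST_FLASH_CHIP_SIZE:-16M}"
  return 0
}
```

A tester (or a testing automation tool) sets `TEST_FLASH_CHIP_SIZE` before booting DTS, and every platform gets the flash chip size it needs without a dedicated mock per platform.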
Let me introduce a quick definition before continuing. DTS mocking
configuration (or just mocking configuration later in this blog post) is
a set of the DTS HAL mocking API variables that properly mocks hardware X
for DTS workflow Y. An example is the complete mocking configuration for the
platform MSI PRO Z690-A DDR4 (the msi-pro-z690-a-wifi-ddr4 part) for the DTS
Initial Deployment workflow without DPP access (marked by the DCR string,
i.e., Dasharo Community Release; the Initial Deployment - DCR part).
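The full configuration is maintained on the OSFV side, so only a hypothetical fragment is sketched here to show the shape of such a configuration (all `TEST_*` names below are assumptions for illustration):

```bash
# Hypothetical fragment of a mocking configuration for the
# msi-pro-z690-a-wifi-ddr4 Initial Deployment - DCR test case:
export DTS_TESTING="true"            # route all wrapped tool calls to mocks
export TEST_SYSTEM_VENDOR="Micro-Star International Co., Ltd."
export TEST_PRODUCT_NAME="PRO Z690-A WIFI DDR4(MS-7D25)"
export TEST_FLASH_CHIP_SIZE="16M"    # consumed by the mock sketched earlier
```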
For information on how it works on the OSFV side, refer to its documentation.
Here is a workflow for constructing such a mocking configuration for platform X and DTS workflow Y:

But in such a workflow, the mocking configuration that controls the `Hardware state` and `Firmware state` inputs for running DTS on QEMU with mocked hardware is verified not against the `Hardware state` and `Firmware state` inputs collected from running DTS on real hardware, but against the `User input` and `Output for user` from running DTS on real hardware. The `User input` and `Output for user` cannot be directly mapped onto the mocked `Hardware state` and `Firmware state` inputs (the former are literally the input and output of the DTS UI, while the latter is information read from firmware or hardware). This results in the aforementioned need to prove that the test results obtained from testing on mocked hardware are trustworthy.
DTS profiles and QEMU testing results trustworthiness
Now that we know how to mock the `Hardware state` and `Firmware state` inputs,
let's clarify how to prove the correctness of the mocking, and hence the
trustworthiness of the DTS E2E test results on mocked hardware. Ideally, we want
to measure the `Hardware state` and `Firmware state` directly, so we can treat
the measurements as an ultimate source of trust when preparing the mocking
configuration:

And it is actually possible! Do you remember the word profile that has already
been mentioned several times in this blog post? The profile, or, more
precisely, the DTS profile, is a tool that was developed for measuring the
`Hardware state` and `Firmware state` inputs to prove the results'
trustworthiness. As was mentioned before, the DTS profile is collected by
`tool_wrapper()`. Here is an example of a profile that is used to prove the
trustworthiness of the mocking configuration for the aforementioned DTS E2E
test case `msi-pro-z690-a-wifi-ddr4 Initial Deployment - DCR`.
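Only a short excerpt is reproduced here, assembled from the profile commands analysed below; a real profile is much longer:

```text
fsread_tool test -e /sys/class/power_supply/AC/online 1
flashrom -p internal -r /tmp/dasharo_dump.rom --ifd -i fd -i bios -i me --fmap -i FMAP -i BOOTSPLASH 1
flashrom -p internal -N --ifd -i bios -w /tmp/biosupdate_resigned.rom 0
```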
The above profile presents the commands that are used for:
- Acquiring information via the `Hardware state` input, e.g.:

  ```text
  fsread_tool test -e /sys/class/power_supply/AC/online 1
  ```

  That checks the power adapter presence.
- Acquiring information via the `Firmware state` input, e.g.:

  ```text
  flashrom -p internal -r /tmp/dasharo_dump.rom --ifd -i fd -i bios -i me --fmap -i FMAP -i BOOTSPLASH 1
  ```

  That dumps some firmware regions for further analysis.
- Making firmware state modifications via the `Firmware state` output, e.g.:

  ```text
  flashrom -p internal -N --ifd -i bios -w /tmp/biosupdate_resigned.rom 0
  ```

  That flashes Dasharo firmware.
There are other commands that, at first glance, cannot be assigned to any of the mentioned inputs and outputs, because they do not contact the firmware or hardware states directly (that is, via drivers or any middleware talking to real hardware or firmware), but rather operate on data previously dumped into files, or use files for other operations. For example, one command reads information about the layout of the Dasharo firmware image (which is stored in a file), and another extracts the bootsplash logo from the dumped firmware. We consider such commands a part of the DTS inputs (the `Firmware state` input in these two cases). Commands that add the hardware serial number and system UUID to the to-be-flashed Dasharo firmware image we consider a part of the DTS `Firmware state`
output.
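As a hypothetical illustration of such a file-based command (the actual invocations recorded in a profile may differ), reading the layout of a firmware image stored in a file could look like this:

```bash
# Lists the flash map regions of the image file; no hardware is touched.
cbfstool /tmp/biosupdate.rom layout -w
```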
Okay, now it is clear how the DTS inputs and outputs are being measured, but how to prove the trustworthiness of the DTS E2E test results on mocked hardware? Well, several conditions should be met before stating that the results are trustworthy:
- There should be a trusted, up-to-date `DTS profile` collected from real hardware.
- The DTS E2E test workflow should provide a `DTS profile` collected during testing on QEMU with mocked hardware.
- The profiles from the first and the second condition should match.
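Conceptually, the third condition boils down to a textual comparison of the two profiles; a minimal sketch, with file names assumed:

```bash
# Compare the trusted profile from real hardware with the one recorded
# during the QEMU run; any difference marks the QEMU results as untrusted.
if diff -u hardware.profile qemu.profile; then
    echo "Profiles are identical: the QEMU test results can be trusted."
else
    echo "Profiles are not identical!" >&2
    exit 1
fi
```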
The profiles can be collected from the real hardware either manually or using automatic or semi-automatic OSFV helpers. The workflow with the OSFV helpers is as follows:

Where DTG is a part of the OSFV test case ID (an example of a complete ID can
be found here). For collecting the profiles manually, the workflow is as
follows:

Where:
- `S1`: specific for every hardware (check the Dasharo Supported hardware page for more information).
- `S2.2`: specific for every hardware and firmware (check the Dasharo Supported hardware page for more information).
- `S2.3`: according to the DTS documentation or according to OSFV scripts (for custom-built DTS).
- `S2.4`: enable the SSH server in DTS and enter the shell, then remove logs, profiles, and credentials:

  ```bash
  rm -rf /etc/cloud-pass /root/.mc /*.tar.gz /root/*.tar.gz /tmp/logs \
      /tmp/dts-temp-files
  ```

  Then create a fake `reboot` command:

  ```bash
  mkdir -p /tmp/bin
  echo '#!/bin/bash' >/tmp/bin/reboot
  chmod +x /tmp/bin/reboot
  ```

  For NovaCustom laptops, faking `dasharo_ectool` is also required:

  ```bash
  cat <<'EOF' >/tmp/bin/dasharo_ectool
  #!/bin/bash

  if [ "$1" != "flash" ]; then
      /usr/bin/dasharo_ectool "$@"
  fi
  EOF
  chmod +x /tmp/bin/dasharo_ectool
  ```

  Boot DTS again:

  ```bash
  PATH="/tmp/bin:$PATH" dts-boot
  ```

- `S2.5`: run the chosen DTS workflow.
- `S2.6`: after DTS finishes, press `Enter` and, without touching the DUT (i.e., Device Under Test), copy the profile via SSH from the DUT to the host:

  ```bash
  scp root@<DUT_IP>:"/tmp/logs/*profile" ./
  ```

- `S7`: copy the profile to the OSFV directory with profiles, naming it according to the OSFV documentation (in the format `OSFV_TEST_CASE_NAME.profile`).
- `S8`: the developer verifies that the state of the hardware and firmware is expected and correct.
Now OSFV has access to the profile acquired from real hardware, and you can create a mocking configuration according to the workflows described previously. After the mocking configuration is created, add an OSFV DTS E2E test case that uses the generated profile, according to the OSFV DTS documentation. After that, launch the prepared test case according to the same documentation and check the results. You should expect one of the following results:
- Test passes.

- `User input` or `Output for user` fail: OSFV will report that it detected unexpected DTS UI behaviour; this could be caused by issues in the used mocking configuration or by a DTS bug. Example:

  ```text
  ------------------------------------------------------------------------------
  E2E008: novacustom-v540tnd Fuse Platform - DCR                        | FAIL |
  No match found for 'Fusing is irreversible. Are you sure you want to continue? [n/y]' in 2 minutes
  Output:
  7
  Gathering flash chip and chipset information...
  Flash information: Opaque flash chip
  Flash size: 2M
  Waiting for network connection ...
  Network connection have been established!
  Downloading board configs repository...
  Checking if board is Dasharo compatible.
  Getting platform specific GPG key... Done
  No release with fusing support is available for your platform.
  Press Enter to continue..
  ------------------------------------------------------------------------------
  ```

  Here OSFV expects the DTS to print `Fusing is irreversible. Are you sure you want to continue? [n/y]`. But DTS does not print the string, because to do so, the platform `novacustom-v540tnd` would need fusing support, and at the time of testing the platform did not support fusing. Hence, the fail is expected.

- `Hardware state` input, `Firmware state` input, or `Firmware state` output fail: signalled by a profiles mismatch, e.g.:

  ```text
  ------------------------------------------------------------------------------
  E2E043: msi-pro-z690-a-wifi-ddr4 UEFI->Heads Transition - DPP         | FAIL |
  Teardown failed:
  Profiles are not identical!: 1 != 0
  ------------------------------------------------------------------------------
  ```

  This means either the profile collected from hardware is not up to date, there is an issue in the used mocking configuration, or there is a bug in DTS. This particular fail was caused by an issue. The comment in the profile means that though the DTS on QEMU returns exactly the same profile, the profile from the real `msi-pro-z690-a-wifi-ddr4` platform for the DTS workflow `UEFI->Heads Transition - DPP` was not proved trustworthy because of the linked issue. Hence, the trustworthiness of the test result on QEMU cannot be proved.

- Some OSFV bug: try to fix it or report it via the OSFV issues page.
A note about error paths
All the explanations from the chapters before apply to both the success paths
and the error paths, because the mocking, profiles, testing on QEMU,
and all other technologies presented could be used for testing both paths. The
only difference is in the test case implementations:

- Success paths: the execution flow starts when the user selects a DTS workflow and finishes when the chosen workflow finishes and the platform is ready to be rebooted (an example case; not every DTS workflow ends in such a state). Hence, the entire testing technology stack is involved (including the mocking configuration, profiles, etc.).
- Error paths: the execution flow starts when the user selects a DTS workflow but finishes at any point of the DTS workflow execution flow. Hence, a test case for an error path could use a subset of the testing technologies mentioned here.
Some error path test case examples:

- A test case that does not mock a specific platform:

  ```robot
  E2E013.001 Verify that FUM update doesn't start automatically
      [Documentation]    Test that booting via FUM doesn't start update without
      ...    user input
      Execute Command In Terminal    export DTS_TESTING="true"
      Execute Command In Terminal    export TEST_FUM="true"
      Write Into Terminal    dts-boot
      Wait For Checkpoint    You have entered Firmware Update Mode
      Wait For Checkpoint    ${DTS_ASK_FOR_CHOICE_PROMPT}
  ```

  This test case covers an error path that is not platform-dependent and appears at the very beginning of the DTS execution flow: in the `Non-firmware/hardware-specific code` part. Hence, it does not need specific mocking or profile checking.

- A test case that mocks a specific platform, but does not use profiles:

  ```robot
  E2E010.001 Failure to read flash during update should stop workflow
      [Documentation]    Test that update stops if flash read in
      ...    set_flashrom_update_params function fails.
      Export Shell Variables For Emulation
      ...    UEFI Update
      ...    DCR
      ...    ${DTS_PLATFORM_VARIABLES}[novacustom-v540tu]
      ...    ${DTS_CONFIG_REF}
      Execute Command In Terminal    export TEST_LAYOUT_READ_SHOULD_FAIL="true"
      Write Into Terminal    dts-boot
      VAR    @{checkpoints}=    @{EMPTY}
      Add Checkpoint And Write    ${checkpoints}    ${DTS_CHECKPOINT}    ${DTS_DEPLOY_OPT}    bare=${TRUE}
      Add Optional Checkpoint And Write    ${checkpoints}    ${DTS_HEADS_SWITCH_QUESTION}    N
      Add Checkpoint And Write    ${checkpoints}    ${DTS_SPECIFICATION_WARN}    Y
      Add Checkpoint And Write    ${checkpoints}    ${DTS_DEPLOY_WARN}    Y
      Wait For Checkpoints    ${checkpoints}
      Wait For Checkpoint    Couldn't read flash
      Wait For Checkpoint    ${ERROR_LOGS_QUESTION}
  ```

  There is no need to prove the test result trustworthiness or mocking correctness via profiles, because a specific case is being tested that defines the exact things that should be mocked on the `Hardware state` and `Firmware state` inputs (in this case, only the `Firmware state`, actually). And there is no need to check what will land on the `Firmware state` output when the error path triggers DTS workflow execution stopping, because the test case only needs to confirm the execution will stop and the user will be informed accordingly (including asking to report the issue by sending debug logs to 3mdeb, via `${ERROR_LOGS_QUESTION}`). Hence, checking only the `Output for user` is sufficient.
Summary

If you have got here, then I can congratulate you, you are really brave! The DTS E2E testing methodology has been helping the Zarhus Team maintain DTS for quite some time, for example, by detecting issues during releases so they could be fixed as soon as possible, or by removing the overhead of testing on hardware during huge hardware-related changes. And we are very positive that it will be a game-changer for maintaining the DTS code and adding the aforementioned functionalities in the future!
If you want to get even deeper and check all the details of the DTS E2E testing methodology implementation, or follow any other updates on DTS, then I suggest you star and watch activities on the following repositories:
- Dasharo/open-source-firmware-validation for further development of DTS E2E testing methodology.
- Dasharo/meta-dts for updates on DTS.
- Dasharo/dasharo-issues for tracking activities about all Dasharo projects.
- Dasharo/dts-scripts for updates on the core DTS codebase.
Check out other repositories under the Dasharo and Zarhus organizations. I am sure you will find something interesting to contribute to. Consider joining the DTS Matrix community to share your experience and help us make this world more stable and secure.
If you’re looking to boost your product’s performance and
protect it from potential security threats, our team is here to help. Schedule
a call with
us or
drop us an email at contact<at>3mdeb<dot>com to start unlocking the hidden
benefits of your hardware. And if you want to stay up-to-date on all things
firmware security and optimization, be sure to sign up for our newsletter: