Author Archives: Holger Funke

Happy New Year and summary of 2016

I wish you a happy and prosperous new year 2017. Thanks for visiting my blog in 2016 and thanks for your feedback in various forms like comments and discussions. In 2016 I published several articles on this blog, and I plan to continue writing about my activities in the area of protocols and testing in 2017. Before I start with new blog posts in the coming weeks, here is a short summary of 2016.

Most read articles in 2016

  1. Home Page
  2. Java Sample Code to access Smart Card
  3. First results of eMRTD Interoperability Test 2016
  4. Chip Authentication Mapping
  5. ICAO LDS 1.8 or How to detect a file on an ePassport
  6. Sending EnOcean telegram
  7. Chip Authentication Version 3 (CAv3)
  8. Update of BSI TR-03105 Part 5.1 available (V1.4)
  9. eMRTD Test Specifications Overview
  10. Eclipse IoT overview

First time visiting CeBIT as a blogger in 2016

In 2016 I visited CeBIT in Hanover for the first time as an accredited blogger. Several companies supported bloggers, and CeBIT itself set up rooms and areas in the exhibition to work and refresh. ePassports were not a focus of the exhibition, but several companies and organisations demonstrated their ideas concerning IoT and the protocols used there.

Blogger Press Card CeBIT 2016

protocolbench is now a registered trade mark

Last year I decided to protect the name of this blog, so I registered the word mark ‘protocolbench’ at the German Patent and Trade Mark Office (DPMA). Under register number 302015219473 you can find more information about the trade mark.

Certificate of word mark ‘protocolbench’

Support this blog

In the middle of December I decided to use Flattr and Patreon on my blog. These services allow visitors and readers to support my blog in an easy way. If you like this blog, please support my work by donating via Flattr.

New job at secunet

And last but not least, I am working for a new company: since September 2016 I have been working as a Principal at secunet Security Networks AG in the division ‘Homeland Security’. My area of responsibility is similar to the previous one: testing, eID, standardisation and of course GlobalTester. With the new office in Paderborn, secunet has established its ninth location in Germany.

Conclusion and summary

So, in 2017 I will continue to publish new blog posts here in the context of protocols and testing. One of the next articles, for example, will describe the digitalisation of ePassports. If you have ideas for subjects you are interested in, or subjects you work on that you would like to give more visibility, just contact me.

 

How to assure interoperability?

Motivation

Typically, protocols connect two different systems. In an open system with several stakeholders, interoperability between these systems is a fundamental requirement. There are various ways to assure this interoperability. In this blog post I present two popular approaches, in cooperation with my colleague Dr. Guido Frank, who works at the German Federal Office for Information Security (BSI).

Interoperability

Interoperability is a characteristic of a product or system, whose interfaces are completely understood, to work with other products or systems, present or future, in either implementation or access, without any restrictions (Source: Wikipedia).

puzzle - interoperability test

From a system perspective, this means that all implementations need to comply with the same technical specifications. Interoperability is essential because these systems are open systems with different stakeholders. It refers to the ability of protocols to work together across systems.

Crossover Testing vs. Conformity Testing

To ensure interoperability, implementations need to be tested. In general, there are different approaches to test systems or implementations.

Crossover Testing

The scope of a crossover test is to test every system component against all other system components. This procedure makes it possible to detect incompatibilities between existing implementations with a fixed release status.

The effort to perform this kind of test increases disproportionately with every additional instance of the system. Therefore, these tests can practically be performed only with a small number of involved test partners. The following figure illustrates the interaction between different systems in a crossover test.

Crossover Testing

Another problem of crossover testing is maintenance: every new implementation and every new version of an existing implementation must be tested against every corresponding system, which again increases the testing effort significantly. A benefit of crossover testing lies in the early development phase, where it helps developers implement their own system and serves as an indicator that they are on the right track. On the other hand, this kind of testing only covers the positive case with two correct systems under test (“smoke test”). The behaviour of the systems in bad cases is not tested. Additionally, with two different systems involved in a crossover test, it is difficult to decide in case of a failure which system behaves correctly and which implementation has to be changed.

Conformity Testing

The purpose of conformity tests is to verify that a system implements the specifications correctly (i.e. it “conforms” to the specifications). These specifications need to be defined by the stakeholders and are finally implemented, e.g. by test labs, in their conformity test tools.

In this way, the test suites verify the implementation under test with protocol data units that mimic both “correct” and “incorrect” behaviour of the system. The figure below illustrates the interactions between the test suite implementation and the system in a conformity test. The test definitions themselves have to be reviewed to ensure a harmonised interpretation among the test participants.

Conformity Testing

Conclusion

Both approaches allow interoperability with other system components to be assured to a certain extent. But with increasing complexity of the systems to be tested and an increasing number of systems overall, crossover testing becomes more and more extensive. Only conformity testing scales adequately with the complexity and the number of systems. The following diagram illustrates the increasing effort of crossover tests in relation to conformity tests.

Compared efforts of crossover testing and conformity testing
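
The different scaling behaviour is easy to make explicit. The following sketch (illustrative only: it simply counts one crossover pairing per pair of implementations and one conformity run per implementation) prints how quickly the number of crossover pairings outgrows the number of conformity test runs:

public class TestEffortComparison {

	public static void main(String[] args) {
		System.out.println("systems | crossover pairings | conformity runs");
		for (int n = 2; n <= 32; n *= 2) {
			// Crossover: every implementation is tested against every other one.
			long crossoverPairings = (long) n * (n - 1) / 2;
			// Conformity: every implementation is tested once against the common test suite.
			long conformityRuns = n;
			System.out.printf("%7d | %18d | %15d%n", n, crossoverPairings, conformityRuns);
		}
	}
}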

Direct tests between a subset of implementations are useful during the implementation or integration phase of a node. Such tests can also be performed by bilateral agreement between different stakeholders, e.g. via pilots. Experience from such tests can also be used as additional input for conformity tests. Crossover testing or a central coordination of such tests is not necessary for this purpose. As soon as there are several system components to be tested, conformity testing should be chosen.

The benefits of such an approach to interoperability testing can also be seen in the so-called ‘InterOp tests‘ that have been performed for more than a decade in the context of eMRTD. Detailed failure analysis allows the stability of the whole eMRTD system to be improved. Additionally, the results of the ‘InterOp tests’ help not only to improve the stability of ePassports and inspection systems but also to improve the quality of (test) specifications and test tools.

Another open system with several vendors is banking. All cash cards or credit cards must fulfill international test specifications. This way of interoperability testing allows customers to use their cards worldwide at cash machines of various banks.

To assure interoperability systematically, it is necessary to set up conformity test specifications that systematically test the requirements defined in the technical specifications. The tests should not only define good cases but also bad cases that mirror the pitfalls typically occurring in a system. Only this way can real system-wide interoperability be assured.

Setup of test specification

Important components of a conformance test specification are:

  • Description of general test requirements
  • Test setup / Testing environment
  • Definition of suitable test profiles / implementation profiles
  • Implementation Conformance Statement (ICS)
  • Definition of testing or configuration data
  • Definition of test cases according to a unified data structure
  • Each test case should concentrate on a single feature to be tested

The following structure of a test specification has been established since the beginning of eMRTD testing in 2005. It is based on the ISO/OSI layer model where data is tested on layer 7 and protocols are tested on layer 6.

Typical structure of a test case in this context:

  • Test case ID: unique identifier for each test case
  • Purpose: objective of the test case
  • Version: current version of this test case, independent of the test specification
  • Reference: where this feature / behaviour is specified
  • Preconditions: setup of test case
  • Test scenario: description of test case, step by step
  • Postconditions: teardown of test case

Test cases can be combined into test suites to cluster test cases with similar topics or objectives. As the test specifications need to be implemented in suitable testing tools, it is useful to define the test cases in a way that eases their implementation, e.g. via XML using a suitable XML scheme, as sketched below.
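
As a small illustration of such a machine-readable test case definition, the following sketch models the structure listed above as a Java class and serialises it to XML with JAXB. The element names and the sample values are purely illustrative and do not follow any official XML scheme; the JAXB reference implementation (javax.xml.bind, part of Java 8) is assumed to be available.

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative structure only - the element names do not follow any official XML scheme.
@XmlRootElement(name = "testcase")
public class TestCaseDefinition {

	@XmlElement public String id;             // unique identifier for the test case
	@XmlElement public String purpose;        // objective of the test case
	@XmlElement public String version;        // version of the test case itself
	@XmlElement public String reference;      // where the feature / behaviour is specified
	@XmlElement public String preconditions;  // setup of the test case
	@XmlElement public String testScenario;   // step-by-step description
	@XmlElement public String postconditions; // teardown of the test case

	public static void main(String[] args) throws Exception {
		TestCaseDefinition tc = new TestCaseDefinition();
		tc.id = "EXAMPLE_A_01";
		tc.purpose = "Check that the sample returns status word 9000 after command X";
		tc.version = "1.0";
		tc.reference = "Example specification, section 4.2";
		tc.preconditions = "Sample personalised with the default configuration";
		tc.testScenario = "1. Send command X. 2. Verify the status word of the response.";
		tc.postconditions = "None";

		Marshaller marshaller = JAXBContext.newInstance(TestCaseDefinition.class).createMarshaller();
		marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
		marshaller.marshal(tc, System.out); // prints the test case as XML
	}
}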

 

eMRTD Test Specification Overview

Currently I’m preparing a project where an ePassport has to be tested. These tests start with the booklet and end with the chip. During the preparation the need for a test specification overview popped up. This need was the root of a new service here on this blog: an overview of all current specifications in the domains of this blog starting with eMRTDs and their corresponding inspection systems.
To list all current specifications I’ve added a new page called ‘test specifications‘ in the menu above. I will keep this list up to date: with every new version of a test specification I will update the page. Currently the list contains test specifications released by ICAO and BSI. Both organisations are at the forefront of implementing tests in the context of eMRTD and the corresponding back-end systems. The certification schemes of BSI and ANSSI are also based on these test specifications.

Test specifications are “living documents”, which leads to modifications over time. You need the test specifications listed here to prove conformity and finally to certify your passport or inspection system.

With every new protocol you need additional or modified test cases in the specifications, and maintenance is also important to keep the test cases up to date. Additionally, I will list test specifications of other domains like IoT in the near future.

So have a look at this page next time you’re back on this blog.

Update of RF and Protocol Testing Part 4 V2.10 online

Introduction

Simultaneously with Part 3, ICAO also released version 2.10 of the test specification ‘RF and Protocol Testing Part 4‘ to test the interoperability of inspection systems (IS) in the context of eMRTD. Since the Technical Advisory Group (TAG) of ICAO has endorsed the update on the ICAO website, the test specification can now be referenced officially. Version 2.10 of the test specification contains some significant modifications compared with the previous version 1.01 released in 2013:

  • Support of protocol PACE-CAM:
    • New test suite ISO7816_G to test Chip Authentication,
    • New default configuration including Chip Authentication,
    • Updated implementation conformance statement (ICS) to specify IS supporting PACE-CAM,
    • Updated list of abbreviations,
  • Tests for LDS 1.8,
  • Updated references concerning Doc9303 7th edition,
  • Added Advanced Inspection Procedure (AIP),
  • Additionally, there are some clarifications and minor editorial changes.

Furthermore, you can find a more detailed list of the changes and modifications in version 2.10 below.

New test cases in Version 2.10 Update

Basically, the new test cases test the PACE-CAM protocol or make use of the new LDS 1.8 data structure, where the LDS version number is stored in EF.SOD (in addition to EF.COM); a small sketch showing how this version field can be read from EF.COM follows the list below.

  • ISO7816_C_29: PACE-CAM with missing tag 8Ah but correct ECAD
  • ISO7816_C_30: PACE-CAM with incorrectly encoded ECAD (no octet string)
  • ISO7816_C_31: PACE-CAM with wrong ECAD
  • ISO7816_C_32: PACE-CAM with wrong tag 8Ah (use 8Bh) but correct ECAD
  • ISO7816_C_33: PACE-CAM with correct tag 8Ah but missing ECAD
  • ISO7816_C_34: PACE-CAM with Passive Authentication
  • ISO7816_C_35: Return additional tag 8Ah during PACE-GM
  • ISO7816_C_36: Use DG14 without SecurityInfo during PACE-CAM
  • ISO7816_C_37: Use EF.CardSecurity with wrong ChipAuthenticationPublicKey during PACE-CAM
  • ISO7816_C_38: Use EF.CardSecurity without ChipAuthenticationPublicKeyInfo during PACE-CAM
  • ISO7816_C_39: Check supported standardized Domain Parameters with Chip Authentication Mapping
  • ISO7816_G_01: Chip Authentication with DH
  • ISO7816_G_02: Chip Authentication with ECDH
  • ISO7816_G_03: DG14 with one key reference
  • ISO7816_G_04: DG14 with two key references
  • ISO7816_G_05: DG14 with three key references
  • ISO7816_G_06: DG14 with invalid key reference
  • ISO7816_G_07: DG14 with corrupted DH public key
  • ISO7816_G_08: DG14 with corrupted ECDH public key
  • ISO7816_G_09: Use old session keys after Chip Authentication
  • ISO7816_G_10: Verify lifetime of ephemeral keys
  • ISO7816_G_11: DG14 with invalid DH public key specification
  • ISO7816_G_12: DG14 with invalid ECDH public key specification
  • ISO7816_G_13: ChipAuthenticationPublicKeyInfo: key reference does not match key reference in ChipAuthenticationInfo
  • ISO7816_G_14: Chip Authentication with Extended Length
  • ISO7816_G_15: Use various status words for invalid key reference
  • LDS_A_10: EF.COM with LDS version 1.8
  • LDS_D_35: EF.SOD with LDS Version 1.8
  • LDS_D_36: Security Object with LDS Version 1.8 but wrong Version number
  • LDS_D_37: Security Object with LDS Version 1.7 but Version number 1
  • LDS_D_38: EF.SOD with future LDS Version 1.9
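
As mentioned above, LDS 1.8 stores the LDS version number in EF.SOD in addition to EF.COM, where it is encoded as four ASCII digits (‘aabb’ for version aa.bb) in the data object with tag 5F01. The following sketch shows how this field could be extracted; the EF.COM value is a shortened, hypothetical example and the TLV walk is deliberately simplified (two-byte tags and short-form lengths only):

public class LdsVersionFromEfCom {

	public static void main(String[] args) {
		// Hypothetical, shortened EF.COM: template 60 containing 5F01 = "0108" (LDS 1.8)
		// and 5F36 = "040000" (Unicode version); a real file also contains the 5C tag list.
		byte[] efCom = {
				0x60, 0x10,
				0x5F, 0x01, 0x04, '0', '1', '0', '8',
				0x5F, 0x36, 0x06, '0', '4', '0', '0', '0', '0'
		};

		// Simplified TLV walk inside the outer template.
		int offset = 2;
		while (offset < efCom.length) {
			int tag = ((efCom[offset] & 0xFF) << 8) | (efCom[offset + 1] & 0xFF);
			int length = efCom[offset + 2] & 0xFF;
			if (tag == 0x5F01) {
				String value = new String(efCom, offset + 3, length);
				System.out.println("LDS version: " + Integer.parseInt(value.substring(0, 2))
						+ "." + Integer.parseInt(value.substring(2)));
			}
			offset += 3 + length;
		}
	}
}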

Modified test cases in Version 2.10 Update

Due to the new document structure of version 2.10, it’s difficult to detect all modifications. Therefore please be aware that the list of modified test cases may not be complete and there might be more changes compared to the previous version 1.01.

  • ISO7816_C_04: Added new OID for PACE-CAM in table corresponding to test case
  • ISO7816_D_07: Test case deleted

With the release of this test specification, version 2.10 is relevant for certification. So from now on, your inspection system must fulfill these conformity tests to achieve a certificate.

Update of RF and Protocol Testing Part 3 V2.10 online

Introduction

There is an update of ICAO’s test specification ‘RF and Protocol Testing Part 3‘ available. The specification focuses on conformity testing and protocol testing for eMRTDs implementing protocols like BAC and PACE.

The Technical Advisory Group (TAG) of ICAO endorsed the updated release on the ICAO website, so from now on the test specification can be referenced officially. In version 2.10 of the test specification there are some major modifications:

  • Additional test cases for PACE-CAM (this includes modifications of existing test cases and also new test cases especially for PACE-CAM).
    • New test suite 7816_S to verify access rights (read and select) of EF.CardSecurity.
    • New test suite LDS_K to test presence and coding of SecurityInfo structures in EF.CardSecurity
  • The referenced documents are updated to Doc 9303 Edition 7 and old specifications including supplements are replaced.
  • With the 7th edition of Doc 9303 the wording changed from ‘PACEv2’ and ‘SAC’ to ‘PACE’.
  • And of course there are some minor editorial corrections.

The interim version 2.08 of this test specification was used during the interoperability test in London in 2016 (first results of this event can be found in a previous post). This version was prepared at the meeting of ISO WG3 TF4R in Berlin to establish a valid version for the test event. Version 2.10 includes all these updates and some minor changes. In the following, the updates in version 2.10 are listed in more detail.

New test cases in layer 6

  • ISO7816_O_55: Accessing the EF.CardSecurity file with explicit file selection.
  • ISO7816_O_56: Accessing the EF.CardSecurity file with implicit file selection (ReadBinary with SFI).
  • ISO7816_O_57: Accessing the EF.CardSecurity file with ReadBinary. The test verifies the enforcement of SM after the PACE-CAM protocol has been performed successfully.
  • ISO7816_O_58: Accessing the EF.CardSecurity file with ReadBinary. The test verifies the enforcement of SM after a PACE protocol different from PACE-CAM has been performed successfully.
  • ISO7816_P_78: Positive test with a complete sequence of PACE without Chip Authentication Mapping commands and with MRZ password. The tag 0x8A during PACE-GM and PACE-IM MUST NOT be returned.
  • ISO7816_S_01: Accessing EF.CardSecurity with explicit file selection and Read Binary.
  • ISO7816_S_02: Accessing EF.CardSecurity with implicit file selection (ReadBinary with SFI).
  • ISO7816_S_03: Accessing EF.CardSecurity with explicit file selection and Read Binary OddIns.
  • ISO7816_S_04: Accessing EF.CardSecurity with implicit file selection (ReadBinary OddIns with SFI).

Modified test cases in layer 6

  • ISO7816_P_01: New steps 6 and 7 added for PACE-CAM; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_02: New steps 6 and 7 added for PACE-CAM; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_03: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_14: Step 6 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_25: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_26: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_27: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_28: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_41: Adapted profile to handle PACE-CAM.
  • ISO7816_P_42: Adapted profile to handle PACE-CAM.
  • ISO7816_P_43: Adapted profile; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_44: Adapted profile; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_45: Adapted profile; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_46: Adapted profile; step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_49: Adapted profile to handle PACE-CAM.
  • ISO7816_P_50: Adapted profile to handle PACE-CAM.
  • ISO7816_P_68: Adapted purpose.
  • ISO7816_P_73: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_P_74: Step 5 returns the new data object 0x8A used in PACE-CAM.
  • ISO7816_R_05: Correction in referenced RFC.
  • ISO7816_R_06: Correction in referenced RFC.

New test cases in layer 7

  • LDS_E_09: Test that EF.DG14 contains at least one valid set of SecurityInfos for Chip Authentication. A chip supporting PACE-CAM must also support CA.
  • LDS_I_05: Verify that EF.CardAccess contains at least one valid PACEInfo for PACE-GM or PACE-IM as an additional mapping procedure if PACE-CAM is supported.
  • LDS_K_01: Test the ASN.1 encoding of the SecurityInfos.
  • LDS_K_02: Verify the ASN.1 encoding of the ChipAuthenticationPublicKey.
  • LDS_K_03: Test the coherency between the EF.CardSecurity and EF.CardAccess.
  • LDS_K_04: Verify that the parameterID also denotes the ID of the Chip Authentication key used, i.e. the chip MUST provide a ChipAuthenticationPublicKeyInfo with keyID equal to parameterID.

Modified test cases in layer 7

  • LDS_I_02: Added OIDs for PACE-CAM and new step 3 (to check that a valid OID is present for each declared configuration).
  • LDS_I_03: Added OID for PACE-CAM.
  • LDS_I_04: Test case deleted.
  • LDS_J_04: Correction in referenced RFC.

Previous plans to migrate this test specification to an ISO document have been cancelled for political reasons. Part 3 (eMRTD) and Part 4 (inspection systems) will remain ICAO documents, whereas Part 1 (durability of ePassports) and Part 2 (contactless interface) are still being migrated to ISO documents (ISO 18745-1 and ISO 18745-2).

Eclipse IoT overview

Introduction

A few days before the Eclipse Neon release, the Eclipse Foundation released several projects in the context of IoT (Internet of Things). The Eclipse IoT working group is engaged in projects like SmartHome, Kura, Paho and OM2M.

The Internet of Things is all about connecting devices (sensors and actuators) to the internet. You can find these devices in automobiles, wearables, industrial factories, homes, etc. A key challenge is the complexity of implementing an IoT solution, where you need to deal with various hardware platforms, manage the IoT gateways that connect the devices to the internet, manage connectivity and interoperability, and integrate the data into existing systems and databases.

An important way to reduce this complexity is to create reusable libraries and frameworks. These frameworks provide abstractions and implement the key features. Exactly this is the approach of Eclipse IoT: it delivers several technologies combined in an open source stack with all the key features and standards you need to develop your own IoT solution. Furthermore, the Eclipse Foundation has set up a community with more than 200 contributors to assure the enhancement of the IoT stack.

The current release includes Eclipse SmartHome Version 0.8 and Eclipse Paho Version 1.2. The projects Eclipse Kura and Eclipse OM2M will be available later this month. Additionally, the foundation has started a new project proposal called Eclipse Kapua. Its goal is to create a modular integration platform for IoT devices and smart sensors, building a bridge between Operational Technology and Information Technology.

Eclipse IoT

The Eclipse IoT ecosystem contains standards, gateways and of course frameworks. The following paragraphs describe these modules; for each of them you can find a reference to an Eclipse project that is relevant in this domain. Please keep in mind that this list is not complete; there are currently 24 different projects available in the context of Eclipse IoT.

IoT stack

The following graphic describes the structure of the Eclipse IoT stack. The stack includes frameworks, open communication standards and a gateway to provide management services. Most modules are based on Java and OSGi. OSGi describes a modular system and service platform for Java that implements a complete and dynamic component model. The Eclipse IoT stack with its components thus addresses all key requirements in IoT: interoperability, security and connectivity.

Eclipse Open IoT Stack

Open communication standards

It’s an elementary feature in context of IoT to provide several mechanisms for protocols used in this domain. All devices must be connected, secured and managed. On this way the Eclipse projects in the IoT ecosystem supports the relevant protocols and standards:

  • MQTT: Eclipse Paho delivers MQTT client implementations (Java, C/C++, JavaScript). The corresponding MQTT broker is implemented in Eclipse Mosquitto (implementation in C). MQTT (Message Queuing Telemetry Transport) is a light-weight publish/subscribe messaging protocol specified in ISO/IEC 20922. Moreover, the new version 1.2 of Paho includes, for example, WebSocket support for the Java and Python clients (a minimal publish example follows this list).
  • CoAP: Eclipse Californium delivers the CoAP standard in Java, including DTLS support.
  • Lightweight M2M: The server side of LwM2M is delivered by Eclipse Wakaama (C/C++) and the client side by Eclipse Leshan (Java). Wakaama provides an API for server applications to send commands to registered LwM2M clients. Leshan relies on the Eclipse IoT Californium project for the CoAP and DTLS implementation.
  • DNSSEC: Eclipse Tiaki provides a DNSSEC implementation in Java. Domain Name System Security Extensions (DNSSEC) is specified by IETF for securing certain kinds of information provided by the Domain Name System (DNS).
  • DTLS: Eclipse TinyDTLS implements the Datagram Transport Layer Security (DTLS) standard in C. The implementation covers both the client and the server state machine. DTLS in general is a communication protocol that provides security for datagram-based protocols like UDP.
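
To give an impression of how lightweight the client side of MQTT is, the following sketch publishes a single message with the Paho Java client, as mentioned in the MQTT item above. Broker address, client ID and topic are placeholders, and the Paho artefact org.eclipse.paho.client.mqttv3 is assumed to be on the classpath:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttPublishSample {

	public static void main(String[] args) throws Exception {
		// Placeholder broker address - replace with your own broker, e.g. a local Mosquitto instance.
		String broker = "tcp://localhost:1883";
		MqttClient client = new MqttClient(broker, "protocolbench-sample", new MemoryPersistence());

		client.connect();

		MqttMessage message = new MqttMessage("hello from Eclipse Paho".getBytes());
		message.setQos(1); // at-least-once delivery
		client.publish("sensors/demo/temperature", message);

		client.disconnect();
		client.close();
	}
}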

Gateways

A gateway manages the connectivity between devices and provides a platform for the applications on top. Eclipse Kura offers a set of services that help to manage the devices and applications deployed onto IoT gateways. Like other Eclipse projects, the gateway is based on Java and OSGi services. Kura manages the cloud connectivity, supports different protocols and configures the network.

Frameworks

Eclipse IoT provides a set of frameworks:

  • Eclipse SmartHome is a set of Java and OSGi services for building and running Smart Homes. It is designed to run on “simple” devices like the Raspberry Pi, BeagleBone Black or Intel Edison. Additionally, this framework supports the diverse protocols typically used in Smart Homes, like EnOcean, KNX or ZigBee, which above all allows the devices and systems to communicate with each other. Moreover, the new version 0.8 of Eclipse SmartHome now contains a new REST interface that allows easier interaction with clients (a minimal query example follows this list). Furthermore, new bindings are supported, e.g. for DigitalStrom.
  • Eclipse SCADA is a set of Java and OSGi services that implements nearly all of the services required to run a SCADA (supervisory control and data acquisition) system. As one type of Industrial Control System (ICS), Eclipse SCADA delivers functions for data acquisition, monitoring, visualization, etc. Additionally, the framework supports typical industrial automation standards like Modbus, Siemens PLC, SNMP and OPC.
  • Eclipse OM2M is an implementation of ETSI’s M2M standard. This implementation provides a horizontal Service Capability Layer (SCL) that can be deployed in an M2M network.
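
As a small example of the new REST interface mentioned in the Eclipse SmartHome item above, the following sketch queries the list of configured items. Host, port and the /rest/items path are assumptions about a locally running SmartHome-based runtime and may differ in your installation:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SmartHomeRestSample {

	public static void main(String[] args) throws Exception {
		// Assumed endpoint of a locally running Eclipse SmartHome based runtime.
		URL url = new URL("http://localhost:8080/rest/items");

		HttpURLConnection connection = (HttpURLConnection) url.openConnection();
		connection.setRequestMethod("GET");
		connection.setRequestProperty("Accept", "application/json");

		try (BufferedReader reader = new BufferedReader(
				new InputStreamReader(connection.getInputStream()))) {
			String line;
			while ((line = reader.readLine()) != null) {
				System.out.println(line); // JSON list of all configured items
			}
		}
		connection.disconnect();
	}
}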

Summary

In conclusion, Eclipse IoT provides an open source stack including all relevant frameworks, protocols and standards needed to develop your own IoT application. The stack allows you to develop new devices but also to modernise existing ‘legacy’ devices.

First results of eMRTD Interoperability Test 2016

During the 10th Security Document World 2016, an additional interoperability test for eMRTDs with PACE took place in London. In the context of ePassports, this was the 14th such event since the first one in Canberra in 2003. This time two test labs, 17 document providers and twelve inspection system providers were involved. Here I will focus on the conformity test, including test labs and document providers, and on the InteropTest results. The event was organised by colleagues at secunet. The following document providers delivered 27 samples in total:

Logo of SDW InteropTest

  • Arjo Systems
  • Atos IT Solutions and Services
  • Bundesdruckerei
  • Canadian Banknote Company
  • cryptovision
  • De La Rue
  • Gemalto
  • ID&Trust
  • Imprimerie Nationale Group
  • Iris Corporation Berhad
  • MaskTech
  • Morpho
  • NXP Semiconductors
  • Oberthur Technologies
  • PAV Card
  • Polygraph combine ‘Ukraina’
  • PWPW

And the following test laboratories performed a subset of tests focusing on PACE (and of course PACE-CAM):

  • Keolabs (France)
  • HJP Consulting / TÜViT (Germany)

The test cases performed during the event were based on ICAO’s test specification ‘RF Protocol and Application Test Standard for eMRTD – Part 3‘ Version 2.08 RC2, including some minor adaptations based on the last WG3TF4 meeting in Berlin. The final version 2.08 of this test specification will be released soon and the deltas will be listed in an additional blog post here. With a focus on PACE-CAM, the following test suites were performed by both test labs:

  • Test Unit 7816_O (Security Conditions for PACE-protected eMRTDs)
  • Test Unit 7816_P (Password Authenticated Connection Establishment)
  • Test Unit 7816_Q (Select and Read EF.CardAccess)
  • Test Unit 7816_S (Select and Read EF.CardSecurity)
  • Test Unit LDS_E (Data Group 14)
  • Test Unit LDS_I (EF.CardAccess)
  • Test Unit LDS_K (EF.CardSecurity)

Some statistics concerning the samples:

  • PACE-CAM was supported in the following types:
    • Generic Mapping (GM), Chip Authentication Mapping (CAM): 18 samples
    • Integrated Mapping (IM), CAM: 4 samples
    • GM, IM and CAM: 4 samples
  • LDS:
    • 25 samples used LDS1.7
    • 1 sample used LDS1.8
    • 1 sample used LDS2.0 (with backward compatibility to LDS1.7)

In the preliminary InteropTest results presented by Michael Schlüter during the SDW, he mentioned that 8502 test cases were performed during conformity testing by the test labs and that 98% of the relevant test cases were passed by the samples. Additionally, the test results of both labs were fairly consistent. The test case that caused the most failures verifies the ChipAuthenticationPublicKey in EF.CardSecurity (LDS_K_02). Here we need some clarification in Doc 9303 and finally in the test specification.

During the crossover test, three problems were detected. First, the sequence of PACE, CA and TA was performed correctly, while the sequence of PACE-CAM and TA caused some problems during the inspection procedure of the readers. This might be due to the fact that PACE-CAM is specified in an ICAO document and TA in a BSI document. Some inspection systems also had problems with alternative File IDs for EF.CVCA. The alternative FID can be defined in TerminalAuthenticationInfo (see A.1.1.3 in TR-03110 Part 3) and must be used by the inspection systems to address and read EF.CVCA. But a bad surprise in the InteropTest results was that around 50% of the inspection systems do not perform Passive Authentication (PA) correctly. During the preparation of the InteropTest, a wrong CSCA certificate was distributed, and 50% of the systems did not detect this wrong certificate; in other words, 50% of the inspection systems failed Passive Authentication! During the conference, Dr. Uwe Seidel (Bundeskriminalamt, BKA) noted that this number mirrors the real world and that PA is in fact currently a real problem in border control.

The InteropTest results can be summed up in two statements:

  1. There is a very good quality of the eMRTD samples.
  2. Reader vendors have still some work to do, especially to implement Passive Authentication correctly.

A detailed test report of this event and the InteropTest results will be published by secunet in June 2016.

Update: The final test report can now be downloaded here (after a short registration at the SDW website).

Sending EnOcean telegram

EnOcean is an energy harvesting wireless technology used primarily in building automation systems and smart homes. All modules based on this technology combine micro energy converters with ultra-low-power electronics and enable wireless communication between battery-less wireless sensors, actuators and even gateways. The communication is based on so-called ‘EnOcean telegrams’. Since 2012, the EnOcean radio standard has been specified as the international standard ISO/IEC 14543-3-10.

The EnOcean Alliance is an association of several companies that develops and promotes self-powered wireless monitoring and control systems for buildings by formalizing the interoperable wireless standard. On its website the alliance offers some of its technical specifications to everybody.

To send an EnOcean telegram you need a piece of hardware connected to your host, e.g. an EnOcean USB300 USB stick for your personal computer or an EnOcean Pi SoC-Gateway TRX 8051 for your Raspberry Pi. In this sample we use the USB300 to send a telegram using a small piece of software implemented in Java. The following photograph shows a USB300 stick:

EnOcean USB300 Stick

The EnOcean radio protocol (ERP) is optimised to transmit information using extremely little power, generated e.g. by piezo elements. The information sent between two devices is called an EnOcean telegram. Depending on the telegram type and the function of the device, the payload is defined in EnOcean Equipment Profiles (EEP). The technical properties of a device are described by three profile elements:

  1. The ERP radio telegram type: RORG (range: 00…FF, 8 Bit)
  2. Basic functionality of the data content: FUNC (range 00…3F, 6 Bit)
  3. Type of device in its individual characteristics: TYPE (range 00…7F, 7 Bit)

Since version 2.5 of EEP the various Radio-Telegram types are grouped ORGanisationally:

Telegram      RORG  Description
RPS           F6    Repeated Switch Communication
1BS           D5    1 Byte Communication
4BS           A5    4 Byte Communication
VLD           D2    Variable Length Data
MSC           D1    Manufacturer Specific Communication
ADT           A6    Addressing Destination Telegram
SM_LRN_REQ    C6    Smart Ack Learn Request
SM_LRN_ANS    C7    Smart Ack Learn Answer
SM_REC        A7    Smart Ack Reclaim
SYS_EX        C5    Remote Management
SEC           30    Secure Telegram
SEC_ENCAPS    31    Secure Telegram with RORG encapsulation

In this context we use the type VLD (Variable Length Data) to have a closer look at EnOcean telegrams. VLD telegrams can carry a variable payload of data. The following graphic shows the structure of an EnOcean telegram (based on EnOcean Serial Protocol 3, short: ESP3):

Structure of an EnOcean telegram

ESP3 is a point-to-point protocol with a packet data structure. Every packet (or frame) consists of a header, data and optional data. As you can see in the structure, the data length is encoded in the header with two bytes. This suggests a maximum telegram length of 65535 bytes. Unfortunately, the maximum length of such a telegram is limited to 21 bytes of data due to the limitations of the low-power electronics. After subtracting the overhead carried in the data field, the resulting net payload finally has a size of 14 bytes. The following code snippet demonstrates how to send a telegram with the 14-byte payload ’00 11 22 33 44 55 66 77 88 99 AA BB CC DD’. First, let’s have a look at the telegram:

Telegram: 55 00 14 07 01 65 D2 00 11 22 33 44 55 66 77 88 99 AA BB CC DD 00 00 00 00 00 01 FF FF FF FF 44 00 0B
Sync. byte: 55
Header: 00 14 07 01
  Length data: 20 (0x14)
  Length optional data: 7 (0x07)
  Packet type: 01
CRC8 header: 65
Data: D2 00 11 22 33 44 55 66 77 88 99 AA BB CC DD 00 00 00 00 00
  RORG: D2
  ID: 00 00 00 00
  Status: 00
  Data payload: 00 11 22 33 44 55 66 77 88 99 AA BB CC DD
Optional data: 01 FF FF FF FF 44 00
  SubTelNumber: 01
  Destination ID: FF FF FF FF
  Security: 00
  Dbm: 68 (0x44)
CRC8 data: 0B
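
The two CRC8 values in this frame are calculated over the four header bytes and over the data plus optional data, respectively. The following sketch computes the checksum with the polynomial 0x07 and initial value 0x00 (as used by ESP3, to the best of my knowledge) and reproduces the header checksum 0x65 of the telegram above:

public class Crc8Demo {

	// CRC8 with polynomial x^8 + x^2 + x + 1 (0x07), initial value 0x00, as used by ESP3.
	static int crc8(byte[] data) {
		int crc = 0x00;
		for (byte b : data) {
			crc ^= (b & 0xFF);
			for (int i = 0; i < 8; i++) {
				crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF : (crc << 1) & 0xFF;
			}
		}
		return crc;
	}

	public static void main(String[] args) {
		// Header bytes: 2-byte data length, optional data length, packet type.
		byte[] header = { 0x00, 0x14, 0x07, 0x01 };
		System.out.printf("CRC8 of header: %02X%n", crc8(header)); // expected: 65
	}
}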

The following Java code demonstrates one way to send this telegram via the USB300. The code snippet uses the RXTX library to access the serial port.

import java.io.OutputStream;

import gnu.io.CommPort;
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;

public class EnOceanSample {
	
	static SerialPort serialPort;
	static String serialPortName = "COM3";

	public static void main(String[] args) {
		
		byte[] sampleTelegram = new byte[] { (byte) 0x55, (byte) 0x00, (byte) 0x14, (byte) 0x07, (byte) 0x01, (byte) 0x65, 
				(byte) 0xD2, (byte) 0x00, (byte) 0x11, (byte) 0x22, (byte) 0x33, (byte) 0x44, (byte) 0x55, (byte) 0x66, (byte) 0x77, (byte) 0x88, (byte) 0x99, (byte) 0xAA, (byte) 0xBB, (byte) 0xCC, (byte) 0xDD, 
				(byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x01, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x44, (byte) 0x00, (byte) 0x0B};
		
		try {
			CommPortIdentifier portIdentifier = CommPortIdentifier
					.getPortIdentifier(serialPortName);
			if (portIdentifier.isCurrentlyOwned()) {
				System.err.println("Port is currently in use!");
			} else {
				CommPort commPort = portIdentifier.open("EnOceanSample", 3000);
	
				if (commPort instanceof SerialPort) {
					serialPort = (SerialPort) commPort;
	
					// settings for EnOcean:
					serialPort.setSerialPortParams(57600, SerialPort.DATABITS_8,
							SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
					
					System.out.println("Sending Telegram...");
					OutputStream outputStream = serialPort.getOutputStream();
					outputStream.write(sampleTelegram);
					outputStream.flush();
					outputStream.close();
					serialPort.close();
					System.out.println("Telegram sent");
					
				} else {
					System.err.println("Only serial ports are handled!");
				}
			}
		}
		catch (Exception ex) {
			// Report communication problems instead of silently swallowing them
			ex.printStackTrace();
		}
	}
}

On this way it’s not possible to send telegrams with a huge payload. If the information to be sent is longer than the described limit above, you can use a mechanism called ‘chaining’. To chain telegram a special sequence of telegrams is necessary. All protocol steps for chaining are specified in EO3000I_API.

Attention: In Europe, EnOcean products use the frequency 868.3 MHz. This frequency can be used by everybody free of charge, but the transmission time is limited; in Germany, for example, it is only allowed to send for 36 seconds within one hour.

In one of my previous blog posts I described how to receive EnOcean telegrams. Now, based on the information above, you can send your own EnOcean telegrams in the context of your Smart Home or your IoT environment.

New GlobalTester Release 3.1

GlobalTester is an open source tool to test (not only) smart cards. It is developed with Eclipse and can be used as an Eclipse plugin or standalone as an Eclipse RCP application. With the new release, GlobalTester is no longer limited to chip cards: from now on you can test various protocols, e.g. in the context of Smart Home or IoT.

Here is a subset of the benefits of the new version:

  • Supports XML scheme according to BSI TR-03105 for test cases
  • Sample Configuration of test objects allows switching between test objects and persistence of test information
  • Easy sharing with test houses and certification authorities
  • New, intuitive user interface and handling
  • Extensive cheat sheets and new user guidance
  • Test Campaign allows easy reproduction of a test run and persistence of test results

The following video clip gives a first impression of the new user interface and the functionality (Video concept and recording by Anke Larkworthy).

If you are interested in the source code: we host the freely available basic version of GlobalTester on GitHub. Please feel free to download the code and join the community. You are always welcome :)

 

 

Chip Authentication Version 3 (CAv3)

This post describes the new version 3 of the well-known Chip Authentication protocol, which is used in the context of eID to authenticate the chip and to establish a strong secure channel between chip and terminal.

In the context of the European eIDAS regulation, the German BSI and the French ANSSI have specified in TR-03110 a new version 3 of the Chip Authentication protocol (CAv3). It is based on an ephemeral-static Diffie-Hellman key agreement that provides both secure communication and unilateral authentication of the chip. The new protocol is an alternative to Chip Authentication Version 2 and Restricted Identification (RI) that provides additional features. CAv3 provides the following benefits (see TR-03110 Part 2):

  • message-deniable strong explicit authentication of the eIDAS token and of the provided sector-specific identifiers towards the terminal,
  • pseudonymity of the eIDAS token without the need of using the same keys on several chips,
  • possibility of whitelisting eIDAS token (even in case of a compromised group key),
  • implicit authentication of stored data by performing Secure Messaging using new session keys derived during CAv3.

Before CAv3 is started, the well-known protocol Terminal Authentication Version 2 (TAv2) must be performed, because the terminal’s ephemeral key pair generated during TAv2 is used during CAv3. It is also recommended that Passive Authentication is performed before CAv3 to assure the authenticity of the chip’s public key.

The following table describes the commands used during CAv3 and PSA, respectively (source: ISO/IEC 19286):

Command description of the CAv3 protocol (Source: ISO/IEC 19286)

The CAv3 protocol consists of the following two steps, involving the terminal and the eIDAS token:

  1. Perform Key Agreement (based on Anonymous Diffie Hellman (ADH))
    • Key agreement is performed in this step of the protocol:
      • MSE:SET AT with CA-OID and reference to private key
      • GENERAL AUTHENTICATE with dynamic authentication data (ephemeral public key)
  2. Perform Pseudonymous Signature Authentication (PSA)
    • Pseudonymous Signature is computed in this step of the protocol:
      • MSE:SET AT with PSA-OID and reference to private key
      • GENERAL AUTHENTICATE with dynamic authentication data (public key for domain-specific identifier)

Additionally, the received sector-specific identifier can be checked against a blacklist (or whitelist).

In this way, the new CAv3 protocol can additionally be used to sign data under a chip- and sector-specific pseudonym, as an alternative to Restricted Identification.
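
For readers who want a feeling for what these two command pairs look like on APDU level, the following sketch builds them with javax.smartcardio. The OID, the key reference and the public key values are placeholders, and the TLV encoding is simplified (short-form lengths only); the exact values and encodings are defined in TR-03110 Part 2 and ISO/IEC 19286 and must be taken from there:

import javax.smartcardio.CommandAPDU;

public class CaV3CommandSketch {

	// Builds the '80' (OID) and '84' (private key reference) data objects for MSE:SET AT.
	// Simplified TLV encoding: short-form lengths only.
	static byte[] mseSetAtData(byte[] oid, byte keyReference) {
		byte[] data = new byte[2 + oid.length + 3];
		data[0] = (byte) 0x80;              // tag: cryptographic mechanism reference (protocol OID)
		data[1] = (byte) oid.length;
		System.arraycopy(oid, 0, data, 2, oid.length);
		data[2 + oid.length] = (byte) 0x84; // tag: reference of the private key
		data[3 + oid.length] = (byte) 0x01;
		data[4 + oid.length] = keyReference;
		return data;
	}

	// Wraps a public key into the dynamic authentication data template '7C' for GENERAL AUTHENTICATE.
	static byte[] dynamicAuthenticationData(byte[] publicKey) {
		byte[] data = new byte[4 + publicKey.length];
		data[0] = (byte) 0x7C;              // dynamic authentication data template
		data[1] = (byte) (2 + publicKey.length);
		data[2] = (byte) 0x80;              // data object carrying the key
		data[3] = (byte) publicKey.length;
		System.arraycopy(publicKey, 0, data, 4, publicKey.length);
		return data;
	}

	public static void main(String[] args) {
		byte[] caOid        = new byte[0]; // placeholder: CA-OID as defined in TR-03110
		byte[] ephemeralKey = new byte[0]; // placeholder: terminal's ephemeral public key from TAv2

		// Step 1: key agreement
		CommandAPDU mseSetAt = new CommandAPDU(0x00, 0x22, 0x41, 0xA4, mseSetAtData(caOid, (byte) 0x01));
		CommandAPDU generalAuthenticate =
				new CommandAPDU(0x00, 0x86, 0x00, 0x00, dynamicAuthenticationData(ephemeralKey), 256);

		System.out.println("MSE:SET AT:           " + mseSetAt);
		System.out.println("GENERAL AUTHENTICATE: " + generalAuthenticate);

		// Step 2 (PSA) uses the same command pair, but with the PSA-OID and the
		// data objects defined for the Pseudonymous Signature in TR-03110 / ISO/IEC 19286.
	}
}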

 

Maintenance release of BSI TR-03105 Part 5.1

The German BSI has published a maintenance release of technical guideline TR-03105 Part 5.1 Version 1.41 for inspection systems with Extended Access Control (EAC).

Since the last release of TR-03105, several (mostly editorial) comments have been resolved and integrated into this maintenance release. Part 5.1 describes conformity tests for inspection systems with protocols like PACE, Terminal Authentication and Chip Authentication, typically used at (automated) border control, e.g. at eGates.

Maintenance of TR-03105 for inspection systems, http://www.iconarchive.com/artist/oxygen-icons.org.html

Besides some editorial changes the new version 1.41 contains the following modifications:

  • ISO7816_G_36: If an EF.CardAccess contains an invalid OID for PACE-CAM, the inspection system shall use an alternative mapping protocol that is supported by the chip.
  • ISO7816_G_37: This test case is deleted because it is not necessary for an inspection system to check that GM and IM are also supported by the chip besides PACE-CAM.
  • ISO7816_G_41: The domain parameters with parameterID 0, 1 and 2 are based on DH, and DH is not supported in the context of PACE-CAM, so these parameters are deleted.
  • LDS_H_86: Correction in expected result (PASS instead of FAIL).
  • Chapter 7: Relevant algorithms and OIDs for PACE, that must be supported by the inspection system, are added.
  • Chapter 7: Update of hashing algorithms.

For the next major update, there should be a discussion on how to handle fingerprints (data group 3, EF.DG3) and iris images (data group 4, EF.DG4) of people who do not have a finger or an iris. In this case these data groups should store an empty but valid data structure. Currently, no test cases are specified for these situations in TR-03105 Part 5.1. But inspection systems should of course be able to handle such cases as well.

So you can see that test specifications in the context of eMRTD (ePassports) and inspection systems are always work in progress. If you have any comments concerning these test specifications, or ideas for test cases that should also be performed with a focus on interoperability, please don’t hesitate to contact me or leave a comment.

Interoperability Test during SDW in May 2016

Puzzle of InteropTest

Another interoperability test in the context of ePassports (eMRTD) and inspection systems will be performed during Security Document World 2016 in London. The test will focus on Supplemental Access Control (SAC), i.e. PACEv2, a security protocol to protect personal data stored in electronic ID documents.

An interoperability test is similar to a plugtest performed e.g. by ETSI. It is an event during which devices (ePassports, inspection systems and test tools) are tested for interoperability with emerging standards by physically connecting them. This procedure is called crossover testing and allows all vendors to test their devices against other devices. The effort to perform this kind of test increases sharply with every additional ePassport and inspection system. Therefore, these kinds of tests can be performed only with a small number of devices under test.

Crossover Testing

Additionally, besides these crossover tests, there is the opportunity to test the devices against conformity test suites implemented in test tools like the open source tool GlobalTester. This procedure reduces effort and allows comprehensive failure analysis of devices like ePassports or inspection systems. To assure interoperability, it is state of the art to set up test specifications. These specifications are implemented by the test labs in the test tools they use.

Conformity Testing

There are well-established test specifications available, both for ePassports and for inspection systems. Previous interoperability tests took place in Madrid (2014) and London (2013). Both events also focused on SAC/PACE.

If you are interested as a document provider, as a vendor of an inspection system, as a test lab or as an observer, you can register here.

Looking forward to seeing you in London during the InteropTest!

BTW: The EU Article 6 group is preparing a document describing how to conduct an interoperability test and how to prepare such an event.