Category Archives: protocol

First results of eMRTD Interoperability Test 2017 in Ispra

The European Commission (EC) organised another eMRTD interoperability test. This time the event took place in Ispra at the Joint Research Centre (JRC) of the European Commission. The objective of this interoperability test was to assure that countries and companies have established a stable PACE protocol in their eMRTDs, i.e. ePassports and ID cards.

Test setup of eMRTD interoperability test

Every participant had the chance to submit up to two different sets of documents with different implementations. Altogether 42 different samples were available at the beginning of the event. Two samples didn't pass the smoke test and one sample was not suitable to be tested by the labs. All remaining samples were tested in two different test procedures: a crossover test and a conformity test. Twelve document verification system providers with 16 different solutions took part in the crossover test, and 23 document providers entered 42 sets of documents (28 from countries, 14 from industry).

This blog post focuses on the conformity test, because protocols are the main subject of this kind of test. There were three test labs (Keolabs, UL and secunet) taking part in the interoperability test with their test tools to perform a subset of “ICAO TR RF Protocol and Application Test Standard for e-Passports, Part 3” (Version 2.10). The subset includes the following test suites and test cases:

  • ISO7816_O: Security conditions for PACE protected eMRTDs
  • ISO7816_P: Password Authenticated Connection Establishment (PACEv2)
  • ISO7816_Q: Command READ and SELECT for file EF.CardAccess
  • LDS_E: Matching between EF.DG14 and EF.CardAccess
  • LDS_I: Structure of EF.CardAccess
  • LDS_K: Structure of EF.CardSecurity
  • LDS_D_06: Test case to perform Passive Authentication

Information concerning documents

The document providers describe the features of their chips in an implementation conformance statement (ICS). Not all ICS were filled in consistently, so the following information concerning the documents should be read with care. Concerning the LDS version, 16 providers reported version 1.7 to be used in their documents and three reported version 1.8, while all others did not provide any information concerning the version number.

The following diagram describes the relation between DH and ECDH in PACE:

PACE algorithms

The following diagram describes the relation of the mapping protocols in PACE:

Mapping protocols in PACE

In addition to the MRZ, 16 documents also supported a CAN as a password to get access to the stored data.

Again, the number of PACEInfos stored in EF.CardAccess varied:

  • 28 documents stored one PACEInfo,
  • Eight documents stored two PACEInfos,
  • One document each stored three, seven and ten PACEInfos.

Investigations concerning EF.ATR/INFO in documents

The file EF.ATR/INFO allows storing some information about the chip that helps the reader to handle the chip optimally. In this way the chip can announce the ideal buffer size to be used with extended length during reading and writing. In the context of the event I had a closer look at EF.ATR/INFO. 26 of the 42 documents stored an EF.ATR/INFO, but 5 of them did not offer any information concerning extended length and buffer sizes there. So in the end I investigated the reading buffer sizes of 21 documents of the eMRTD interoperability test with the following result:

  • Seven documents support buffer sizes between 1 and 1.5 KByte,
  • Three documents support buffer sizes between 1.5 and 2 KByte,
  • Eleven documents support buffer sizes of ~64 KByte.

Buffer sizes in EF.ATR/INFO (reading)

With these large buffer sizes, data groups like DG2 storing the facial image or DG3 storing the fingerprints can be read with only one command. This allows the inspection system to read the content of the chip faster and improves the reading time at eGates.
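
If you want to check EF.ATR/INFO of a document yourself, a few lines of Java are enough. The following minimal sketch uses javax.smartcardio and assumes a PC/SC reader, that the document is on the first listed terminal, and that EF.ATR/INFO (file ID 2F01) can be selected and read without secure messaging on the sample at hand; it simply dumps the first bytes of the file:

import java.util.List;

import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

public class ReadEfAtrInfo {

	public static void main(String[] args) throws Exception {
		List<CardTerminal> terminals = TerminalFactory.getDefault().terminals().list();
		Card card = terminals.get(0).connect("*");
		CardChannel channel = card.getBasicChannel();

		// SELECT EF.ATR/INFO (file ID 2F01) below the master file
		ResponseAPDU select = channel.transmit(
				new CommandAPDU(0x00, 0xA4, 0x02, 0x0C, new byte[] { 0x2F, 0x01 }));
		System.out.printf("SELECT SW: %04X%n", select.getSW());

		// READ BINARY: read the first 32 bytes of the file
		ResponseAPDU read = channel.transmit(new CommandAPDU(0x00, 0xB0, 0x00, 0x00, 32));
		StringBuilder hex = new StringBuilder();
		for (byte b : read.getData()) {
			hex.append(String.format("%02X ", b));
		}
		System.out.printf("Data: %s(SW: %04X)%n", hex, read.getSW());

		card.disconnect(false);
	}
}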

Results of conformity testing

During the conformity test, all three test labs together performed 18,135 test cases. Less than 1 percent of these test cases failed.

Overall results (layer 6 and layer 7):

  • Passed: 11,563
  • Failed: 155 (0.86%)
  • Not applicable: 6,417

Layer 6 (16,614 test cases performed):

  • Passed: 10,885
  • Failed: 124 (0.74%)

Layer 7 (1,521 test cases performed):

  • Passed: 679
  • Failed: 32 (2.10%)

The following diagram shows failed test cases per document during the eMRTD interoperability test:

Number of failures per document

The diagram below shows the number of failures per test case during the eMRTD interoperability test:

Failures per test case

Observations during conformity testing

  • There are minor differences between implementation conformance statement (ICS) and chip.
  • Test results differ between test labs in some test cases.
  • There are differences in error handling between the test tools and labs (e.g. a missing CAN causes a failure at one lab and a ‘not applicable’ at the other).
  • Relatively more failures on layer 7 (personalisation) than on layer 6 (COS).
  • Very good quality of chip and personalisation.
  • Improvements compared with the last interoperability tests in London 2016 and Madrid 2014.
  • Stable specifications (BSI TR-03110, ICAO Doc 9303) and test specifications (BSI TR-03105, ICAO TR Part 3).


Security risk Smart Home – The Lives of Others

During IT Flash Paderborn #3 I gave a short presentation and demo concerning the security risks of smart homes. With the described passive attack you can profile all residents of a smart home, and with the described active attack you can manipulate the smart home and the devices used there.

From the movie ‘The Lives of Others’

In one of my previous blog posts I described how to run a passive attack on a smart home in the context of the EnOcean protocol. With the collected information you can set up a profile of all people living in this home.

For the passive attack I used a new tool that I have owned for a few weeks now: HackRF One. It's a typical piece of hardware that you can use in the context of Software Defined Radio (SDR). You can see this helpful tool in the following picture (Source: Great Scott Gadgets):

HackRF One

HackRF One started as a Kickstarter project a few years ago and is used by a large community in the area of reverse engineering protocols. In my demonstration I used HackRF One on the one hand to find the exact frequency used by my EnOcean devices, and on the other hand to capture and replay the EnOcean telegrams.

In a first step you need the exact frequency that is used by your EnOcean devices. To find this frequency I used the tool gqrx. Gqrx is an open source software defined radio receiver powered by the GNU Radio project. The tool allows visualizing the frequencies used in your environment. For example: EnOcean works at 868 MHz, but gqrx helps you to find the exact frequency of this protocol, in my case 868.290 MHz. The following screenshot shows the way gqrx works (Source: gqrx website):

Screenshot of gqrx

As soon as you have found the exact frequency, you can use the software distributed with HackRF One to capture and replay the messages in your smart home. In the demonstration I used a push button and a light actuator adapter to visualize the attack.

In the case of EnOcean there are mechanisms available to protect against these attacks. One of these mechanisms is called ‘Rolling Code’, where telegrams are encrypted, which makes the capture and replay attack described above useless. The following command captures the traffic and stores it in a file:

hackrf_transfer -r switch.raw -f 868290000

Once the traffic is stored in a file, you can send this information again (capture and replay) with your HackRF One using the following command:

hackrf_transfer -t switch.raw -f 868290000 -x 47

You can find some more information in the slides of my presentation. As you can see, smart home devices should be used carefully if you want to protect your privacy. Today it's very easy to eavesdrop on and manipulate a smart home. So always keep these security risks in mind when you plan your smart home.


Call for Participation: Interoperability Test 2017 in Ispra

The European Commission will organize conformity and interoperability tests for eMRTDs together with a conference on 25th and 26th September 2017. It will be held at the European Commission Joint Research Centre (JRC) premises in Ispra, Italy. The tests will focus on the latest access control specifications (e.g. the operation of the PACE protocol with Chip Authentication Mapping). This security mechanism, known as “Password Authenticated Connection Establishment with Chip Authentication Mapping” (PACE-CAM), is specified in Technical Report BSI TR-03110 “Advanced Security Mechanisms for Machine Readable Travel Documents and eIDAS token” and combines PACE and Chip Authentication (CA) into one protocol, leading to faster ID document verification.

Logo JRC

The conference will take place on the second day (26/09/2017) and will include speakers from the EU Commission, ICAO (requested) and Member States (requested). At its end, the high-level aggregate results of the tests will be presented.

The main beneficiaries of these tests are EU Member States. Depending on the number of EU Member States that will participate in the event, and provided that it is possible from an organisational perspective, a limited number of non-EU ICAO Member States and private sector travel document manufacturers will be allowed to participate in the test (on a first come, first served basis).

The test will focus on the implementation of PACE as specified in the Technical Report “Radio Frequency Protocol and Application Test Standard for eMRTD Part 3 Tests for Application Protocol and Logical Data Structure“, Version: 2.10, July 2016.

You can find the Call for Participation for the interoperability test here with more information concerning preregistration etc. See you in Ispra!


Apple adds NFC support to iOS 11

During Apple’s Worldwide Developers Conference (WWDC) this week the company shared the latest information concerning support of the Near Field Communication (NFC) protocol in iOS 11. Developers coding for iOS 11 will be able to create apps that can read NFC tags. This opens the door for wireless exchange of information between an iPhone and various connected devices in a user’s environment.

Apple and NFC (Wikipedia)

Currently the NFC chip in the iPhone is only used to handle contactless Apple Pay transactions. But with the new framework called Core NFC the company provides the foundation for multiple use cases by third-party apps. Using Core NFC, you can read tags of types 1 through 5 that contain data in the NFC Data Exchange Format (NDEF). At present the API supports only read access to the tags; hopefully the possibility to write to tags will be integrated in the future. With the new framework Apple could let third-party developers make use of NFC in new ways, or it could simply expand NFC functions beyond Apple Pay for use in its own apps and services. The documentation says: “For example, your app might give users information about products they find in a store or exhibits they visit in a museum”.

There are also first code samples available, e.g. iOS11-NFC-Example implemented by Hans Knöchel. In his GitHub repository Hans describes a quick example showing how to use the API in iOS 11 and Swift 4. The new framework requires at least Xcode 9, iOS 11 and an iPhone 7 or iPhone 7 Plus.

However, the possibilities for NFC outside of the banking area look set to expand with Apple’s next-generation mobile operating system. So I’m looking forward to blogging about a future version of iOS that also allows the complex contactless protocols we know from the eID context, like Chip Authentication Mapping (PACE-CAM). This would enable the iPhone to read ID cards or ePassports using ISO 14443 for contactless communication. Nevertheless, this first step in iOS opens a world of possibilities for new apps on iPhones.

 


Happy New Year and summary of 2016

I wish you a happy and prosperous new year 2017. Thanks for visiting my blog in 2016 and thanks for your feedback in various forms like comments and discussions. In 2016 I published several articles on this blog, and I plan to continue writing about my activities in the area of protocols and testing in 2017 as well. Before I start with new blog posts in the next weeks, here is a short summary of 2016.

Most read articles in 2016

  1. Home Page
  2. Java Sample Code to access Smart Card
  3. First results of eMRTD Interoperability Test 2016
  4. Chip Authentication Mapping
  5. ICAO LDS 1.8 or How to detect a file on an ePassport
  6. Sending EnOcean telegram
  7. Chip Authentication Version 3 (CAv3)
  8. Update of BSI TR-03105 Part 5.1 available (V1.4)
  9. eMRTD Test Specifications Overview
  10. Eclipse IoT overview

First time visiting CeBIT as a blogger in 2016

In 2016, for the first time, I visited CeBIT in Hanover as an accredited blogger. There were several companies supporting bloggers, and CeBIT itself established rooms and areas in the exhibition to work and refresh. ePassports were not a focus of the exhibition, but several companies and organisations demonstrated their ideas concerning IoT and the protocols used there.

Blogger Press Card CeBIT 2016

protocolbench is now a registered trade mark

Last year I decided to protect the name of this blog, so I've registered the word mark ‘protocolbench’ at the German Patent and Trade Mark Office (DPMA). Under register number 302015219473 you can find more information about the trade mark.

Certificate of word mark ‘protocolbench’

Support this blog

In the middle of December I decided to use Flattr and Patreon on my blog. These services allow visitors and readers to support my blog in an easy way. If you like this blog, please support my work by donating via Flattr.

New job at secunet

And last but not least, I'm working for a new company. Since September 2016 I have been working as a Principal at secunet Security Networks AG in the division ‘Homeland Security’. My area of responsibility is similar to the previous one: testing, eID, standardisation and of course GlobalTester. With the new facilities in Paderborn, secunet has established its ninth location in Germany.

Conclusion and summary

So, also in 2017 I will publish new blog posts here in the context of protocols and testing. One of the next articles, for example, will describe the digitalisation of ePassports. If you have ideas for subjects you are interested in, or subjects you work on and would like to get more visibility for, just contact me.

 


How to assure interoperability?

Motivation

Typically, protocols connect two different systems. In an open system with several stakeholders, interoperability between these systems is a fundamental requirement. There are various ways to assure this interoperability. In this blog post I present two popular approaches, in cooperation with my colleague Dr. Guido Frank, who works at the German Federal Office for Information Security (BSI).

Interoperability

Interoperability is a characteristic of a product or system, whose interfaces are completely understood, to work with other products or systems, present or future, in either implementation or access, without any restrictions (Source: Wikipedia).

puzzle - interoperability test

From a system perspective, this means that all implementations need to comply with the same technical specifications. Interoperability is essential because these systems are open systems with different stakeholders. It refers to the ability of cross-system protocols to work together.

Crossover Testing vs. Conformity Testing

To ensure interoperability, implementations need to be tested. In general, there are different approaches to test systems or implementations.

Crossover Testing

The scope of a crossover test is to test every system component against all other system components. This procedure makes it possible to detect incompatibilities between existing implementations with a fixed release status.

The effort to perform this kind of test increases disproportionately with every additional instance of the system. Therefore these tests can practically be performed only with a small number of involved test partners. The following figure illustrates the interaction between different systems in a crossover test.

Crossover Testing

Another problem of crossover testing is maintenance: every new implementation or new version of an existing implementation must be tested against every corresponding system, which again increases the testing effort significantly. A benefit of crossover testing can be found in the early phase of development, when it helps developers to implement their own system and can be used as an indicator that they are on the right track. On the other hand, this kind of testing only covers the positive case with two correct systems under test (“smoke test”). The behaviour of the systems in bad cases is not tested. Additionally, with two different systems being tested against each other in a crossover test, it is difficult to decide which system behaves correctly in case of a failure and which implementation has to be changed.

Conformity Testing

The purpose of conformity tests is to verify that a system implements the specifications correctly (i.e. it “conforms” to the specifications). These specifications need to be defined by the stakeholders and finally implemented, e.g. by test labs to run their conformity test tools.

In this way the test suites verify the implementation under test with protocol data units which mimic both “correct” and “incorrect” behaviour of the system. The figure below illustrates the interactions between the test suite implementation and the system in a conformity test. The test definitions have to be tested with regard to a harmonised interpretation among the test participants.

Conformity Testing

Conclusion

Both testing approaches allow assuring interoperability with other system components to a certain extent. But with the increasing complexity of the systems to be tested and the increasing overall number of systems, crossover testing becomes more and more extensive. Only conformity testing scales adequately with the complexity and the number of systems. The following diagram illustrates the increasing effort of crossover tests in relation to conformity tests.

Compared efforts of crossover testing and conformity testing

Direct tests between a subset of implementations are useful during the implementation or integration phase of a node. Such tests could also be performed via bilateral agreement between different stakeholders, e.g. via pilots. Experience from such tests could also be used as additional input for conformity tests. Crossover testing or a central coordination of such tests would not be necessary for this purpose. As soon as there are several system components to be tested, conformity testing should be chosen.

The benefits of such an approach to interoperability testing can also be seen in the so-called ‘InterOp tests‘ that have been performed for more than a decade in the context of eMRTD. Detailed failure analysis allows improving the stability of the whole eMRTD system. Additionally, the results of the ‘InterOp tests’ help not only to improve the stability of ePassports and inspection systems but also to improve the quality of (test) specifications and test tools.

Another open system with several vendors is banking. All cash cards and credit cards must fulfil international test specifications. This kind of interoperability testing allows customers to use their cards worldwide at cash machines of various banks.

To assure systematic interoperability, it is necessary to set up conformity test specifications that systematically test the requirements as defined in the technical specifications. The tests should not only define good cases but also bad cases that mirror the pitfalls typically occurring in a system. Only in this way can real system-wide interoperability be assured.

Setup of test specification

Important components of a conformance test specification are:

  • Description of general test requirements
  • Test setup / Testing environment
  • Definition of suitable test profiles / implementation profiles
  • Implementation Conformance Statement (ICS)
  • Definition of testing or configuration data
  • Definition of test cases according to a unified data structure
  • Each test case should concentrate on a single feature to be tested

The following structure of a test specification has been established since the beginning of eMRTD testing in 2005. It is based on the ISO/OSI layer model where data is tested on layer 7 and protocols are tested on layer 6.

Typical structure of a test case in this context:

  • Test case ID: unique identifier for each test case
  • Purpose: objective of the test case
  • Version: current version of this test case independent from the test specification
  • Reference: where is this feature / behaviour specified
  • Preconditions: setup of test case
  • Test scenario: description of test case, step by step
  • Postconditions: setdown of test case

Test cases can be combined into test suites to cluster test cases of similar topics or objectives. As the test specifications need to be implemented in suitable testing tools, it is useful to define the test cases in a way that eases their implementation, e.g. via XML using a suitable XML schema.

 


eMRTD Test Specification Overview

Currently I’m preparing a project where an ePassport has to be tested. These tests start with the booklet and end with the chip. During the preparation the need for a test specification overview popped up. This need was the root of a new service here on this blog: an overview of all current specifications in the domains of this blog starting with eMRTDs and their corresponding inspection systems.
Keep calm and continue testing

To list all current specifications I've added a new page called ‘test specifications‘ to the menu above. I will keep this list up to date in the future, i.e. with every new version of a test specification I will update this list. Currently the list contains test specifications released by ICAO and BSI. Both organisations are at the forefront of implementing tests in the context of eMRTD and the corresponding back-end systems. The certification schemes of BSI and ANSSI are also based on these test specifications.

Test specifications are “living documents”, which means they undergo several modifications over time. You need the test specifications listed here to prove conformity and finally certify your passport or inspection system.

With every new protocol you need some more or some modified test cases in the specifications. Maintenance is also an important factor to keep the test cases up to date. Additionally, I will also list test specifications of other domains like IoT in the near future.

So have a look at this page next time when you’re back on this blog.


Update of RF and Protocol Testing Part 4 V2.10 online

Introduction

Simultaneously with Part 3, ICAO also released version 2.10 of the test specification ‘RF and Protocol Testing Part 4‘ to test the interoperability of inspection systems (IS) in the context of eMRTD. Now that the Technical Advisory Group (TAG) of ICAO has endorsed the update on the ICAO website, the test specification can be referenced officially. Version 2.10 of the test specification contains some significant modifications compared with the previous version 1.01 released in 2013:

  • Support of protocol PACE-CAM:
    • New test suite ISO7816_G to test Chip Authentication,
    • New default configuration including Chip Authentication,
    • Updated implementation conformance statement (ICS) to specify IS supporting PACE-CAM,
    • Updated list of abbreviations,
  • Tests for LDS 1.8,
  • Updated references concerning Doc9303 7th edition,
  • Added Advanced Inspection Procedure (AIP),
  • Additionally, there are some clarifications and minor editorial changes.

Furthermore, below you can find a more detailed list of the changes and modifications in version 2.10 of the test specification for inspection system interoperability.

New test cases in Version 2.10 Update

Basically, the new test cases test the PACE-CAM protocol or make use of the new LDS 1.8 data structure, where the LDS version number is stored in EF.SOD (in addition to EF.COM).

  • ISO7816_C_29: PACE-CAM with missing tag 8Ah but correct ECAD
  • ISO7816_C_30: PACE-CAM with incorrectly encoded ECAD (no octet string)
  • ISO7816_C_31: PACE-CAM with wrong ECAD
  • ISO7816_C_32: PACE-CAM with wrong tag 8Ah (use 8Bh) but correct ECAD
  • ISO7816_C_33: PACE-CAM with correct tag 8Ah but missing ECAD
  • ISO7816_C_34: PACE-CAM with Passive Authentication
  • ISO7816_C_35: Return additional tag 8Ah during PACE-GM
  • ISO7816_C_36: Use DG14 without SecurityInfo during PACE-CAM
  • ISO7816_C_37: Use EF.CardSecurity with wrong ChipAuthenticationPublicKey during PACE-CAM
  • ISO7816_C_38: Use EF.CardSecurity without ChipAuthenticationPublicKeyInfo during PACE-CAM
  • ISO7816_C_39: Check supported standardized Domain Parameters with Chip Authentication Mapping
  • ISO7816_G_01: Chip Authentication with DH
  • ISO7816_G_02: Chip Authentication with ECDH
  • ISO7816_G_03: DG14 with one key reference
  • ISO7816_G_04: DG14 with two key references
  • ISO7816_G_05: DG14 with three key references
  • ISO7816_G_06: DG14 with invalid key reference
  • ISO7816_G_07: DG14 with corrupted DH public key
  • ISO7816_G_08: DG14 with corrupted ECDH public key
  • ISO7816_G_09: Use old session keys after Chip Authentication
  • ISO7816_G_10: Verify lifetime of ephemeral keys
  • ISO7816_G_11: DG14 with invalid DH public key specification
  • ISO7816_G_12: DG14 with invalid ECDH public key specification
  • ISO7816_G_13: ChipAuthenticationPublicKeyInfo: key reference does not match key reference in ChipAuthenticationInfo
  • ISO7816_G_14: Chip Authentication with Extended Length
  • ISO7816_G_15: Use various status words for invalid key reference
  • LDS_A_10: EF.COM with LDS version 1.8
  • LDS_D_35: EF.SOD with LDS Version 1.8
  • LDS_D_36: Security Object with LDS Version 1.8 but wrong Version number
  • LDS_D_37: Security Object with LDS Version 1.7 but Version number 1
  • LDS_D_38: EF.SOD with future LDS Version 1.9

Modified test cases in Version 2.10 Update

Due to the new document structure of version 2.10 it is difficult to detect all modifications. Therefore please be aware that the list of modified test cases may not be complete and there might be more changes compared to the previous version 1.01.

  • ISO7816_C_04: Added new OID for PACE-CAM in table corresponding to test case
  • ISO7816_D_07: Test case deleted

With the release of this test specification, version 2.10 becomes relevant for certification. So from now on, your inspection system must pass these conformity tests to achieve a certificate.


Update of RF and Protocol Testing Part 3 V2.10 online

Introduction

There is an update of ICAO’s test specification ‘RF and Protocol Testing Part 3‘ available. The specification focuses on conformity testing and protocol testing for eMRTDs implementing protocols like BAC and PACE.

The Technical Advisory Group (TAG) of ICAO endorsed the updated release on the ICAO website, so from now on the test specification can be referenced officially. Version 2.10 of the test specification contains some major modifications:

  • Additional test cases for PACE-CAM (this includes modifications of existing test cases and also new test cases especially for PACE-CAM).
    • New test suite 7816_S to verify access rights (read and select) of EF.CardSecurity.
    • New test suite LDS_K to test presence and coding of SecurityInfo structures in EF.CardSecurity
  • The referenced documents are updated to Doc 9303 Edition 7 and old specifications including supplements are replaced.
  • With the 7th edition of Doc 9303, the wording changed from ‘PACEv2’ and ‘SAC’ to ‘PACE’.
  • And of course there are some minor editorial corrections.

The interim version 2.08 of this test specification was used during the interoperability test in London 2016 (first results of this event can be found in a previous post). This version was prepared at the meeting of ISO WG3 TF4R in Berlin to establish a valid version for the test event. Version 2.10 includes all these updates and some minor changes. In the following, the updates of version 2.10 are listed in more detail.

New test cases in layer 6

  • ISO7816_O_55: Accessing the EF.CardSecurity file with explicit file selection.
  • ISO7816_O_56: Accessing the EF.CardSecurity file with implicit file selection (ReadBinary with SFI).
  • ISO7816_O_57: Accessing the EF.CardSecurity file with ReadBinary. The test verifies the enforcement of SM after the PACE-CAM protocol has been performed successfully.
  • ISO7816_O_58: Accessing the EF.CardSecurity file with ReadBinary. The test verifies the enforcement of SM after a PACE protocol different from PACE-CAM has been performed successfully.
  • ISO7816_P_78: Positive test with a complete sequence of PACE without Chip Authentication Mapping commands and with MRZ password. The tag 0x8A during PACE-GM and PACE-IM MUST NOT be returned.
  • ISO7816_S_01: Accessing EF.CardSecurity with explicit file selection and Read Binary.
  • ISO7816_S_02: Accessing EF.CardSecurity with implicit file selection (ReadBinary with SFI).
  • ISO7816_S_03: Accessing EF.CardSecurity with explicit file selection and Read Binary OddIns.
  • ISO7816_S_04: Accessing EF.CardSecurity with implicit file selection (ReadBinary OddIns with SFI).

Modified test cases in layer 6

  • ISO7816_P_01: New step 6 and step 7 added for PACE-CAM, Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_02: New step 6 and step 7 added for PACE-CAM, Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_03: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_14: Step 6 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_25: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_26: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_27: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_28: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_41: Adopted profile to handle PACE-CAM.
  • ISO7816_P_42: Adopted profile to handle PACE-CAM.
  • ISO7816_P_43: Adopted profile, step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_44: Adopted profile, Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_45: Adopted profile, step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_46: Adopted profile, step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_49: Adopted profile to handle PACE-CAM.
  • ISO7816_P_50: Adopted profile to handle PACE-CAM.
  • ISO7816_P_68: Adopted purpose.
  • ISO7816_P_73: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_P_74: Step 5 return new data object 0x8A used in PACE-CAM.
  • ISO7816_R_05: Correction in referenced RFC.
  • ISO7816_R_06: Correction in referenced RFC.

New test cases in layer 7

  • LDS_E_09: Test that EF.DG14 contains at least one valid set of SecurityInfos for Chip Authentication. A chip supporting PACE-CAM must also support CA.
  • LDS_I_05: Verify that EF.CardAccess contains at least one valid PACEInfo for PACE-GM or PACE-IM as an additional mapping procedure if PACE-CAM is supported.
  • LDS_K_01: Test the ASN.1 encoding of the SecurityInfos.
  • LDS_K_02: Verify the ASN.1 encoding of the ChipAuthenticationPublicKey.
  • LDS_K_03: Test the coherency between the EF.CardSecurity and EF.CardAccess.
  • LDS_K_04: Verify that the parameterID also denotes the ID of the Chip Authentication key used, i.e. the chip MUST provide a ChipAuthenticationPublicKeyInfo with keyID equal to parameterID.

Modified test cases in layer 7

  • LDS_I_02: Added OIDs for PACE-CAM and new step 3 (to check that a valid OID is present for each declared configuration).
  • LDS_I_03: Added OID for PACE-CAM.
  • LDS_I_04: Test case deleted.
  • LDS_J_04: Correction in referenced RFC.

Previous ideas to migrate this test specification to an ISO document have been cancelled for political reasons. Part 3 (eMRTD) and Part 4 (inspection systems) will remain ICAO documents, whereas Part 1 (durability of ePassports) and Part 2 (contactless interface) are still being migrated to ISO documents (ISO 18745-1 and ISO 18745-2).


Eclipse IoT overview

Introduction

A few days before the Eclipse Neon release, the Eclipse Foundation released several projects in the context of IoT (Internet of Things). The Eclipse IoT working group is engaged in projects like SmartHome, Kura, Paho and OM2M.

The Internet of Things is all about connecting devices (sensors and actuators) to the internet. You can find these devices in automobiles, wearables, industrial factories, homes, etc. A key challenge is the complexity of implementing an IoT solution, where you need to deal with various hardware platforms, manage the IoT gateways that connect the devices to the internet, manage connectivity and interoperability, and integrate the data into existing systems and databases.

An important way to reduce this complexity is to create reusable libraries and frameworks that abstract from the details and implement the key features. This is exactly the approach of Eclipse IoT: delivering several technologies combined in an open source stack with all the key features and standards you need to develop your own IoT solution. Furthermore, the Eclipse Foundation has set up a community with more than 200 contributors to assure the enhancement of the IoT stack.

The current release includes Eclipse SmartHome Version 0.8 and Eclipse Paho Version 1.2. The projects Eclipse Kura and Eclipse OM2M will be available later this month. Additionally, the foundation started a new project proposal called Eclipse Kapua. The goal is to create a modular integration platform for IoT devices and smart sensors, bridging Operation Technology and Information Technology.

Eclipse IoT

The Eclipse IoT ecosystem contains standards, gateways and of course frameworks. The following paragraphs describe these modules. In addition, you can find a reference to an Eclipse project that is relevant in each domain. Please keep in mind that this list is not complete; there are currently 24 different projects available in the context of Eclipse IoT.

IoT stack

The following graphic describes the structure of the Eclipse IoT stack. The stack includes frameworks, open communication standards and a gateway to provide management services. Most modules are based on Java and OSGi. OSGi describes a modular system and service platform for Java that implements a complete and dynamic component model. The Eclipse IoT stack with its components addresses all key requirements in IoT: interoperability, security and connectivity.

Eclipse Open IoT Stack

Open communication standards

It’s an elementary feature in the context of IoT to provide implementations of the various protocols used in this domain. All devices must be connected, secured and managed. To this end, the Eclipse projects in the IoT ecosystem support the relevant protocols and standards:

  • MQTT: Eclipse Paho delivers MQTT client implementations (Java, C/C++, JavaScript). The corresponding MQTT broker is implemented in Eclipse Mosquitto (implementation in C). MQTT (Message Queuing Telemetry Transport) is a lightweight publish/subscribe messaging protocol specified in ISO/IEC 20922. Moreover, the new version 1.2 of Paho includes, for example, WebSocket support for the Java and Python clients (a small Paho publish example follows after this list).
  • CoAP: Eclipse Californium delivers the CoAP standard in Java, including DTLS support.
  • Lightweight M2M: The server side of LwM2M is delivered by Eclipse Wakaama (C/C++) and the client side by Eclipse Leshan (Java). Wakaama provides an API for server applications to send commands to registered LwM2M clients. Leshan relies on the Eclipse IoT Californium project for the CoAP and DTLS implementation.
  • DNSSEC: Eclipse Tiaki provides a DNSSEC implementation in Java. Domain Name System Security Extensions (DNSSEC) is specified by IETF for securing certain kinds of information provided by the Domain Name System (DNS).
  • DTLS: Eclipse TinyDTLS implements the Data Transport Layer Security (DTLS) standard in C. The implementation covers both the client and the server state machine. DTLS in general is a communication protocol that provides security also for datagram protocols like UDP.
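
To give an impression of how the Paho Java client is used, here is a minimal publish sketch. Broker address, client ID and topic are placeholders chosen for this example; it assumes a broker (e.g. Mosquitto) listening on localhost:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persistence.MemoryPersistence;

public class PahoSample {

	public static void main(String[] args) throws Exception {
		// connect to a local MQTT broker (placeholder URI and client ID)
		MqttClient client = new MqttClient("tcp://localhost:1883", "protocolbench-sample", new MemoryPersistence());
		client.connect();

		// publish a small payload with QoS 1 to a demo topic
		MqttMessage message = new MqttMessage("21.5".getBytes());
		message.setQos(1);
		client.publish("home/livingroom/temperature", message);

		client.disconnect();
	}
}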

Gateways

A gateway manages the connectivity between devices and provides a platform for the applications on top. Eclipse Kura offers a set of services that helps to manage the devices and applications deployed onto IoT gateways. As with other Eclipse projects, the gateway is based on Java and OSGi services. Kura manages the cloud connectivity, supports different protocols and configures the network.

Frameworks

Eclipse IoT provides a set of frameworks:

  • Eclipse SmartHome is a set of Java and OSGi services for building and running Smart Homes. It is designed to run on “simple” devices like the Raspberry Pi, BeagleBone Black or Intel Edison. Additionally, this framework supports the typical protocols used in Smart Homes in all their diversity, like EnOcean, KNX or ZigBee, and thus allows the devices and systems to communicate with each other. The new version 0.8 of Eclipse SmartHome now contains a REST interface that allows easier interaction with clients. Furthermore, new bindings are supported, e.g. for DigitalStrom.
  • Eclipse SCADA is a set of Java and OSGi services that implements nearly all of the services required to run a SCADA (supervisory control and data acquisition) system. As one type of an Industrial Control System (ICS) Eclipse SCADA delivers functions for data acquisition, monitoring, visualization, etc. Additionally, the framework supports typical industrial automation standards like ModBus, Siemens PLC, SNMP and OPC.
  • Eclipse OM2M is an implementation of ETSI’s M2M standard. This implementation provides a horizontal Service Capability Layer (SCL) to be deployed in an M2M network.

Summary

In conclusion, Eclipse IoT provides an open source stack including all relevant frameworks, protocols and standards to develop your own IoT application. The stack allows you to develop new devices but also to modernise existing ‘legacy’ devices.


First results of eMRTD Interoperability Test 2016

During the 10th Security Document World 2016 an additional interoperability test for eMRTDs with PACE took place in London. In the context of ePassports this was the 14th event since the series started in Canberra in 2003. This time there were two test labs involved, 17 document providers and twelve inspection system providers. Here I will focus on the conformity test, including test labs and document providers, and the InteropTest results. The event was organised by the colleagues of secunet. The following document providers delivered 27 samples in total:

Logo of SDW InteropTest

  • Arjo Systems
  • Atos IT Solutions and Services
  • Bundesdruckerei
  • Canadian Banknote Company
  • cryptovision
  • De La Rue
  • Gemalto
  • ID&Trust
  • Imprimerie Nationale Group
  • Iris Corporation Berhad
  • MaskTech
  • Morpho
  • NXP Semiconductors
  • Oberthur Technologies
  • PAV Card
  • Polygraph combine ‘Ukraina’
  • PWPW

And the following test laboratories performed a subset of tests focusing on PACE (and of course PACE-CAM):

  • Keolabs (France)
  • HJP Consulting / TÜViT (Germany)

The test cases performed during the event were based on ICAO’s test specification ‘RF Protocol and Application Test Standard for eMRTD – Part 3‘ Version 2.08 RC2, including some minor adaptations based on the last WG3TF4 meeting in Berlin. The final version 2.08 of this test specification will be released soon and the deltas will be listed in an additional blog post here. With a focus on PACE-CAM, the following test suites were performed by both test labs:

  • Test Unit 7816_O (Security Conditions for PACE-protected eMRTDs)
  • Test Unit 7816_P (Password Authenticated Connection Establishment)
  • Test Unit 7816_Q (Select and Read EF.CardAccess)
  • Test Unit 7816_S (Select and Read EF.CardSecurity)
  • Test Unit LDS_E (Data Group 14)
  • Test Unit LDS_I (EF.CardAccess)
  • Test Unit LDS_K (EF.CardSecurity)

Some statistics concerning the samples:

  • PACE-CAM was supported in the following types:
    • Generic Mapping (GM), Chip Authentication Mapping (CAM): 18 samples
    • Integrated Mapping (IM), CAM: 4 samples
    • GM, IM and CAM: 4 samples
  • LDS:
    • 25 samples used LDS1.7
    • 1 sample used LDS1.8
    • 1 sample used LDS2.0 (with backward compatibility to LDS1.7)

In the preliminary InteropTest results presented by Michael Schlüter during the SDW, he mentioned that 8,502 test cases were performed during conformity testing by the test labs and that 98% of the relevant test cases were passed by the samples. Additionally, the test results of both labs were fairly consistent. There was one test case that caused the most failures; this test case verifies the ChipAuthenticationPublicKey in EF.CardSecurity (LDS_K_2). Here we need some clarification in the specification Doc 9303 and finally in the test specification.

During the crossover test three problems were detected: First, the sequence of PACE, CA and TA was performed correctly, while the sequence of PACE-CAM and TA caused some problems during the inspection procedure of the readers. This might be due to the fact that PACE-CAM is specified in an ICAO document and TA in a BSI document. Some inspection systems also had problems with alternative File IDs for EF.CVCA. The alternative FID can be defined in TerminalAuthenticationInfo (see A.1.1.3 in TR-03110 Part 3) and must be used by the inspection systems to address and read EF.CVCA. But a bad surprise in the InteropTest results was that around 50% of the inspection systems did not perform Passive Authentication (PA) correctly. During the preparation of the InteropTest a wrong CSCA certificate was distributed, and 50% of the systems did not detect this wrong certificate. In other words: 50% of the inspection systems failed Passive Authentication! During the conference Dr. Uwe Seidel (Bundeskriminalamt, BKA) noted that this number mirrors the real world and that PA is in fact currently a real problem in border control.

The InteropTest results can be summed up in two statements:

  1. There is a very good quality of the eMRTD samples.
  2. Reader vendors still have some work to do, especially to implement Passive Authentication correctly.

A detailed test report of this event and the InteropTest results will be published by secunet in June 2016.

Update: The final test report can now be downloaded here (after a short registration at the SDW website).


Sending EnOcean telegram

EnOcean is an energy harvesting wireless technology used primarily in building automation systems and smart homes. Modules based on this technology combine micro energy converters with ultra low power electronics and enable wireless communication between battery-less wireless sensors, actuators and even gateways. The communication is based on so-called ‘EnOcean telegrams’. Since 2012 the EnOcean standard has been specified as the international standard ISO/IEC 14543-3-10.

The EnOcean Alliance is an association of several companies that develop and promote self-powered wireless monitoring and control systems for buildings by formalizing the interoperable wireless standard. On their website the alliance offers some of their technical specifications to everybody.

To send an EnOcean telegram you need a piece of hardware connected to your host, e.g. an EnOcean USB300 USB stick for your personal computer or an EnOcean Pi SoC gateway TRX 8051 for your Raspberry Pi. In this sample we use the USB300 to send a telegram using a small piece of software implemented in Java. The following photograph shows a USB300 stick:

EnOcean USB300 Stick

The EnOcean Radio Protocol (ERP) is optimised to transmit information using extremely little power, generated e.g. by piezo elements. The information sent between two devices is called an EnOcean telegram. Depending on the EnOcean telegram type and the function of the device, the payload is defined in the EnOcean Equipment Profiles (EEP). The technical properties of a device define three profile elements:

  1. The ERP radio telegram type: RORG (range: 00…FF, 8 Bit)
  2. Basic functionality of the data content: FUNC (range 00…3F, 6 Bit)
  3. Type of device in its individual characteristics: TYPE (range 00…7F, 7 Bit)

Since version 2.5 of the EEP, the various radio telegram types are grouped ORGanisationally:

Telegram      RORG   Description
RPS           F6     Repeated Switch Communication
1BS           D5     1 Byte Communication
4BS           A5     4 Byte Communication
VLD           D2     Variable Length Data
MSC           D1     Manufacturer Specific Communication
ADT           A6     Addressing Destination Telegram
SM_LRN_REQ    C6     Smart Ack Learn Request
SM_LRN_ANS    C7     Smart Ack Learn Answer
SM_REC        A7     Smart Ack Reclaim
SYS_EX        C5     Remote Management
SEC           30     Secure Telegram
SEC_ENCAPS    31     Secure Telegram with RORG encapsulation

In this context we use the type VLD (Variable Length Data) to have a closer look at EnOcean telegrams. VLD telegrams can carry a variable payload of data. The following graphic shows the structure of an EnOcean telegram (based on the EnOcean Serial Protocol 3, short: ESP3):

Structure of an EnOcean telegram

ESP3 is a point-to-point protocol with a packet data structure. Every packet (or frame) consists of a header, data and optional data. As you can see in the structure, the length of the complete telegram is encoded in the header with two bytes, which suggests a maximum telegram length of 65535 bytes. Unfortunately, the maximum length of such a telegram is reduced to 21 bytes (data) due to the limitations of the low power electronics. Reduced further by the overhead information in the data field, the resulting net payload finally has a size of 14 bytes. The following code snippet demonstrates how to send a telegram with the 14 byte payload ’00 11 22 33 44 55 66 77 88 99 AA BB CC DD’. First, let's have a look at the telegram:

Telegram: 55 00 14 07 01 65 D2 00 11 22 33 44 55 66 77 88 99 AA BB CC DD 00 00 00 00 00 01 FF FF FF FF 44 00 0B
Sync. byte: 55
Header: 00 14 07 01
CRC8 Header 65
Length data: 20 (0x14)
Length optional data: 7 (0x07)
Packet Type: 01
Data: D2 00 11 22 33 44 55 66 77 88 99 AA BB CC DD 00 00 00 00 00
RORG: D2
ID: 00 00 00 00
Status: 00
Data Payload: 00 11 22 33 44 55 66 77 88 99 AA BB CC DD
Optional data: 01 FF FF FF FF 44 00
SubTelNumber: 01
Destination ID: FF FF FF FF
Security: 00
Dbm: 68 (0x44)
CRC8 Data 0B
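
ESP3 protects the frame with two CRC8 checksums, calculated with the generator polynomial 0x07 and initial value 0x00: CRC8H over the four header bytes and CRC8D over the data and optional data fields. The following little helper (my own sketch, not part of any EnOcean library) reproduces the header checksum 0x65 of the telegram above:

public class Esp3Crc8 {

	// CRC8 as used by ESP3: polynomial 0x07 (x^8 + x^2 + x + 1), initial value 0x00
	static int crc8(byte[] data) {
		int crc = 0x00;
		for (byte b : data) {
			crc ^= (b & 0xFF);
			for (int i = 0; i < 8; i++) {
				crc = ((crc & 0x80) != 0) ? ((crc << 1) ^ 0x07) & 0xFF : (crc << 1) & 0xFF;
			}
		}
		return crc;
	}

	public static void main(String[] args) {
		// header of the sample telegram: length data (0x0014), length optional data (0x07), packet type (0x01)
		byte[] header = { 0x00, 0x14, 0x07, 0x01 };
		System.out.printf("CRC8 header: %02X%n", crc8(header)); // prints 65, matching the telegram above
	}
}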

The following Java code demonstrates one way to send this telegram via the USB300. The code snippet uses the RXTX library to access the serial port.

import java.io.OutputStream;

import gnu.io.CommPort;
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;

public class EnOceanSample {
	
	static SerialPort serialPort;
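	// serial port of the USB300 stick; adjust e.g. to /dev/ttyUSB0 on Linux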
	static String serialPortName = "COM3";

	public static void main(String[] args) {
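		// complete ESP3 frame from the example above: sync byte, header, CRC8H, data, optional data, CRC8D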
		
		byte[] sampleTelegram = new byte[] { (byte) 0x55, (byte) 0x00, (byte) 0x14, (byte) 0x07, (byte) 0x01, (byte) 0x65, 
				(byte) 0xD2, (byte) 0x00, (byte) 0x11, (byte) 0x22, (byte) 0x33, (byte) 0x44, (byte) 0x55, (byte) 0x66, (byte) 0x77, (byte) 0x88, (byte) 0x99, (byte) 0xAA, (byte) 0xBB, (byte) 0xCC, (byte) 0xDD, 
				(byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x00, (byte) 0x01, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0x44, (byte) 0x00, (byte) 0x0B};
		
		try {
			CommPortIdentifier portIdentifier = CommPortIdentifier
					.getPortIdentifier(serialPortName);
			if (portIdentifier.isCurrentlyOwned()) {
				System.err.println("Port is currently in use!");
			} else {
				CommPort commPort = portIdentifier.open("EnOceanSample", 3000);
	
				if (commPort instanceof SerialPort) {
					serialPort = (SerialPort) commPort;
	
					// settings for EnOcean:
					serialPort.setSerialPortParams(57600, SerialPort.DATABITS_8,
							SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
					
					System.out.println("Sending Telegram...");
					OutputStream outputStream = serialPort.getOutputStream();
					outputStream.write(sampleTelegram);
					outputStream.flush();
					outputStream.close();
					serialPort.close();
					System.out.println("Telegram sent");
					
				} else {
					System.err.println("Only serial ports are handled!");
				}
			}
		}
		catch (Exception ex) {
			// report problems with the serial port instead of swallowing them silently
			ex.printStackTrace();
		}
	}
}

In this way it's not possible to send telegrams with a huge payload. If the information to be sent is longer than the limit described above, you can use a mechanism called ‘chaining’. To chain telegrams, a special sequence of telegrams is necessary. All protocol steps for chaining are specified in EO3000I_API.

Attention: In Europe EnOcean products use the frequency 868.3 MHz. This frequency can be used by everybody for free, but the traffic is limited to a duty cycle of 1%, e.g. in Germany it's only allowed to send for 36 seconds within one hour.

In one of my last blog posts I explained how to receive EnOcean telegrams. Now, based on the information above, you can send your own EnOcean telegrams in the context of your Smart Home or your IoT environment.

