Testing ePassport Readers using TTCN-3

You can currently find the well-known test tool Titan under the patronage of the Eclipse Foundation (proposal). This tool was developed by Ericsson several years ago to test the Internet Protocol (IP). Titan is based on TTCN-3, a test language focusing on communication protocols. This reminds me of an old project with ETSI where we used TTCN-3 to test ePassport readers against BSI TR-03105 Part 5.1.

From the end of 2009 to mid-2011, ETSI conducted a project to develop a test system prototype for conformance testing of ePassport readers. The objective of this project was to design, build and test a test system prototype for ePassport reader conformance testing based on TTCN-3 (Testing and Test Control Notation, Version 3). This project was a joint effort between the European Joint Research Centre (JRC) in Ispra (Italy) and ETSI in Sophia Antipolis (France). The test language TTCN-3 had already been widely used in many testing projects across different protocols and technologies. However, until this project TTCN-3 had not been applied in the area of contactless smart card testing.

The ETSI Specialists Task Force (STF) 400, with experts from the companies and organisations AMB Consulting, ARH, Comprion, ETSI, FSCOM, HJP Consulting, Masaryk University, Soliatis and Testing Technologies, carried out this project. The work of this STF was split into three main phases:

  1. Design, implementation, and use of ePassport test system
  2. Development of ePassport testing framework
  3. Writing of the documentation and dissemination material

The scope of this project was to build a test system for testing an inspection system typically used to read ePassports. To demonstrate the basic functionality and feasibility, a subset of BSI TR-03105 Part 5.1 was specified and implemented in the test system.

The following image describes the architecture of the ePassport reader test system developed during this project:


Architecture of test system based on TTCN-3 for ePassport readers (Source: ETSI White Paper No.7)

The most significant part of the architecture is the “TTCN-3 Test Component”. This module simulates the ePassport behaviour by receiving APDUs, reacting to these commands and sending response APDUs back to the SUT (here, the ePassport reader).
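The actual test component is written in TTCN-3, but the receive-dispatch-respond behaviour it implements can be sketched in a few lines of Python. This is purely an illustration, not the project’s code: the dispatch logic is an assumption, while the status words and the eMRTD application identifier are standard values.

```python
# Illustrative sketch (not TTCN-3): a minimal component that mimics the
# "TTCN-3 Test Component": receive a command APDU from the reader under
# test, dispatch on the instruction byte and return a response APDU.

SW_OK = bytes([0x90, 0x00])                 # status word: success
SW_INS_NOT_SUPPORTED = bytes([0x6D, 0x00])  # status word: INS not supported

def handle_apdu(capdu: bytes) -> bytes:
    """Return the response APDU for a received command APDU."""
    ins = capdu[1]                           # layout: CLA INS P1 P2 [Lc data] [Le]
    if ins == 0xA4:                          # SELECT (e.g. the eMRTD application)
        return SW_OK
    if ins == 0x84:                          # GET CHALLENGE (used during BAC)
        return bytes(8) + SW_OK              # fixed nonce is fine for a simulation
    return SW_INS_NOT_SUPPORTED              # reject anything else

# Example: the reader selects the eMRTD application, then asks for a challenge.
select_mrtd = bytes.fromhex("00A4040C07A0000002471001")
print(handle_apdu(select_mrtd).hex())                    # 9000
print(handle_apdu(bytes.fromhex("0084000008")).hex())    # 8 nonce bytes + 9000
```

In the real system this handler sits behind the TTCN-3 runtime and the contactless hardware adapter shown in the figure.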

The successful implementation of a TTCN-3 based test system shows that the use of TTCN-3 fits the requirements of conformance testing of eMRTD or other eID systems. The prototype demonstrates the feasibility of using such formal techniques for protocols which would improve the quality and repeatability of tests and reduce room for interpretation.

An overview of this project and its results was written up by Jean-Marc Chareau, Laurent Velez and Zdenek Riha in ETSI White Paper No. 7.



Update of RF and Protocol Testing Part 3 V2.06 online

The MRTD group of ICAO has released an update (version 2.06) with clarifications of their technical report RF and Protocol Testing Part 3, focusing on conformity and protocol testing for ePassports implementing protocols like BAC and Supplemental Access Control (SAC), i.e. PACEv2.

The new version 2.06 of this technical report, focusing on protocol testing, includes the following changes:

  • General: Several test cases now additionally accept an Execution Error in the expected results.
  • General: The term ePassport has been replaced by eMRTD.
  • General: An additional profile was added: “EAC or PACE or AA-ECDSA”.
  • General: The profiles of several test cases were extended.
  • General: Compatibility with both PACE and BAC in most test cases of ISO_D and ISO_E.
  • General: The CAR from the DV certificate is used during Terminal Authentication instead of reading the CAR from the file EF.CVCA.
  • ISO7816_C_04: The command GET CHALLENGE must not have been performed.
  • ISO7816_P_10: This test case was deleted.
  • ISO7816_P_73: Allows multiple PACEInfo if just one parameter ID is being used.
  • ISO7816_P_74: Allows multiple PACEInfo if just one parameter ID is being used.
  • ISO7816_P_75: Requires two PACEInfo elements using the same OID and different parameter IDs.
  • LDS_A_03: Now LDS version 1.8 is also accepted.
  • LDS_B_13: Added new assertions on the date (day and month).
  • LDS_D_06: Additional test step checking the LDS info object.

In the past I have missed such a list for every newly released version of test specifications, like BSI TR-03105 or the ICAO technical reports. You can find a list of modified test cases for protocol testing in the last version of BSI TR-03105 Part 3.2 in a previous post.

So I hope this list of modified test cases is helpful for your work in the context of ePassport testing. If you are interested, please leave a comment and I will update this list with every new version of the test specifications in the context of smart cards used in ePassports and ID cards.


Results SAC Interoperability Test in Madrid 2014

The European Commission (EC) and the International Civil Aviation Organization (ICAO) organized a SAC interoperability test in Madrid at the end of June 2014. The objective of this interoperability test was to assure that European countries are ready to launch Supplemental Access Control (SAC), i.e. PACEv2, at the end of this year. The following countries participated in the test (in alphabetical order):

  • Australia
  • Austria
  • Belgium
  • Bosnia Herzegovina
  • Czech Republic
  • Finland
  • France
  • Germany
  • Iceland
  • Italy
  • Japan
  • Netherlands
  • Norway
  • Portugal
  • Slovenia
  • Spain
  • Sweden
  • Switzerland

The SAC interoperability test was also open for industry. The following vendors participated with their ePassport solutions (in alphabetical order):

  • 3M
  • Arjowiggins
  • Athena
  • De La Rue
  • EDAPS
  • Gemalto
  • Giesecke & Devrient
  • IRIS
  • Masktech
  • Oberthur
  • PWPW
  • Safran Morpho
  • Toshiba

Every participant had the chance to submit up to two different sets of ePassports with different implementations. Altogether there were 52 samples available during the test session. All ePassports were tested in two different test procedures: a Crossover Test and a Conformity Test. The focus here is on the Conformity Test, because this kind of test concentrates on the protocols. Three test labs (Keolabs, TÜViT + HJP Consulting and UL) took part in the interoperability test with their test tools to perform a subset of “ICAO TR RF Protocol and Application Test Standard for e-Passports, Part 3”. The subset includes the following test suites:

  • ISO7816_O: Security conditions for PACE protected eMRTDs
  • ISO7816_P: Password Authenticated Connection Establishment (PACEv2)
  • ISO7816_Q: Command READ and SELECT for file EF.CardAccess
  • LDS_E: Matching between EF.DG14 and EF.CardAccess
  • LDS_I: Structure of EF.CardAccess

During the conformity test, the three test labs performed 21,282 test cases altogether. Nearly 3% of these test cases failed.

The following diagram shows the results of the conformity test as part of the SAC interoperability test. There were five samples with zero failures, seven samples with one failure, twenty-seven samples with 2, 3 or 4 failures, five samples with 5 to 20 failures and eight samples with more than twenty failures:

This diagram describes the number of failures per document

The following diagram shows the failures per sample:

This diagram shows the number of failures per document

All documents supported either Integrated Mapping (IM), Generic Mapping (GM) or both. The following diagram shows the distribution of the mapping protocols:

This diagram shows the relation between Generic Mapping and Integrated Mapping

Within the mapping protocol it is possible to choose ECDH, DH or both of them. The samples of the SAC interoperability test mostly supported ECDH, as shown in the following diagram:

This diagram shows the relation between ECDH and DH in Mapping

The observations of the conformity test (part of SAC interoperability test) are:

  • the document quality varies from “close to release state” to “experimental state”
  • there are different interpretations of padding in EF.CardAccess and EF.DG14, encoding of TerminalAuthenticationInfo in EF.DG14, the use of DO84 in PACE and the use of parameter ID when proprietary or standardized domain parameters are used
  • certificates for EAC protocol (e.g. test case 7816_O_41) were missing or not usable
  • the test labs used different versions of the test specification (version 2.01 vs. version 2.06)

Update 1: You can find a discussion concerning the test results on LinkedIn here.

Update 2: You can find the slides of the presentation we held at the end of the SAC Interoperability Test here.


Update of BSI TR-03105 Part 3.2 available (V1.4.1)

The German BSI and the French AFNOR have released an update with minor clarifications of their technical guideline BSI TR-03105 Part 3.2, focusing on conformity tests for ePassports implementing protocols like PACE (SAC) and EACv1.

The new version 1.4.1 of TR-03105 Part 3.2 includes changes in the following test cases:

  • ISO7816_II_2: The missing profile ‘ECDH’ is added to the profile of this test case according to the corresponding test case ISO7816_I_2 in test suite I.
  • ISO7816_II_3: There is a new test step added (step 3) to perform the additional command GENERAL AUTHENTICATE to perform key agreement correctly.
  • ISO7816_K_19: There are several interpretations of how to handle the ‘presence’ of a data group. A simple SELECT command to detect a data group on the chip is insufficient and may cause problems. In this test case the presence of data group EF.DG15 should be used as an indicator to perform Active Authentication. In the new version of this test case the wording is adapted to TR-03110 and changed from “is present” to “if available”. In this way the discussion is moved from TR-03105 to TR-03110. From my point of view it makes sense to check whether the relevant data group is listed in the file EF.SOD. The information in EF.COM is not secured by Passive Authentication and may be corrupted. EF.SOD, in contrast, is secured and can be used as an indicator of the existence of a file on the chip.
  • ISO7816_L_13: In step 9 of this test case the command MUTUAL AUTHENTICATE is performed. In the old version of the specification this command was incomplete. The missing Le byte has now been added, so the command expects 40 bytes (‘28’ in hex) as the response.
  • ISO7816_L_14: In the previous version of TR-03105, a SELECT MF with parameter P2 = ‘0C’ is performed in step 8 of this test case. ISO 7816-4 specifies for bits b4=1 and b3=1 that no response data is expected if the Le field is absent. This command causes problems on some COS implementations, so it is replaced by a SELECT with P2 = ’00’ and Le = ’00’.
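To make the last two changes concrete, here is a hedged sketch of the raw bytes involved, following the ISO 7816-4 layout CLA INS P1 P2 [Lc Data] [Le]. The exact data field of the SELECT in the test case may differ (it might carry the file ID ‘3F00’); the point is the P2/Le difference.

```python
# Sketch of the two SELECT MF variants from ISO7816_L_14 (ISO 7816-4 layout).
old_select = bytes.fromhex("00A4000C")    # old step 8: P2='0C', Le absent -> no response data
new_select = bytes.fromhex("00A4000000")  # new step 8: P2='00', Le='00' -> FCI may be returned

# ISO7816_L_13: the added Le byte '28' requests 40 bytes of response data.
assert 0x28 == 40
print(old_select.hex(), new_select.hex())
```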

In the past I have missed such a list for every newly released version of test specifications, like BSI TR-03105 or the ICAO technical reports. So I hope this list of modified test cases is helpful for your work in the context of ePassport testing. If you are interested, please leave a comment and I will update this list with every new version of the test specifications in the context of smart cards used in ePassports and ID cards.


Interoperability Test for Supplemental Access Control (SAC)

During the ICAO Regional Seminar on Machine Readable Travel Documents (MRTD) in Madrid from 25th to 27th of June 2014 there will also be the opportunity for an interoperability test of ePassports with Supplemental Access Control (SAC). The SAC protocol is replacing Basic Access Control (BAC) used in ePassports and will be obligatory in the EU from December 2014. SAC is a mechanism specified to ensure that only authorized parties can wirelessly read information from the RFID chip of an ePassport. SAC is also known as PACE v2 (Password Authenticated Connection Establishment). PACE v1 is used as a basic protocol in the German ID card and was developed and specified by the German BSI.

An interoperability test is similar to a plugtest performed e.g. by ETSI. It is an event during which devices (ePassports, inspection systems and test tools) are tested for interoperability with emerging standards by physically connecting them. This procedure allows all vendors to test their devices against other devices. Besides these crossover tests, there is additionally the opportunity to test the devices against conformity test suites implemented in test tools like GlobalTester. This procedure reduces effort and allows comprehensive failure analyses of devices like ePassports or inspection systems. Well-established test specifications are available, both for ePassports and for inspection systems. Publishers of these test specifications are the German BSI (TR-03105) and ICAO (TR – RF and Protocol Testing Part 3).

You can find further information corresponding to this event on the ICAO website. The website will be updated frequently.


PersoSim – an open source eID simulator

The German Federal Office for Information Security (BSI) started a project for an open source eID simulator. The simulator allows a wide range of personalisation, is more flexible than a real card and is free to use.

There is a rising need for test cards among developers of eID clients and companies that want to offer services using the eID functions of the German ID card (nPA, elektronischer Personalausweis). Today it is difficult for developers who want to evaluate the eID functions in their systems to get test cards. An open implementation of the eID functions would also be helpful for the improvement and development of new protocols as well as for tests of established ones. Therefore the German BSI started a project with HJP Consulting to implement an open source eID simulator which provides all logical functions of the German ID card.

The website of the project is www.persosim.de (in German only) and the first version of the simulator is ready for download there. There is also a virtual driver available that simulates a card reader. In this way you can simulate both card and reader for testing purposes.

Update 1: We have published an article about PersoSim in English in The VAULT (the magazine of Silicon Trust). You can find the article for free in The VAULT #14.

Update 2: We have released the source code of the simulator and are using GitHub as the repository. You can find all relevant information on the PersoSim project website. Please feel free to fork the code and extend the project with new features.


Standards of Smart Cards in ISO Layer Model

Smart card applications are usually based on international standards and norms. The protocols mentioned in this blog in the context of ePassports, like Supplemental Access Control (SAC) or Password Authenticated Connection Establishment (PACE), are also based on international ISO standards. The following figure shows the relevant ISO standards for contact-based smart cards on the one side and contactless smart cards on the other side:

Smart Cards in context of ISO/OSI Layer Model


The main standard for contact-based smart cards is ISO 7816; the main standard for contactless smart cards is ISO 14443. On the application level both types of smart cards use ISO 7816, where all commands (Application Protocol Data Units, APDUs) and file systems are described. Protocols are composed of these commands and use the access rights and file systems specified in this standard. The standard ISO 7816-4 (Integrated circuit cards – Part 4: Organization, security and commands for interchange) is important for nearly all smart card applications. Using this standard enables applications to interoperate in various open environments; e.g. a credit card can be read by different terminals all over the world because credit card and terminal use the same standard.
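The command structure defined in ISO 7816-4 can be shown with a small encoder. This is a hedged sketch covering only the short APDU form; extended-length fields and the various ISO cases are left out, and the example command is an ordinary READ BINARY.

```python
# Hedged sketch of the ISO 7816-4 short APDU layout:
# 4-byte header (CLA INS P1 P2), optional Lc + command data, optional Le.

def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data   # Lc field (short form, 1 byte)
    if le is not None:
        apdu += bytes([le & 0xFF])          # Le field; Le='00' requests up to 256 bytes
    return apdu

# READ BINARY of 4 bytes from the currently selected file:
print(build_apdu(0x00, 0xB0, 0x00, 0x00, le=4).hex())  # 00b0000004
```

The same builder works for contact-based and contactless cards, since both use ISO 7816-4 on the application level as described above.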

ISO 14443 specifies the contactless mechanisms of smart cards. Smart cards may be type A or type B; both of them communicate via radio at 13.56 MHz. The main differences between these two types concern modulation methods, coding schemes (ISO 14443-2) and protocol initialization procedures (ISO 14443-3). Both types use the same transmission protocol, described in ISO 14443-4. The transmission protocol specifies mechanisms like data block exchange and waiting time extension. In the contactless world a reader is called a proximity coupling device (PCD) and the card itself is called a proximity integrated circuit card (PICC).



Use a RasPi to hunt for EnOcean telegrams

During the last months I have spent some hours studying the specifications of EnOcean telegrams. These telegrams are used in the domain of home automation. The EnOcean Alliance publishes all necessary specifications on their website. One of the relevant specifications is the EnOcean Serial Protocol 3 (ESP3). In this description you can find all the information needed to understand the protocol used by EnOcean. This protocol is also standardized and published as ISO/IEC 14543-3-10.

If you are interested in collecting telegrams to analyze them and to understand the protocol behind them, the following project may be interesting for you: EnOceanSpy. I have hosted this small piece of software on GitHub. It is written in C and there is a binary version available for the Raspberry Pi (RasPi). In this way you can use your RasPi in combination with a USB300 stick. The following photo shows a setup including a WakaWaka as power source.

RasPi_WakaWaka_USB300

EnOcean allows both one-way and bidirectional communication between devices. Currently most of this communication is not encrypted, so you can read all information transmitted over the air. There is a first specification for using cryptography with the EnOcean protocol. I will give an overview of this encryption scheme in an upcoming post.
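To give an impression of what EnOceanSpy has to parse, here is a hedged Python sketch of the ESP3 frame layout as I read the specification: a sync byte 0x55, a 4-byte header (2-byte data length, 1-byte optional-data length, 1-byte packet type), a CRC8 over the header, the payload, and a CRC8 over the payload. The CRC8 polynomial 0x07 and the example payload bytes are my assumptions from the spec, so treat the values with care.

```python
# Hedged sketch of ESP3 (EnOcean Serial Protocol 3 / ISO/IEC 14543-3-10) framing.

def crc8(data: bytes) -> int:
    """CRC8, polynomial 0x07, init 0 (as I read the ESP3 spec)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def build_frame(pkt_type: int, data: bytes, optional: bytes = b"") -> bytes:
    header = len(data).to_bytes(2, "big") + bytes([len(optional), pkt_type])
    return (bytes([0x55]) + header + bytes([crc8(header)])
            + data + optional + bytes([crc8(data + optional)]))

def parse_esp3(frame: bytes):
    """Split one ESP3 frame into (packet_type, data, optional_data)."""
    assert frame[0] == 0x55, "missing sync byte"
    data_len = int.from_bytes(frame[1:3], "big")
    opt_len, pkt_type = frame[3], frame[4]
    assert crc8(frame[1:5]) == frame[5], "header CRC mismatch"
    data = frame[6:6 + data_len]
    optional = frame[6 + data_len:6 + data_len + opt_len]
    assert crc8(data + optional) == frame[6 + data_len + opt_len], "data CRC mismatch"
    return pkt_type, data, optional

# Round trip with an illustrative radio telegram payload:
frame = build_frame(0x01, bytes.fromhex("f650"))
print(parse_esp3(frame))
```

A sniffer like EnOceanSpy essentially scans the serial stream from the USB300 for the 0x55 sync byte and then applies exactly this kind of header and CRC check.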

Have fun scanning your environment for EnOcean devices 🙂



List of upcoming test events

The colleagues at testevents.com have set up a list of upcoming test events all over the world (testing in general, not only focussing on protocols). You can filter by country or category and get information on the corresponding calls for papers. The calendar lists various test events, e.g. the German Testing Day taking place in November 2013 in Munich or the EuroStar Software Testing Conference taking place in November 2013 in Gothenburg, Sweden.

This list may help you to plan your attendance at important conferences and to keep the CfP deadlines in mind. On their website you can also find some book and magazine references, all focusing on testing. Thanks to the team of testevents.com for this useful service!


Automatic border control (eGate)

Back in the office after three weeks of holiday, I would like to show you one of the high-level results of all these interoperability tests in the domain of ePassports. Today a German consortium (i.a. Bundesdruckerei and Secunet) won a tender for biometric-based eGates to be rolled out at several airports across the country in the coming years. These eGates use well-known protocols such as Basic Access Control (BAC) or Supplemental Access Control (SAC) to establish a secure channel between the reader and the smart card of the ePassport via the ISO 14443 interface for contactless smart cards. An automatic border control (ABC) system like this allows passengers to pass the gate in less than 30 seconds. A comparable system can currently be found at Heathrow airport.

The following figure shows a typical simplified workflow of such a border control system:
Border Control Process

To reduce waiting time for passengers the system uses an automatic process. At the beginning the citizen enters the gate and presents his ePassport. An inspection system scans the machine readable zone of the data page to derive a cryptographic key to get access to the contactless smart card. As soon as all data groups of the chip have been read, the inspection system verifies the authenticity of the data to assure the validity of the ePassport chip at hand. In the meantime the face recognition system scans the citizen to capture a facial image. This image is compared with the facial image stored on the chip (biometric verification). If both images match and the ePassport is not blacklisted, the citizen can pass the gate.
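The step “scan the MRZ to derive a cryptographic key” can be sketched for the BAC case: the document number, date of birth and date of expiry (each followed by its check digit) form the key seed input, from which the session key material is derived via SHA-1, as specified in ICAO Doc 9303. This is a simplified sketch: the final DES parity bit adjustment of the 3DES keys is omitted, and the sample MRZ data is the well-known specimen from the standard’s worked example.

```python
# Hedged sketch of BAC key derivation (ICAO Doc 9303), parity adjustment omitted.
import hashlib

def mrz_check_digit(field: str) -> str:
    """ICAO 9303 check digit: weights 7,3,1 repeating; A=10..Z=35; '<' counts as 0."""
    weights, total = (7, 3, 1), 0
    for i, ch in enumerate(field):
        value = int(ch) if ch.isdigit() else (ord(ch) - 55 if ch.isalpha() else 0)
        total += value * weights[i % 3]
    return str(total % 10)

def derive_bac_keys(doc_no: str, birth: str, expiry: str):
    """Derive (K_enc, K_mac) from the three MRZ fields read by the gate."""
    mrz_info = (doc_no + mrz_check_digit(doc_no)
                + birth + mrz_check_digit(birth)
                + expiry + mrz_check_digit(expiry)).encode("ascii")
    k_seed = hashlib.sha1(mrz_info).digest()[:16]          # first 16 bytes of SHA-1
    def kdf(counter: int) -> bytes:                        # SHA-1(Kseed || counter)[:16]
        return hashlib.sha1(k_seed + counter.to_bytes(4, "big")).digest()[:16]
    return kdf(1), kdf(2)                                  # K_enc, K_mac

k_enc, k_mac = derive_bac_keys("L898902C<", "690806", "940623")
print(k_enc.hex(), k_mac.hex())
```

These keys are then used to secure the GET CHALLENGE / MUTUAL AUTHENTICATE exchange before any data group is read.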


Test tool overview

The colleagues at Imbus have started a new platform with a list of test tools in various categories. There you can find both commercial and open source test software. The list is available in German and in English. All listed tools are classified as follows:

  • Code and coverage analysis
  • Continuous integration
  • Defect and change management
  • Load and performance test
  • Test automation
  • Test management
  • Test specification and generators

You are invited to submit your tool and to exchange experiences and tips on the platform. Thanks to Imbus for this great and helpful overview!


Next generation of ePassport testing

Developing and implementing conformity tests is a time-consuming and fault-prone task. To reduce this effort a new route must be taken. The current way of specifying tests and implementing them includes too many manual steps. Based on the experience of testing electronic smart cards in ID documents like ePassports or ID cards, I describe here a new way to save time when writing test specifications and deriving test cases from them. It is possible to improve the specification and implementation of tests significantly by using new technologies such as model based testing (MBT) and domain specific languages (DSL). I am describing my experience in defining a new language for testing smart cards based on a DSL and models, and in using this language to generate both documents and test cases that can run in several test tools. The idea of using a DSL to define a test specification goes back to a tutorial by Markus Voelter and Peter Friese, held during the conference Software Engineering 2010 in Paderborn.

With the introduction of smart cards in ID documents the verification of these electronic parts has become more and more important. The German Federal Office for Information Security (BSI) defines technical guidelines that specify several tests required to fulfill compliance. These guidelines include tests on the electrical and physical layer on the one hand, and tests on the application and data layer on the other hand. Here I focus on the tests on the last two layers, because these tests can be implemented completely in software.

In TR-03105 the BSI specifies several hundreds of test cases concerning the data format of smart cards and also the commands and protocols used to communicate with the chip.

In the past the typical approach was divided into several separate steps. First the BSI specified a list of test cases and published them in a document that was written manually by an editor of the technical guideline. Then several test houses and vendors of test tools implemented all the test cases of the respective guideline in their software solutions. All these steps had to be done manually, which means: the software engineers of each institution read the guideline and implemented test case by test case in their particular test environment. With every new version of the guideline this procedure had to be repeated. In the beginning, the update cycle of these test specifications was very frequent because all the feedback collected in the field was included in the guideline and new versions were published at short intervals:

figure_1

This way of producing test specifications is inefficient because of the large number of manual steps. The transformation from test specification to implementation is not only inefficient but also fault-prone: every test case in the guideline must be formulated in “prose” by the editor, and every engineer must implement the test case in the respective programming language. The consistency of the tests must also be maintained manually by the editor.

Furthermore, the writing of test specifications is an extensive part of conformity testing. The editor of such a specification generally uses word processing software that is useful for, e.g., writing short letters. But this kind of software is not really convenient for writing technical specifications like TR-03105. A typical problem is versioning. It would be most helpful for developers if the editor used the track changes mode when changing test cases; this way the developers could easily detect changes. But this advantage depends on the mode being activated. As soon as the editor forgets to activate the track changes mode, implementing these changes becomes more and more complicated.

Due to an increasing number of new requirements for the applications running on smart cards, the complexity of these systems grows higher and higher. Walter Fumy’s “Handbook of eID Security” illustrates the history of eID documents from purely visual ones to future versions. The complexity of these applications will result in so many test cases that the current approach of writing and implementing test specifications is a dead end.

With recent results from Model Driven Software Development (MDSD) this dead end can be avoided. New techniques and tools now allow us to switch from manual steps to a more automated procedure. The goal is to write only one “text” that can be used as a source for all test tools. The solution is a model that defines the test cases, plus transformations of this model to other platforms and formats.

With this new approach, the process of specifying tests can be reduced to the interesting part, where the editor can use his creativity to conceive new tests instead of wrestling with office software.

Defining a language that describes the test cases is the basis for this procedure. This grammar can be used to model test cases, and based on this model all the needed artifacts can be generated. The following figure visualizes this process: there is one meta test specification that is used to generate not only the human-readable document but also the tool-specific test cases for every test environment.

figure_2

One solution for defining a language is Xtext. With Xtext the user gets a complete environment based on Eclipse to develop his own domain specific language (DSL). One of the benefits of Xtext is the editor that is generated automatically by the tool itself. This editor includes code completion, syntax coloring, code folding and an outline view, and is very helpful for writing test cases. Every test case that is not compliant with the grammar is marked as faulty. So the editor of the specification can recognize a faulty test case immediately, just like a software developer in an Integrated Development Environment (IDE).

Additionally, the user can implement generators to produce code for the target platform. These generators are called by the Modeling Workflow Engine (MWE). They are powerful and productive tools for providing test cases for different platforms.

In the public sector it is more and more important to write barrier-free (accessible) documents. It takes a lot of time to write a barrier-free document based on a typical technical specification. With a generator that produces the human-readable document, the author of the test specification can use generic templates that produce barrier-free documents automatically, because the generator can apply rules that fulfill even these standards.

Once the user has generated a new test specification or a new test case for a particular test tool, he can modify this artifact by adding some special features, e.g. a special library for one test case. With a model based test specification it is possible to re-import this modified artifact into the model to assure persistency. I presented and published my first experience with this at ICST 2011.

This approach helps to write test specifications in a technical way on a meta level, but it does not focus on the content of the test specification. Thus, the approach helps to write the document, but it does not help to produce the content needed. Currently, the quality of a test specification depends on the background of its author. With his knowledge of the protocols and their pitfalls he can specify interesting test cases. But many test cases contain the same scenarios (wrong length, set a value to zero, use the maximum or minimum value, and so on). It would be more reasonable and economical if the author could focus on special test cases for the relevant protocols and their pitfalls while “standard” test cases were generated automatically. On the other hand, test specifications written by humans always run the risk of being inconsistent, error-prone and imprecise. Additionally, it is always rather time-consuming to write test specifications manually.

To address the problems described above, a consortium of BSI, HJP Consulting, s-lab Software Quality Lab (University of Paderborn) and TÜViT started a research project named MOTEMRI (Modellbasiertes Testen mit Referenzimplementierung, i.e. model based testing with a reference implementation). In MOTEMRI a model is developed that contains all relevant information of the popular protocol PACE. This model is specified in UML, so everybody who is interested can read and modify the diagrams easily. In this way it is possible to adapt the model to new protocols or to new protocol versions developed in the future. Based on the model, algorithms generate test cases automatically. Thereby the knowledge of designing test cases is encoded in software, independent of the knowledge of the author of the test specification. This procedure also allows generating negative test cases automatically with various “wrong” values. Using random values allows better testing and leads to better chip implementations.
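The automatic generation of negative test cases from a model can be sketched like this. The parameter model and its field names are illustrative assumptions, not MOTEMRI’s real UML model; the point is that the standard “wrong value” scenarios (zero, below minimum, above maximum, random wrong value) come from generic rules rather than from the specification author.

```python
# Hedged sketch: derive negative test cases mechanically from a parameter model.
import random

param_model = {"name": "nonce_length", "valid": 8, "min": 8, "max": 255}

def generate_negative_cases(model, rng=random.Random(42)):
    """Apply generic mutation rules to one parameter of the model."""
    wrong_random = rng.choice(
        [v for v in range(model["min"], model["max"] + 1) if v != model["valid"]])
    cases = [
        ("zero value", 0),
        ("below minimum", model["min"] - 1),
        ("above maximum", model["max"] + 1),
        ("random wrong value", wrong_random),   # seeded RNG keeps runs reproducible
    ]
    return [{"param": model["name"], "scenario": s, "value": v} for s, v in cases]

for case in generate_negative_cases(param_model):
    print(case["scenario"], case["value"])
```

New mutation rules added here would immediately apply to every parameter in the model, which is the economy-of-scale argument made above.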
