Category Archives: ePassport

Update of RF and Protocol Testing Part 3 V2.07 online

There is a maintenance update of ICAO's test specification ‘RF and Protocol Testing Part 3’ available since today. The specification focuses on conformity and protocol testing for ePassports implementing protocols like BAC and Supplemental Access Control (SAC), i.e. PACE v2.

The Technical Advisory Group (TAG) of ICAO has endorsed the release on the ICAO website, so from now on the test specification can be referenced officially. Version 2.07 of the test specification contains no technical or fundamental changes, only editorial ones. The following test cases have been modified in the new release 2.07:

  • ISO7816_B_16: Profile corrected
  • ISO7816_B_26: Added version
  • ISO7816_B_34: Profile corrected
  • ISO7816_B_52: Profile corrected
  • ISO7816_D_06: Updated version
  • ISO7816_D_09 – ISO7816_D_22: Profile corrected and version updated
  • ISO7816_E_09 – ISO7816_E_22: Profile corrected and version updated
  • ISO7816_F_20: Profile corrected and version updated
  • ISO7816_G_20: Profile corrected and version updated
  • ISO7816_O_12: Deleted obsolete Test-ID
  • ISO7816_O_13: Corrected sequence tags
  • ISO7816_O_31: Deleted obsolete Test-ID
  • ISO7816_O_35: Added missing caption
  • ISO7816_P_xx: Deleted sample in description of step 1 (‘i.e. more than one set of domain parameters are available for PACE’)
  • ISO7816_P_04: Corrected numbering in expected results
  • ISO7816_P_06: Corrected numbering in expected results
  • ISO7816_P_07: Corrected numbering in expected results
  • ISO7816_P_14: Updated version
  • ISO7816_P_74: Precondition step 3 made more concrete concerning PACEInfos in EF.CardAccess
  • ISO7816_Q_03: Added missing reference TR-SAC
  • LDS_D_06: Corrected typos in step 8

 

With the new edition of Doc 9303 several technical reports are now obsolete, except the test specifications, which remain independent documents.

Update of ICAO Doc 9303 Edition

The International Civil Aviation Organization (ICAO) has released the seventh edition of ICAO Doc 9303. This document is the de-facto standard for machine readable travel documents (MRTDs). It specifies passports and visas, starting with the dimensions of the travel document and ending with the protocols used by the chip integrated into the travel document.

ICAO Doc 9303 Title page

A fundamental problem of the old sixth edition of Doc 9303 (released in 2006) is the fact that there are, in total, 14 supplemental documents. All of these supplements contain clarifications and corrections of Doc 9303; Supplement 14, for example, addresses 253 different issues. Additionally, there are separate documents specifying new protocols like Supplemental Access Control (SAC), also known as PACE v2. Therefore ICAO started in 2011 to restructure the specifications, with the result that all these technical reports, guidelines and supplements are now consolidated in the seventh edition of ICAO Doc 9303. Several inconsistencies between the documents have also been resolved. In this way several technical reports, like TR – Supplemental Access Control for MRTDs V1.1 and TR – LDS and PKI Maintenance V2.0, become obsolete with the seventh edition of Doc 9303.

The new edition of ICAO Doc 9303 now consists of twelve parts:

  • Part 1: Introduction
  • Part 2: Specifications for the security of the design, manufacture and issuance of MRTDs
  • Part 3: Specifications common to all MRTDs
  • Part 4: Specifications for Machine Readable Passports (MRPs) and other td3 size MRTDs
  • Part 5: Specifications for td1 size Machine Readable Official Travel Documents (MROTDs)
  • Part 6: Specifications for td2 size Machine Readable Official Travel Documents (MROTDs)
  • Part 7: Machine Readable Visas
  • Part 8: RFU (Reserved for future use): Emergency Travel Documents
  • Part 9: Deployment of biometric identification and electronic storage of data in eMRTDs
  • Part 10: Logical Data Structure (LDS) for storage of biometrics and other data in the contactless integrated circuit (IC)
  • Part 11: Security mechanisms for MRTDs
  • Part 12: Public Key Infrastructure (PKI) for MRTDs

From a protocol point of view there are two particularly interesting parts in Doc 9303: Part 10 describes the data structures used on the smart card to store information, and Part 11 describes the technical protocols used to get access to this data, e.g. Chip Authentication Mapping.
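
To give a rough impression of how the two parts fit together, here is a small Python sketch (illustrative only, not taken from Doc 9303 itself): it lists a selection of LDS files defined in Part 10 with their file identifiers and builds the SELECT command used to open the eMRTD application; actually reading these files additionally requires the access protocols from Part 11, which are omitted here.

    # Illustrative sketch: a selection of LDS files from Doc 9303 Part 10 and the
    # SELECT APDU for the eMRTD application. Access control (BAC/PACE) is omitted.

    LDS_APPLICATION_AID = bytes.fromhex("A0000002471001")   # AID of the eMRTD application

    LDS_FILES = {
        "EF.COM":  0x011E,   # list of the data groups present on the chip
        "EF.DG1":  0x0101,   # MRZ data
        "EF.DG2":  0x0102,   # encoded facial image
        "EF.DG14": 0x010E,   # security infos, e.g. Chip Authentication public key
        "EF.DG15": 0x010F,   # Active Authentication public key
        "EF.SOD":  0x011D,   # Document Security Object used for Passive Authentication
    }

    def select_application_apdu(aid: bytes) -> bytes:
        """SELECT by AID (ISO 7816-4): CLA=00, INS=A4, P1=04, P2=0C."""
        return bytes([0x00, 0xA4, 0x04, 0x0C, len(aid)]) + aid

    print(select_application_apdu(LDS_APPLICATION_AID).hex())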

Special thanks to Garleen Tomney-McGann, working at the ICAO headquarters in Montreal. As a member of the Traveller Identification Programme (TRIP) she coordinated all the activities resulting in the seventh edition of ICAO Doc 9303.

Chip Authentication Mapping

Supplemental Access Control (SAC) is a set of security protocols published by ICAO to protect personal data stored in electronic travel documents like ePassports and ID cards. One protocol of SAC is the well-known Password Authenticated Connection Establishment (PACE) protocol, which supplements and enhances Basic Access Control (BAC). PACE was originally developed by the German Federal Office for Information Security (BSI) to provide a cryptographic protocol for the German ID card (Personalausweis).

Currently PACE supports three different kinds of mapping as part of the security protocol execution:

  • Generic Mapping (GM), based on a Diffie-Hellman key agreement,
  • Integrated Mapping (IM), based on a direct mapping of a field element to the cryptographic group,
  • Chip Authentication Mapping (CAM), which extends Generic Mapping and integrates Chip Authentication.

Since version 1.1 of the ICAO technical report TR – Supplemental Access Control for MRTDs, a third mapping procedure for PACE is specified: Chip Authentication Mapping (CAM), which extends the established Generic Mapping. This third mapping combines PACE and Chip Authentication into a single protocol, PACE-CAM. In this way Chip Authentication Mapping can be performed faster than both protocols executed separately.

The chip indicates support of Chip Authentication Mapping by the presence of a corresponding PACEInfo structure in the file EF.CardAccess. The Object Identifier (OID) in this structure defines the cryptographic parameters that must be used during the mapping. CAM supports AES with key lengths of 128, 192 and 256 bits. In contrast to GM and IM, however, there is no support for 3DES (for security reasons), and only ECDH is supported as key agreement.
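
As a hedged illustration, the following Python snippet shows how a terminal could map the CAM OIDs found in EF.CardAccess to the offered AES key lengths. The OID values are the id-PACE-ECDH-CAM-AES-CBC-CMAC-* identifiers as I recall them from BSI TR-03110 / Doc 9303; please verify them against the specification before relying on them.

    # Sketch only: map PACEInfo OIDs to CAM AES key lengths. OID values reproduced
    # from memory (id-PACE-ECDH-CAM-AES-CBC-CMAC-128/192/256); verify before use.

    PACE_ECDH_CAM_OIDS = {
        "0.4.0.127.0.7.2.2.4.6.2": 128,
        "0.4.0.127.0.7.2.2.4.6.3": 192,
        "0.4.0.127.0.7.2.2.4.6.4": 256,
    }

    def cam_key_lengths(pace_info_oids: list[str]) -> list[int]:
        """Return the AES key lengths offered via CAM for the OIDs found in EF.CardAccess."""
        return [PACE_ECDH_CAM_OIDS[oid] for oid in pace_info_oids if oid in PACE_ECDH_CAM_OIDS]

    # Example: a chip announcing PACE-ECDH-GM with AES-128 and PACE-ECDH-CAM with AES-128
    print(cam_key_lengths(["0.4.0.127.0.7.2.2.4.2.2", "0.4.0.127.0.7.2.2.4.6.2"]))  # -> [128]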

The mapping phase of CAM itself is identical to the mapping phase of GM. The ephemeral public keys are encoded as elliptic curve points.

To perform PACE, a chain of GENERAL AUTHENTICATE commands is used. For CAM there is a deviation in step 4, when Mutual Authentication is performed. In this step the terminal sends its authentication token (tag 0x85) and expects the authentication token of the chip (tag 0x86). Additionally, in CAM the chip also sends encrypted chip authentication data with tag 0x8A to the terminal.
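
The following Python sketch illustrates this step: it builds the final GENERAL AUTHENTICATE command carrying the terminal's token and picks the chip token and the CAM-specific encrypted chip authentication data out of the response template (tag 0x7C). Secure messaging and the preceding steps of the chain are omitted, and the minimal TLV handling is only meant for this illustration.

    # Sketch: step 4 (Mutual Authentication) of a PACE-CAM run as seen on APDU level.

    def tlv(tag: int, value: bytes) -> bytes:
        """Minimal BER-TLV encoder for single-byte tags (values shorter than 256 bytes)."""
        length = bytes([len(value)]) if len(value) < 0x80 else bytes([0x81, len(value)])
        return bytes([tag]) + length + value

    def build_mutual_auth_apdu(terminal_token: bytes) -> bytes:
        """Last GENERAL AUTHENTICATE of the chain: CLA=00 (no chaining bit), INS=86."""
        data = tlv(0x7C, tlv(0x85, terminal_token))   # 0x85 = authentication token of the terminal
        return bytes([0x00, 0x86, 0x00, 0x00, len(data)]) + data + b"\x00"

    def parse_mutual_auth_response(resp: bytes) -> dict:
        """Return {tag: value} for the objects inside the Dynamic Authentication Data (0x7C)."""
        assert resp[0] == 0x7C
        offset = 2 + (resp[1] & 0x7F if resp[1] & 0x80 else 0)   # skip outer tag and length
        objects = {}
        while offset < len(resp):
            tag, length, skip = resp[offset], resp[offset + 1], 2
            if length == 0x81:
                length, skip = resp[offset + 2], 3
            objects[tag] = resp[offset + skip:offset + skip + length]
            offset += skip + length
        return objects

    # objects[0x86] is the chip's token; for CAM, objects[0x8A] holds the encrypted CA data.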

If the GENERAL AUTHENTICATE chain was performed successfully, the terminal must perform the following two steps to authenticate the chip (a rough sketch follows the list):

  1. Read and verify EF.CardSecurity,
  2. Use the public key from EF.CardSecurity in combination with the mapping data and the encrypted chip authentication data received during CAM to authenticate the chip.
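
The sketch below shows the structure of these two steps in Python. It is only structural: verify_card_security() and ec_multiply() are hypothetical helpers (verification of the SignedData in EF.CardSecurity and scalar multiplication on the PACE curve), and the exact cipher parameters for decrypting the data from tag 0x8A should be taken from Doc 9303 Part 11 / TR-03110 rather than from this example.

    # Structural sketch only; helper functions are hypothetical placeholders.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def authenticate_chip_cam(ef_card_security: bytes, k_s_enc: bytes,
                              encrypted_ca_data: bytes, pk_map_ic, domain_params) -> bool:
        # Step 1: read and verify EF.CardSecurity, extract the chip's static public key
        chip_public_key = verify_card_security(ef_card_security)        # hypothetical helper

        # Step 2: decrypt the chip authentication data received in tag 0x8A
        # (AES-CBC with the PACE session encryption key; IV and padding handling
        # are assumptions here and must be taken from the specification)
        decryptor = Cipher(algorithms.AES(k_s_enc), modes.CBC(b"\x00" * 16)).decryptor()
        plaintext = decryptor.update(encrypted_ca_data) + decryptor.finalize()
        ca_ic = int.from_bytes(plaintext, "big")                        # padding removal omitted

        # Accept the chip if the mapping data matches its static key:
        #   PK_Map,IC == CA_IC * PK_IC  (scalar multiplication on the PACE domain parameters)
        return ec_multiply(domain_params, ca_ic, chip_public_key) == pk_map_ic   # hypothetical helper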

Passive Authentication must still be performed in combination with Chip Authentication Mapping before the chip can be considered genuine.

The benefit of Chip Authentication Mapping is the combination of PACE and Chip Authentication: running both protocols as one saves time and performs faster than executing both protocols separately.

You can find interesting information concerning CAM in the patent of Dr. Dennis Kügler and Dr. Jens Bender, available in the corresponding document of the German Patent and Trademark Office.

 

Testing ePassport Readers using TTCN-3

The well-known test tool Titan can now be found under the patronage of the Eclipse Foundation (proposal). This tool was developed by Ericsson several years ago to test the Internet protocol (IP). Titan is based on TTCN-3, a test language focusing on communication protocols. This reminds me of an old project with ETSI where we used TTCN-3 to test ePassport readers according to BSI TR-03105 Part 5.1.

From the end of 2009 to the middle of 2011 ETSI conducted a project to develop a test system prototype for conformance testing of ePassport readers. The objective of this project was to design, build and test a TTCN-3 (Testing and Test Control Notation, Version 3) based test system prototype for ePassport reader conformance testing. This project was a joint effort between the European Joint Research Centre (JRC) in Ispra (Italy) and ETSI in Sophia Antipolis (France). The test language TTCN-3 had already been widely used in many testing projects across different protocols and technologies. However, until this project TTCN-3 had not been applied in the area of contactless smart card testing.

The ETSI Specialist Task Force (STF) 400, with experts from the companies and organisations AMB Consulting, ARH, Comprion, ETSI, FSCOM, HJP Consulting, Masaryk University, Soliatis and Testing Technologies, carried out this project. The work of this STF was split into three main phases:

  1. Design, implementation, and use of ePassport test system
  2. Development of ePassport testing framework
  3. Writing of the documentation and dissemination material

The scope of this project was to build a test system for testing an inspection system as typically used to read ePassports. To demonstrate the basic functionality and feasibility, a subset of BSI TR-03105 Part 5.1 was specified and implemented in the test system.

The following image describes the architecture of the ePassport reader test system developed during this project:


Architecture of test system based on TTCN-3 for ePassport readers (Source: ETSI White Paper No.7)

The most significant part of the architecture is the “TTCN-3 Test Component”. This module simulates the ePassport behaviour by receiving APDUs, reacting to these commands and sending response APDUs back to the SUT (here the ePassport reader).
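
To make this concrete without quoting TTCN-3 code, here is a minimal Python sketch of the idea: the test component plays the ePassport, receives command APDUs from the reader under test and answers with response APDUs. In the real prototype the TTCN-3 component additionally matches the received APDUs against templates and assigns test verdicts.

    # Minimal illustration (Python instead of TTCN-3) of a test component that
    # simulates an ePassport towards the reader under test.

    SW_SUCCESS = b"\x90\x00"
    SW_INS_NOT_SUPPORTED = b"\x6d\x00"

    def handle_apdu(command: bytes) -> bytes:
        """Answer a command APDU received from the SUT (the ePassport reader)."""
        ins = command[1]
        if ins == 0xA4:                           # SELECT (application or file)
            return SW_SUCCESS
        if ins == 0xB0:                           # READ BINARY
            return b"\x60\x00" + SW_SUCCESS       # placeholder file content for this sketch
        return SW_INS_NOT_SUPPORTED               # reject everything else in this sketch

    # Example: the reader selects the eMRTD application
    print(handle_apdu(bytes.fromhex("00A4040C07A0000002471001")).hex())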

The successful implementation of a TTCN-3 based test system shows that TTCN-3 fits the requirements of conformance testing of eMRTDs or other eID systems. The prototype demonstrates the feasibility of using such formal techniques for protocol testing, which improves the quality and repeatability of tests and reduces the room for interpretation.

An overview of this project and its results was summarized by Jean-Marc Chareau, Laurent Velez and Zdenek Riha in ETSI White Paper No. 7.

 

 

Interoperability Test for Supplemental Access Control (SAC)

During the ICAO Regional Seminar on Machine Readable Travel Documents (MRTD) in Madrid from 25 to 27 June 2014 there will also be the opportunity for an interoperability test of ePassports with Supplemental Access Control (SAC). The SAC protocol is replacing Basic Access Control (BAC) used in ePassports and will be mandatory in the EU from December 2014. SAC is a mechanism specified to ensure that only authorized parties can wirelessly read information from the RFID chip of an ePassport. SAC is also known as PACE v2 (Password Authenticated Connection Establishment). PACE v1 is used as a basic protocol in the German ID card and was developed and specified by the German BSI.

An interoperability test is similar to a plugtest as performed, e.g., by ETSI. It is an event during which devices (ePassports, inspection systems and test tools) are tested for interoperability with emerging standards by physically connecting them. This procedure allows all vendors to test their devices against the devices of other vendors. Additionally, besides these crossover tests, there is the opportunity to test the devices against conformity test suites implemented in test tools like GlobalTester. This procedure reduces effort and allows a comprehensive failure analysis of the devices, i.e. ePassports or inspection systems. Well-established test specifications are available, both for ePassports and for inspection systems; they are published by the German BSI (TR-03105) and ICAO (TR – RF and Protocol Testing Part 3).

You can find further information about this event on the ICAO website, which will be updated frequently.

Automatic border control (eGate)

Back in the office after three weeks of holiday, I would like to show you one of the high-level results of all these interoperability tests in the ePassport domain. Today a German consortium (including Bundesdruckerei and Secunet) won a tender for biometric eGates that will be rolled out at several airports across the country over the next years. These eGates use well-known protocols such as Basic Access Control (BAC) or Supplemental Access Control (SAC) to establish a secure channel between the reader and the ePassport's smart card via the ISO 14443 interface for contactless smart cards. An automatic border control (ABC) system like this allows passengers to pass the gate in less than 30 seconds. A comparable system can currently be found at Heathrow airport.

The following figure shows a typical simplified workflow of such a border control system:
Border Control Process

To reduce waiting time for passengers the system uses an automated process. At the beginning the traveller enters the gate and presents his ePassport. An inspection system scans the machine readable zone (MRZ) of the data page to derive a cryptographic key that gives access to the contactless smart card. As soon as all data groups of the chip have been read, the inspection system verifies the authenticity of the data to assure the validity of the presented ePassport chip. In the meantime the face recognition system captures a facial image of the traveller. This image is compared with the facial image stored on the chip (biometric verification). If both images match and the ePassport is not blacklisted, the traveller can pass the gate.
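
The key derivation in the second step is worth a closer look. For Basic Access Control, Doc 9303 derives the access keys from the document number, date of birth and date of expiry printed in the MRZ; the following Python sketch shows this derivation (parity adjustment of the resulting 3DES keys is omitted).

    # Sketch of the BAC key derivation an inspection system performs after scanning the MRZ.
    import hashlib

    def check_digit(data: str) -> str:
        """MRZ check digit: digits count as their value, A-Z as 10-35, '<' as 0; weights 7,3,1."""
        total = 0
        for i, ch in enumerate(data):
            val = 0 if ch == "<" else (int(ch) if ch.isdigit() else ord(ch) - ord("A") + 10)
            total += val * (7, 3, 1)[i % 3]
        return str(total % 10)

    def derive_bac_keys(doc_number: str, date_of_birth: str, date_of_expiry: str):
        mrz_information = (doc_number + check_digit(doc_number) +
                           date_of_birth + check_digit(date_of_birth) +
                           date_of_expiry + check_digit(date_of_expiry))
        k_seed = hashlib.sha1(mrz_information.encode("ascii")).digest()[:16]
        k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]  # c = 1: encryption key
        k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]  # c = 2: MAC key
        return k_enc, k_mac  # parity bits of the 3DES keys still need to be adjusted

    # Example with an invented document number and dates (YYMMDD)
    print(derive_bac_keys("L898902C<", "690806", "940623"))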

Next generation of ePassport testing

Developing and implementing conformity tests is a time-consuming and fault-prone task. To reduce this effort a new route must be taken, because the current way of specifying and implementing tests includes too many manual steps. Based on the experience of testing electronic smart cards in ID documents like ePassports or ID cards, the author describes a new way to save time when writing new test specifications and deriving test cases from these specifications. It is possible, though, to improve the specification and implementation of tests significantly by using new technologies such as model based testing (MBT) and domain specific languages (DSL). I describe here my experience in defining a new language for testing smart cards based on DSLs and models, and in using this language to generate both documents and test cases that can run in several test tools. The idea of using a DSL to define a test specification goes back to a tutorial by Markus Voelter and Peter Friese, held during the conference Software Engineering 2010 in Paderborn.

With the introduction of smart cards in ID documents the verification of these electronic parts has become more and more important. The German Federal Office for Information Security (BSI) defines technical guidelines that specify several tests required to achieve compliance. These guidelines include tests on the electrical and physical layer on the one hand, and tests on the application and data layer on the other hand. Here the author focuses on the tests of the latter two layers because these tests can be implemented completely in software.

In TR-03105 the BSI specifies several hundred test cases concerning the data format of the smart cards as well as the commands and protocols used to communicate with the chip.

In the past the typical approach was divided into several separate steps. First, the BSI specified a list of test cases and published them in a document that was written manually by an editor of the technical guideline. Then several test houses and vendors of test tools implemented all the test cases of the specific guideline in their software solutions. All these steps had to be done manually, which means: the software engineer of each institution read the guideline and implemented test case after test case in his particular test environment. With every new version of the guideline this procedure had to be repeated again and again. At the beginning, the update cycle of these test specifications was very frequent because all the feedback collected in the field was incorporated into the guideline and new versions were published at short intervals:

[Figure 1]

This way of specifying test specifications is inefficient because of the large number of manual steps. Doing the transformation from the test specification to the implementation is not only inefficient but also fault-prone: every test case in the guideline must be formulated in “prose” by the editor; every engineer must implement the test case in the respective programming language. Also the consistency of the tests must be maintained by the editor manually.

Furthermore, the writing of test specifications is an extensive part of conformity testing. The editor of such a specification generally uses word processing software that is useful for, e.g., writing short letters. But this kind of software is not really convenient for writing technical specifications like TR-03105. A typical problem is tracking changes between versions: it would be most helpful for developers if the editor used the track-changes mode whenever he modifies test cases, because this way the developer can easily detect the changes. But this advantage depends on the mode being activated; as soon as the editor forgets to activate the track-changes mode, the implementation of these changes becomes more and more complicated.

Due to the increasing number of new requirements for the applications running on smart cards, the complexity of these systems is getting higher and higher. Walter Fumy's “Handbook of eID Security” illustrates the history of eID documents from purely visual documents to future versions. The complexity of these applications will result in so many test cases that the current approach of writing and implementing test specifications is a blind alley.

With recent results of Model Driven Software Development (MDSD) this blind alley can be avoided. New techniques and tools now allow us to switch from the manual steps to a more automated procedure. The goal is to write only one “text” that can be used as a source for all the test tools. The solution is a model that defines the test cases, plus transformations of this model to other platforms or formats.

With this new approach, the process of specifying tests can be reduced to the interesting part: the editor can use his creativity to conceive new tests instead of using his office software to write them down.

Defining a language that describes the test cases is the basis for this procedure. This grammar can be used to model test cases, and based on this model all the required artifacts can be generated. The following figure visualizes this process: there is one meta test specification that is used to generate not only the human-readable document but also the tool-specific test cases for every test environment.

[Figure 2]

One solution for defining a language is Xtext. With Xtext the user gets a complete environment based on Eclipse to develop his own domain specific language (DSL). One of the benefits of Xtext is the editor that is generated automatically by the tool itself. This editor includes code completion, syntax coloring, code folding and an outline view, and it is very helpful for writing test cases: every test case that is not compliant with the grammar is marked as faulty, so the author of the specification recognizes a faulty test case immediately, just like a software developer in an Integrated Development Environment (IDE).

Additionally, the user can implement generators that generate code for the target platform. These generators are called by the Modeling Workflow Engine (MWE) and are powerful and productive tools for providing test cases for different platforms.
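
The following toy example (plain Python rather than Xtext/MWE, with invented names and fields) illustrates the generator idea: a single test-case model is transformed into two artifacts, a human-readable specification fragment and an executable test stub.

    # Toy illustration of "one model, several generated artifacts".

    TEST_CASE = {
        "id": "ISO7816_P_01",
        "purpose": "Check that the chip rejects a GENERAL AUTHENTICATE command with wrong length",
        "steps": ["Send the manipulated command", "Verify that the status word is not 0x9000"],
    }

    def generate_document(tc: dict) -> str:
        """Generate the human-readable part of the test specification."""
        lines = [f"Test case {tc['id']}", f"Purpose: {tc['purpose']}", "Steps:"]
        lines += [f"  {i}. {step}" for i, step in enumerate(tc["steps"], 1)]
        return "\n".join(lines)

    def generate_test_stub(tc: dict) -> str:
        """Generate a skeleton test case for a (hypothetical) test tool."""
        body = "\n".join(f"    # {step}" for step in tc["steps"])
        return f"def test_{tc['id'].lower()}():\n{body}\n    assert False, 'not yet implemented'\n"

    if __name__ == "__main__":
        print(generate_document(TEST_CASE))
        print(generate_test_stub(TEST_CASE))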

In the public sector it is more and more important to write barrier-free (accessible) documents, and it takes a lot of time to make a typical technical specification barrier-free. With a generator that produces the human-readable document, the author of the test specification can use generic templates that produce barrier-free documents automatically, because the generator can apply rules that fulfill even these standards.

Once the user has generated a new test specification or a new test case for a particular test tool, he can modify this artifact by adding some special features, e.g. a special library for one test case. With a model based test specification it is possible to re-import this modified artifact into the model to assure persistency. The author presented and published his first experience at ICST 2011.

This approach helps to write test specifications in a technical way on a meta level, but it does not focus on the content of the test specification. Thus, the approach helps to write the document, but it does not help to produce the content that is needed. Currently, the quality of a test specification depends on the background of the author: with his knowledge of the protocols and their pitfalls he can specify interesting test cases. But many test cases contain the same scenarios (wrong length, set a value to zero, use the maximum or minimum value, and so on). It would be more reasonable and economical if the author could focus on special test cases for the relevant protocols and their pitfalls, while “standard” test cases were generated automatically. Moreover, test specifications written by humans always run the risk of being inconsistent, error-prone and imprecise, and writing them manually is always rather time-consuming.

To address the problems described above, a consortium of BSI, HJP Consulting, s-lab Software Quality Lab (University of Paderborn) and TÜViT started a research project named MOTEMRI (Modellbasiertes Testen mit Referenzimplementierung, i.e. model-based testing with a reference implementation). In MOTEMRI a model is developed that contains all relevant information about the popular PACE protocol. This model is specified in UML, so everybody who is interested can read and modify the diagrams easily. In this way it is possible to add new protocols or new protocol versions to the model in the future. Based on the model, algorithms generate test cases automatically. Thereby the knowledge of how to design test cases is encoded in software, independent of the knowledge of the author of the test specification. This procedure also allows using various “wrong” values for negative test cases: negative test cases are generated automatically and draw on different “wrong” values, and using random values allows more thorough testing and leads to better chip implementations.
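
As a small illustration of the last point, the sketch below (with an invented parameter model, not taken from MOTEMRI) shows how such “wrong” values for negative test cases could be generated automatically for a byte-string parameter.

    # Sketch: typical automatically generated mutations for negative test cases.
    import random

    def negative_values(length: int) -> list[bytes]:
        """Typical "wrong" values for a byte-string parameter of a given nominal length."""
        return [
            b"",                              # missing value
            bytes(length),                    # all zero
            b"\xff" * length,                 # maximum value
            random.randbytes(length - 1),     # too short
            random.randbytes(length + 1),     # too long
            random.randbytes(length),         # random, well-formed but wrong value
        ]

    # Example: negative variants for an 8-byte parameter (e.g. a nonce)
    for candidate in negative_values(8):
        print(candidate.hex())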

Results of SAC InterOp Test 2013 available

The results of the InterOp test 2013 concerning the new protocol SAC (Supplemental Access Control) are available. The test event was split into two slots: a conformity test (to check whether the documents conform to the latest ICAO standards) and a crossover test (to check whether each document can be read by the inspection systems). A concluding test report is available here. Thanks to Mark Lockie and Michael Schlüter for organizing this successful event.

ePassport Interoperability Test in London

Next week another ePassport interoperability test takes place in London. The community will gather to test their next-generation smart cards in ePassports with the new protocol Supplemental Access Control (SAC), the replacement for Basic Access Control (BAC). BAC was designed at the beginning of this century and will be replaced by SAC by December 2014 at the latest. SAC is based on the well-known Password Authenticated Connection Establishment (PACE) protocol, which was mainly developed by the German BSI and which is also used in the German ID cards issued since November 2010. PACE is specified in TR-03110.

During the interoperability test, vendors of chips and inspection systems will test their implementations against the current conformity test suites of several test labs. More information can be found here: InterOp 2013.

See you in London!