Tag Archives: test

Results SAC Interoperability Test in Madrid 2014

The European Commission (EC) and the International Civil Aviation Organization (ICAO) organized a SAC interoperability test in Madrid at the end of June 2014. The objective of this interoperability test was to ensure that European countries are ready to launch Supplemental Access Control (SAC), i.e. PACEv2, at the end of this year. The following countries participated in the test (in alphabetical order):

  • Australia
  • Austria
  • Belgium
  • Bosnia Herzegovina
  • Czech Republic
  • Finland
  • France
  • Germany
  • Iceland
  • Italy
  • Japan
  • Netherlands
  • Norway
  • Portugal
  • Slovenia
  • Spain
  • Sweden
  • Switzerland

The SAC interoperability test was also open to industry. The following vendors participated with their ePassport solutions (in alphabetical order):

  • 3M
  • Arjowiggins
  • Athena
  • De La Rue
  • EDAPS
  • Gemalto
  • Giesecke & Devrient
  • IRIS
  • Masktech
  • Oberthur
  • PWPW
  • Safran Morpho
  • Toshiba

Every participant had the chance to submit up to two different sets of ePassports with different implementations. Altogether, 52 samples were available during the test session. All ePassports were tested in two different test procedures: a Crossover Test and a Conformity Test. The focus here is on the Conformity Test, because it is the one that puts the protocols in the foreground. Three test labs (Keolabs, TÜViT + HJP Consulting, and UL) took part in the interoperability test with their test tools, running a subset of the “ICAO TR RF Protocol and Application Test Standard for e-Passports, Part 3”. The subset includes the following test suites (a short sketch of the command sequence exercised by ISO7816_Q follows the list):

  • ISO7816_O: Security conditions for PACE protected eMRTDs
  • ISO7816_P: Password Authenticated Connection Establishment (PACEv2)
  • ISO7816_Q: Commands READ and SELECT for file EF.CardAccess
  • LDS_E: Matching between EF.DG14 and EF.CardAccess
  • LDS_I: Structure of EF.CardAccess
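
As an illustration of what these suites exercise, here is a minimal sketch in Python using the pyscard library that selects and reads EF.CardAccess, the command sequence at the heart of test suite ISO7816_Q. EF.CardAccess is freely readable before PACE; the file identifier 0x011C comes from ICAO Doc 9303 / BSI TR-03110. The sketch omits error handling and assumes the first PC/SC reader holds the ePassport:

    from smartcard.System import readers

    # connect to the first PC/SC reader found (assumption: it holds the eMRTD)
    reader = readers()[0]
    connection = reader.createConnection()
    connection.connect()

    # SELECT EF.CardAccess by file identifier 0x011C
    # (P1=0x02: select EF under current DF, P2=0x0C: no response data)
    select_ef = [0x00, 0xA4, 0x02, 0x0C, 0x02, 0x01, 0x1C]
    data, sw1, sw2 = connection.transmit(select_ef)
    assert (sw1, sw2) == (0x90, 0x00), "SELECT EF.CardAccess failed"

    # READ BINARY from offset 0, Le=0x00 (read up to 256 bytes)
    data, sw1, sw2 = connection.transmit([0x00, 0xB0, 0x00, 0x00, 0x00])
    print("EF.CardAccess:", bytes(data).hex())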

During the conformity test, the three test labs performed 21,282 test cases altogether. Nearly 3% of these test cases failed.

The following diagram shows the results of the conformity test as part of the SAC interoperability test. There were five samples with zero failures, seven samples with one failure, twenty-seven samples with two to four failures, five samples with five to twenty failures, and eight samples with more than twenty failures:

This diagram describes the number of failures per document

The following diagram shows the failures per sample:

This diagram shows the number of failures per document

All documents supported Integrated Mapping (IM), Generic Mapping (GM), or both. The following diagram shows the distribution of the mapping protocols:

This diagram shows the relation between Generic Mapping and Integrated Mapping

Within the mapping protocol there is the possibility to choose ECDH, DH, or both. The samples of the SAC interoperability test mostly supported ECDH, as shown in the following diagram (a sketch of how the variant is derived from the PACEInfo OID follows below):

This diagram shows the relation between ECDH and DH in Mapping
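
The variant a chip supports can be read directly from the PACEInfo structures in EF.CardAccess: the protocol OID encodes both the mapping and the key agreement. Here is a minimal sketch in Python, with the OID prefixes taken from BSI TR-03110; the ASN.1 parsing of EF.CardAccess itself is left out:

    # OID prefixes below id-PACE (0.4.0.127.0.7.2.2.4) per BSI TR-03110
    PACE_VARIANTS = {
        "0.4.0.127.0.7.2.2.4.1": ("DH", "Generic Mapping"),
        "0.4.0.127.0.7.2.2.4.2": ("ECDH", "Generic Mapping"),
        "0.4.0.127.0.7.2.2.4.3": ("DH", "Integrated Mapping"),
        "0.4.0.127.0.7.2.2.4.4": ("ECDH", "Integrated Mapping"),
    }

    def classify_pace_oid(oid: str):
        """Return (key agreement, mapping) for a PACEInfo protocol OID."""
        for prefix, variant in PACE_VARIANTS.items():
            # the trailing arc selects the symmetric cipher suite
            if oid == prefix or oid.startswith(prefix + "."):
                return variant
        raise ValueError("not a known PACE protocol OID: " + oid)

    # e.g. id-PACE-ECDH-GM-AES-CBC-CMAC-128
    print(classify_pace_oid("0.4.0.127.0.7.2.2.4.2.2"))  # ('ECDH', 'Generic Mapping')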

The observations of the conformity test (part of SAC interoperability test) are:

  • the document quality varies from “close to release state” to “experimental state”
  • there are different interpretations of: padding in EF.CardAccess and EF.DG14; the encoding of TerminalAuthenticationInfo in EF.DG14; the use of DO84 in PACE; and the use of the parameter ID when proprietary or standardized domain parameters are used
  • certificates for the EAC protocol (e.g. test case 7816_O_41) were missing or not usable
  • the test labs used different versions of the test specification (version 2.01 vs. version 2.06)

Update 1: You can find a discussion concerning the test results on LinkedIn here.

Update 2: You can find the slides of the presentation we held at the end of the SAC Interoperability Test here.

Update of BSI TR-03105 Part 3.2 available (V1.4.1)

The German BSI and the French AFNOR have released an update with minor clarifications to their technical guideline BSI TR-03105 Part 3.2, which focuses on conformity tests for ePassports implementing protocols like PACE (SAC) and EACv1.

The new version 1.4.1 of TR-03105 Part 3.2 includes changes to the following test cases:

  • ISO7816_II_2: The missing profile ‘ECDH’ is added to the profile of this test case, in line with the corresponding test case ISO7816_I_2 in test suite I.
  • ISO7816_II_3: A new test step (step 3) is added that performs the additional GENERAL AUTHENTICATE command needed to complete the key agreement correctly.
  • ISO7816_K_19: There are several interpretations of how to handle the ‘presence’ of a data group. A simple SELECT command to detect a data group on the chip is insufficient and may cause problems. In this test case the presence of data group EF.DG15 is used as an indicator to perform Active Authentication. In the new version the wording is aligned with TR-03110 and changed from “is present” to “if available”; in this way the discussion is moved from TR-03105 to TR-03110. From my point of view it makes sense to check whether the relevant data group is listed in file EF.SOD: the information in EF.COM is not secured by Passive Authentication and may be corrupted, whereas EF.SOD is secured and can serve as an indicator of the existence of a file on the chip.
  • ISO7816_L_13: In step 9 of this test case the MUTUAL AUTHENTICATE command is performed. In the old version of the specification this command was incomplete. The missing Le byte is now added, so the command now expects 40 bytes (0x28) as the response.
  • ISO7816_L_14: In the previous version of TR-03105, step 8 of this test case performed a SELECT MF with parameter P2 = ‘0C’. ISO 7816-4 specifies, for bits b4 = 1 and b3 = 1, that no response data is expected if the Le field is absent. This command causes problems on some COS implementations, so it is replaced by a SELECT with P2 = ’00’ and Le = ’00’; an illustrative encoding of both commands follows this list.
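
To make the last change concrete, here is an illustrative encoding of the two SELECT commands in Python. The exact APDUs (e.g. whether the file ID 3F00 is carried in the data field) are defined by the test specification itself, so treat this as a sketch:

    # old step 8: SELECT MF with P2 = 0x0C ("no response data"); combined
    # with an absent Le field this causes problems on some COS implementations
    old_select_mf = bytes([0x00, 0xA4, 0x00, 0x0C, 0x02, 0x3F, 0x00])

    # new step 8: P2 = 0x00 plus a trailing Le = 0x00, so the chip may
    # return the FCI template and the response length is well defined
    new_select_mf = bytes([0x00, 0xA4, 0x00, 0x00, 0x02, 0x3F, 0x00, 0x00])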

In the past I have often missed such a list for newly released versions of test specifications like BSI TR-03105 or the ICAO technical reports. So I hope this list of modified test cases is helpful for your work in the context of ePassport testing. If you are interested, please leave a comment and I will update this list with every new version of the test specifications in the context of smart cards used in ePassports and ID cards.

Interoperability Test for Supplemental Access Control (SAC)

During the ICAO Regional Seminar on Machine Readable Travel Documents (MRTD) in Madrid from 25 to 27 June 2014 there will also be the opportunity to take part in an interoperability test for ePassports with Supplemental Access Control (SAC). SAC replaces Basic Access Control (BAC) in ePassports and will be mandatory in the EU from December 2014. SAC is a mechanism specified to ensure that only authorized parties can wirelessly read information from the RFID chip of an ePassport. SAC is also known as PACEv2 (Password Authenticated Connection Establishment). PACEv1 is used as a basic protocol in the German ID card and was developed and specified by the German BSI.

An interoperability test is similar to a plugtest as performed e.g. by ETSI. It is an event during which devices (ePassports, inspection systems and test tools) are tested for interoperability with emerging standards by physically connecting them. This procedure allows all vendors to test their devices against the other devices. In addition to these crossover tests, there is the opportunity to test the devices against conformity test suites implemented in test tools like GlobalTester. This reduces effort and allows comprehensive failure analysis of devices like ePassports or inspection systems. Well-established test specifications are available, both for ePassports and for inspection systems; they are published by the German BSI (TR-03105) and by ICAO (“RF Protocol and Application Test Standard for e-Passports”, Part 3).

You can find further information on this event on the ICAO website. The website will be updated frequently.

List of upcoming test events

The colleagues at testevents.com have set up a list of upcoming test events all over the world (testing in general, not only focusing on protocols). You can filter by country or category and get information on the corresponding calls for papers. The calendar lists various test events, e.g. the German Testing Day taking place in November 2013 in Munich or the EuroSTAR Software Testing Conference taking place in November 2013 in Gothenburg, Sweden.

This list may help you plan your attendance at important conferences and keep the CfP deadlines in mind. On their website you can also find some book and magazine references, all focused on testing. Thanks to the team at testevents.com for this useful service!

Test tool overview

The colleagues at Imbus have started a new platform with a list of test tools in various categories. There you will find both commercial and open-source test software. The list is available in German and in English. All listed tools are classified into the following categories:

  • Code and coverage analysis
  • Continuous integration
  • Defect and change management
  • Load and performance test
  • Test automation
  • Test management
  • Test specification and generators

You are invited to submit your own tool and to exchange experiences and tips on the platform. Thanks to Imbus for this great and helpful overview!

Next generation of ePassport testing

Developing and implementing conformity tests is a time-consuming and fault-prone task. To reduce this effort, a new route must be taken: the current way of specifying tests and implementing them includes too many manual steps. Based on my experience of testing the smart cards embedded in ID documents like ePassports or ID cards, I describe here a new way to save time when writing test specifications and deriving test cases from them. It is possible to improve the specification and implementation of tests significantly by using technologies such as model-based testing (MBT) and domain-specific languages (DSL). I describe my experience in defining a new language for testing smart cards based on a DSL and models, and in using this language to generate both documents and test cases that can run in several test tools. The idea of using a DSL to define a test specification goes back to a tutorial by Markus Voelter and Peter Friese, held during the conference Software Engineering 2010 in Paderborn.

With the introduction of smart cards into ID documents, the verification of these electronic parts has become more and more important. The German Federal Office for Information Security (BSI) defines technical guidelines that specify the tests required to demonstrate compliance. These guidelines include tests on the electrical and physical layer on the one hand, and tests on the application and data layer on the other. Here I focus on the latter two layers, because these tests can be implemented completely in software.

In TR-03105 the BSI specifies several hundred test cases concerning the data format of smart cards as well as the commands and protocols used to communicate with the chip.

In the past, the typical approach was divided into several separate steps. First, the BSI specified a list of test cases and published them in a document written manually by an editor of the technical guideline. Then several test houses and vendors of test tools implemented all the test cases from the guideline in their software solutions. All these steps had to be done manually, which means: the software engineers of each institution read the guideline and implemented test case after test case in their particular test environment. With every new version of the guideline this procedure had to be repeated. At the beginning, the update cycle of these test specifications was very short, because all the feedback collected in the field was incorporated into the guideline and new versions were published at short intervals:

figure_1

This way of producing test specifications is inefficient because of the large number of manual steps. The transformation from test specification to implementation is not only inefficient but also fault-prone: every test case in the guideline must be formulated in “prose” by the editor, and every engineer must implement the test case in the respective programming language. The consistency of the tests must also be maintained by the editor manually.

Furthermore, writing the test specifications is an extensive part of conformity testing. The editor of such a specification generally uses word-processing software, which is fine for, say, writing short letters, but not really suited to technical specifications like TR-03105. A typical problem is change tracking. It would be most helpful for developers if the editor used the track-changes mode whenever he changes test cases, so that developers can easily detect the changes. But this advantage depends on the mode being activated: as soon as the editor forgets to activate track changes, implementing the changes becomes much more complicated.

Due to the increasing number of requirements on the applications running on smart cards, the complexity of these systems grows higher and higher. Walter Fumy’s “Handbook of eID Security” illustrates the history of eID documents from purely visual ones to future versions. The complexity of these applications will result in so many test cases that the current approach of writing and implementing test specifications is a dead end.

With recent results from Model-Driven Software Development (MDSD) this dead end can be avoided. New techniques and tools allow us to switch from manual steps to a more automated procedure. The goal is to write only one “text” that can be used as the source for all test tools. The solution is a model that defines the test cases, plus transformations of this model to other platforms and formats.

With this new approach, the process of specifying tests can be reduced to the interesting part, where the editor can use his creativity to conceive new tests instead of using his office software to write them down.

Defining a language that describes the test cases is the basis for this procedure. The grammar can be used to model test cases, and based on this model all the needed artifacts can be generated. The following figure visualizes this process: there is one meta test specification that is used to generate not only the human-readable document but also the tool-specific test cases for every test environment.

figure_2

One way to define such a language is Xtext. With Xtext the user gets a complete environment based on Eclipse to develop his own domain-specific language (DSL). One of the benefits of Xtext is the editor that is generated automatically by the tool itself, including code completion, syntax coloring, code folding and an outline view. This editor is very helpful for writing test cases: every test case that is not compliant with the grammar is marked as faulty, so the editor of the specification can spot a wrong test case directly, just like a software developer working in an Integrated Development Environment (IDE).

Additionally, the user can implement generators to produce code for the target platform. These generators are called by the Modeling Workflow Engine (MWE) and are powerful, productive tools for providing test cases for different platforms.
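
The following toy sketch in Python shows the principle reduced to its core: one test-case model and two generators that derive a human-readable paragraph and an executable snippet from the same source. The names (including the test tool API card.transmit/resp.sw) are hypothetical, and this is of course not the Xtext grammar itself, in which the model would normally be written:

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        test_id: str
        purpose: str
        command: str       # APDU as a hex string
        expected_sw: str   # expected status word

    def generate_document(tc: TestCase) -> str:
        """Generator 1: prose for the human-readable specification."""
        return ("Test case {0}\nPurpose: {1}\n"
                "Send '{2}' and expect status word '{3}'."
                .format(tc.test_id, tc.purpose, tc.command, tc.expected_sw))

    def generate_script(tc: TestCase) -> str:
        """Generator 2: an executable snippet for a (hypothetical) test tool."""
        return ("def test_{0}(card):\n"
                "    resp = card.transmit(bytes.fromhex('{1}'))\n"
                "    assert resp.sw == 0x{2}\n"
                .format(tc.test_id.lower(), tc.command, tc.expected_sw))

    tc = TestCase("ISO7816_Q_1", "SELECT EF.CardAccess", "00A4020C02011C", "9000")
    print(generate_document(tc))
    print(generate_script(tc))

Adding a third target platform then only means adding a third generator; the test cases themselves are written once.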

In the public sector it is more and more important to publish accessible (barrier-free) documents, and making a typical technical specification barrier-free takes a lot of time. With a generator that produces the human-readable document, the author of the test specification can rely on generic templates that produce barrier-free documents automatically, because the generator can apply rules that fulfill even these standards.

Once the user has generated a new test specification or a new test case for a test tool, he may modify this artifact, e.g. by adding a special library to one test case. With a model-based test specification it is possible to re-import the modified artifact into the model so that such changes persist. I presented and published my first experience with this at ICST 2011.

This approach helps to write test specifications in a technical way on a meta level, but it does not address the content of the test specification: it helps to write the document, but it does not help to produce the content itself. Currently the quality of a test specification depends on the background of its author; with his knowledge of the protocols and their pitfalls he can specify interesting test cases. But many test cases repeat the same scenarios (wrong length, value set to zero, maximum or minimum value, and so on). It would be more reasonable and economical if the author could focus on special test cases for the relevant protocols and their pitfalls, while the “standard” test cases were generated automatically. Moreover, test specifications written by humans always run the risk of being inconsistent, error-prone and imprecise, and writing them manually is always rather time-consuming.
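
A hedged sketch of what such automatic generation could look like: starting from one valid command, a small catalogue of mutations produces the “standard” negative variants. The mutation list is illustrative and not the MOTEMRI algorithm; a real generator would also fix up dependent length bytes such as Lc:

    def negative_variants(apdu: bytes, off: int, length: int):
        """Yield (description, mutated APDU) pairs for common negative tests."""
        value = apdu[off:off + length]
        head, tail = apdu[:off], apdu[off + length:]
        yield "value zeroed",      head + bytes(length) + tail
        yield "value set to 0xFF", head + bytes([0xFF] * length) + tail
        yield "value truncated",   head + value[:-1] + tail
        yield "value extended",    head + value + b"\x00" + tail

    valid = bytes.fromhex("00A4020C02011C")   # SELECT EF.CardAccess
    for description, mutated in negative_variants(valid, 5, 2):
        print("{0:16} -> {1}".format(description, mutated.hex()))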

To tackle the problems described above, a consortium of BSI, HJP Consulting, s-lab Software Quality Lab (University of Paderborn) and TÜViT started a research project named MOTEMRI (Modellbasiertes Testen mit Referenzimplementierung, i.e. model-based testing with a reference implementation). In MOTEMRI a model is developed that contains all relevant information about the popular PACE protocol. This model is specified in UML, so anyone interested can read and modify the diagrams easily, and new protocols or future protocol versions can be incorporated into the model. Based on the model, algorithms generate test cases automatically. Thereby the knowledge of designing test cases is encoded in software, independent of the knowledge of the author of the test specification. This procedure also generates negative test cases automatically, drawing on a variety of “wrong” values; using random values allows better testing and leads to better chip implementations.

Results of SAC InterOp Test 2013 available

The results of the InterOp test 2013 concerning the new SAC (Supplemental Access Control) protocol are available. The test event was split into two parts: a conformity test (checking whether the documents conform to the latest ICAO standards) and a crossover test (checking whether each document can be read by the inspection systems). A concluding test report is available here. Thanks to Mark Lockie and Michael Schlüter for organizing this successful event.

ePassport Interoperability Test in London

Next week another ePassport interoperability test takes place in London. The community will meet to test their next-generation smart cards in ePassports with the new protocol Supplemental Access Control (SAC), the replacement for Basic Access Control (BAC). BAC was designed at the beginning of this century and will be replaced by SAC by December 2014 at the latest. SAC is based on the well-known protocol Password Authenticated Connection Establishment (PACE), which was mainly developed by the German BSI and is also used in the German ID cards issued since November 2010. PACE is specified in TR-03110.

During the interoperability test, vendors of chips and inspection systems will test their implementations against the current conformity test suites of several test labs. More information can be found here: InterOp 2013.

See you in London!

Hello world!

Welcome to my blog. Here you will find some ideas about testing protocols and also about modeling them. At the beginning the primary focus will be on the domain of electronic documents like ePassports and eIDs.

I’m looking forward to your feedback and to discussing topics around modeling and testing protocols.