Testing of the FAIR Data Maturity Model


21 Jan 2020


  • By Federico Drago

Dear all,

The Research Data Alliance (RDA) FAIR Data Maturity Model Working Group is inviting the EOSC community to test the FAIR Data Maturity Model: a common set of core assessment criteria for FAIRness and a generic, expandable self-assessment model for evaluating the maturity level of datasets. A draft set of indicators with priorities is now available, and the group is in the process of testing these draft indicators with evaluators of FAIRness in different communities.


As mentioned in the WG online meeting on 4 December 2019, the group is starting the testing phase for the indicators that the WG has developed over the past months. After some early tests, it is now time for a more organised test phase. To make sure that the indicators are evaluated in a wide range of environments, the group is looking for people involved with initiatives that evaluate the FAIRness of data for specific types of data or for specific communities. The testing period is January 2020, with results to be shared with the WG at the next scheduled online meeting on 13 February 2020. Test results will then be made publicly available under a CC-BY-SA licence.


Please find attached a template for the test reports, which includes information about the tester, the evaluation approach used for the test, observations for each of the indicators, and general comments or conclusions. Results of all tests will be consolidated by the editorial team, with a selection of discussion items to be proposed for the meeting on 13 February. Based on the comments and suggestions that come out of the tests, the editorial team will, if necessary, create a proposal for a revision of the maturity model. The testing might also surface issues that could be covered in the Guidelines currently being developed. Feel free to provide suggestions for guidance identified during the tests, so that they can be included in the draft Guidelines to be presented at the February meeting. All results, comments, and suggestions will be made available for discussion on GitHub.


To participate in the test, please respond directly to Christophe Bahim (christophe.bahim@pwc.com) or Makx Dekkers (makx@makxdekkers.com) with a short description of the type of data, the community you represent, and the approach you are already using to evaluate your data.

Best regards,