Data Ingestion Episode I – A New Hope

Data ingestion can be described as the process of dynamically collecting and importing data for storage and immediate use in a database. The process starts by prioritizing and validating data sources and assigning data items to the correct database fields. This can be particularly challenging when working with large and complex data sets such as those we are dealing with in the first test phase (Episode 1) of the SNSF SPARK project on dynamic data ingestion. As planned for Episode 1, the ingestion module was completed and implemented by Pim van Bree and Geert Kessels (LAB1100) in the virtual research environment (VRE) nodegoat. The module offers the user an intuitive interface that requires (almost) no coding, apart from setting up SPARQL / API queries to connect to the various external sources. Since May 2020, Kaspar Gubler has been using the module to compile a data sample of 20k persons from various databases within the research area of contextualized prosopography – for the final test series (Episode 4) we will compile a data sample of 500k persons. To compile this first sample of 20k persons, we used various identifiers such as the GND (Gemeinsame Normdatei), VIAF (Virtual International Authority File) and, of course, Wikidata, the innovative and forward-looking sister project of Wikipedia. On Wikidata you can find various interesting data sets, such as the Cambridge Alumni database. The screenshot shows an example of how the ingestion process is captured in the module’s interface. In principle, the module queries an external data source based on identifiers and/or database values, and the external data is then mapped to the fields of the ingestion database in nodegoat.
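
In nodegoat this lookup and mapping is configured through the module's interface, not written by hand. Purely as an illustration of what such an identifier-based query involves, here is a minimal Python sketch that asks the Wikidata SPARQL endpoint for a person by GND identifier (Wikidata property P227) and maps a few returned values to hypothetical fields of an ingestion database. The field names and the example identifier are assumptions for illustration, not part of the nodegoat setup.

```python
# Minimal sketch (not the nodegoat module itself): look up a person on Wikidata
# by GND identifier via SPARQL and map the result to local database fields.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def fetch_person_by_gnd(gnd_id: str) -> dict:
    """Query Wikidata for a person carrying the given GND identifier (P227)."""
    query = f"""
    SELECT ?person ?personLabel ?birth ?viaf WHERE {{
      ?person wdt:P227 "{gnd_id}" .             # GND identifier
      OPTIONAL {{ ?person wdt:P569 ?birth . }}  # date of birth
      OPTIONAL {{ ?person wdt:P214 ?viaf . }}   # VIAF identifier
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT 1
    """
    response = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "ingestion-sketch/0.1"},
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    if not bindings:
        return {}
    row = bindings[0]
    # Map the external values to (hypothetical) fields of the ingestion database.
    return {
        "name": row["personLabel"]["value"],
        "wikidata_uri": row["person"]["value"],
        "date_of_birth": row.get("birth", {}).get("value"),
        "viaf_id": row.get("viaf", {}).get("value"),
    }

if __name__ == "__main__":
    print(fetch_person_by_gnd("118540238"))  # placeholder GND ID for testing
```

The same pattern applies to the Cambridge Alumni identifiers used in the example: the identifier drives the query, and the returned values are mapped field by field.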

In this example, the identifiers of the Cambridge Alumni database are used to retrieve and map the data. The ingestion module is not limited to particular external data sources: data from image databases can be integrated, as can literature references or knowledge and information of different provenance. The extension and enrichment of the data sample will be tested in Episodes 2-4, as will the harmonization of external data at the server level by means of scripts, for example format adjustments for dates (see the sketch below). An international network of Digital Humanities database projects, for which Kaspar Gubler acts as secretary, has been working on a data harmonization strategy for years. With the ingestion module, there is now a new hope that, if the SNSF SPARK project is successful, this network will finally be able to merge its data and thus develop new research questions and simplify worldwide cooperation.
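
To give an idea of the kind of server-level harmonization script mentioned above, here is a minimal sketch that normalizes heterogeneous date strings from external sources to a single ISO-style format. The accepted input formats are assumptions for illustration; real sources will need their own rules.

```python
# Minimal sketch of a server-side harmonization step: normalizing
# heterogeneous date formats from external sources to ISO 8601.
from datetime import datetime

# Assumed input formats for illustration only.
INPUT_FORMATS = [
    "%Y-%m-%d",    # 1520-03-01
    "%d.%m.%Y",    # 01.03.1520
    "%d/%m/%Y",    # 01/03/1520
    "%Y",          # 1520 (year only)
]

def normalize_date(raw: str) -> str | None:
    """Return the date as YYYY-MM-DD (or YYYY for year-only values), else None."""
    value = raw.strip()
    for fmt in INPUT_FORMATS:
        try:
            parsed = datetime.strptime(value, fmt)
        except ValueError:
            continue
        return str(parsed.year) if fmt == "%Y" else parsed.strftime("%Y-%m-%d")
    return None  # leave unparsable values for manual review

if __name__ == "__main__":
    for sample in ["01.03.1520", "1520", "1520-03-01", "circa 1520"]:
        print(sample, "->", normalize_date(sample))
```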

Cite this article as: Kaspar Gubler: Data Ingestion Episode I – A New Hope, in HistData, 09/06/2020, https://histdata.hypotheses.org/1463.