Data Ingestion Episode II – The Empire strikes back, but not for long

The second test phase of the SNSF SPARK project (Episode 2) on Dynamic Data Ingestion (DDI) and server-side data harmonisation has been completed. Data from as many different data sources as possible were collected and stored centrally on the DDI server according to the spider principle, resulting in a new meta-database.

Fig. 1: One example of a data collection via the DDI module, test phase 2: standardised biographical data and related publications.

However, the tests again drew our attention to two core problems of Linked Open Data (LOD) in research: the “Empire” strikes back against LOD on a technical and on a content level, and the two are interdependent. On the technical level, the most important prerequisite for data exchange is missing, namely a kind of “industry standard”: a uniform query language, a standardised output format and a standardised structure for the data published via an Application Programming Interface (API). Many data projects have individually designed APIs and project-specific data structures that do not comply with international standards, and/or adequate documentation of the data output is lacking. On the content level of LOD, we face challenges arising from the heterogeneity of research data, especially in the humanities, which inevitably leads to inconsistent database structures and makes data exchange more difficult. Despite numerous international initiatives in recent years, no research project has yet gained significant or groundbreaking scientific knowledge through Linked Open Data, least of all in the Digital Humanities.

In addition to the technical and content-related obstacles, the “Empire” has been quite successful in blocking communication between humanities scholars and software developers. Anyone who has attended relevant humanities conferences knows the long discussions about the possibilities and potential of linking databases in the Digital Humanities. In the end, it all comes down to good ideas and declarations of intent – without a single data set having been linked. At the other extreme are initiatives that store large amounts of LOD in a meta-database and reflect on whether these data represent information or already knowledge. This is an important question, as LOD has been praised in the scientific community of the so-called “Semantic Web” as pointing the way to the future of an “Internet of Knowledge”: an Internet from which users can retrieve data in a structured and standardised way and transform it into information and knowledge. But we have not yet reached that point.

One important project that pursues these goals is Wikidata, a sister project of Wikipedia. Wikidata offers open, international standards for the storage, sharing and exchange of data. To exchange and harmonise data, projects would therefore have to store and document their data on Wikidata, an effort that not every project can or wants to make. Thus the situation in the Digital Humanities is still more or less the same when it comes to exchanging LOD: on the one side is the humanities spirit that floats on clouds with brilliant ideas for linking data, on the other side the analytical developer who collects highly complex data with a down-to-earth approach. The experience of many conferences shows that communication between cloud and earth has hardly been possible so far. Both sides send out signals that are usually misunderstood by the other side – people speak different languages and do not understand each other, even when they mean the same thing. But how can we bring both worlds together and install a translation board (a Rosetta stone) between cloud and earth? One such board is the DDI module developed as part of the SNSF SPARK project. In this module, a graphical interface facilitates communication between humanities scholars and developers.
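To make the missing “industry standard” mentioned above concrete: Wikidata already provides a uniform query language (SPARQL) and a standardised result format. The following minimal Python sketch, which is not part of the DDI module, queries the public Wikidata SPARQL endpoint; the example query (persons born in Bern) is purely illustrative.

```python
# Minimal sketch (not part of the SPARK project): querying Wikidata's public
# SPARQL endpoint, which offers the kind of standardised query language and
# output format that many project-specific APIs lack.
# The query itself (persons born in Bern, wd:Q70) is purely illustrative.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P19 wd:Q70 .    # place of birth: Bern
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "ddi-example/0.1"},  # Wikidata asks clients to identify themselves
    timeout=30,
)
response.raise_for_status()

# SPARQL JSON results: a standardised structure, identical across endpoints
for row in response.json()["results"]["bindings"]:
    print(row["personLabel"]["value"], row["person"]["value"])
```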

Fig. 2: Definition of the Linked Data Resource, in this example the API of swissbib (catalog and data hub of Swiss libraries, which will soon be replaced by a new version)

In the interface of the DDI module, the Linked Data Resource can be defined and queried for a sample data output. This sample can then be used to assign the data fields of the Linked Data Resource (e.g. data from another research project, a library or an archive) to the fields of one’s own research database, which thus becomes a meta-database consisting of data from various data sources.
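As an illustration of this first step (defining a Linked Data Resource and fetching a sample output), the following sketch shows how such a definition might look in code. The endpoint URL, parameter names and the “records” key are hypothetical placeholders, not the actual DDI module or swissbib API; in the DDI module itself this is done through the graphical interface.

```python
# Hypothetical sketch of the "define resource -> test query" workflow of the
# DDI module. Endpoint URL, parameter names and the "records" key are invented
# placeholders, not the actual DDI module or swissbib API.
from dataclasses import dataclass, field

import requests


@dataclass
class LinkedDataResource:
    """Description of an external data source (API) to be ingested."""
    name: str
    base_url: str
    params: dict = field(default_factory=dict)

    def sample(self, limit: int = 5) -> list:
        """Run a small test query to see which fields the source actually returns."""
        response = requests.get(
            self.base_url,
            params={**self.params, "limit": limit},
            timeout=30,
        )
        response.raise_for_status()
        return response.json().get("records", [])


# Hypothetical bibliographic API returning JSON records
resource = LinkedDataResource(
    name="library-catalogue",
    base_url="https://example.org/api/search",
    params={"query": "Basel", "format": "json"},
)

for record in resource.sample():
    print(sorted(record.keys()))  # which data fields are actually present?
```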

Fig. 3: Running a test query on swissbib API resources

The advantage of a graphical interface for data ingestion lies in the visual communication: researchers and developers (or researchers experienced in IT) jointly define the interface (API) and immediately see the result, the data output, of the test query. The test query makes visible to all participants which database fields and contents are actually present in the data source. Researchers and developers can then use the data of the test query as a template for mapping the database fields of the data source to the new meta-database. This type of visual communication leads to fewer misunderstandings, as the test series in the context of the SNSF SPARK project have already shown.
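The mapping step can be thought of along the lines of the following sketch: fields discovered in the test query are assigned to fields of the meta-database. The field names (Dublin Core style names on the source side, invented names on the target side) are purely illustrative; in the DDI module this assignment is done visually.

```python
# Illustrative mapping of source fields (here: Dublin Core style names) to the
# fields of the researcher's own meta-database (invented names).
FIELD_MAPPING = {
    # source field -> target (meta-database) field
    "dc:title":     "publication_title",
    "dc:creator":   "author_name",
    "dc:date":      "publication_year",
    "dc:publisher": "publisher",
}


def map_record(source_record: dict, mapping: dict) -> dict:
    """Translate one source record into the structure of the meta-database."""
    return {
        target: source_record[source]
        for source, target in mapping.items()
        if source in source_record
    }


sample = {"dc:title": "Cosmographia", "dc:creator": "Sebastian Münster", "dc:date": "1544"}
print(map_record(sample, FIELD_MAPPING))
# {'publication_title': 'Cosmographia', 'author_name': 'Sebastian Münster', 'publication_year': '1544'}
```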

Fig. 4: Mapping source (right) and target (left) database fields

The test query also shows whether the data are compatible with one’s own research question, whether they are meaningful, and to what extent their structures must be compared with those of other data sources in order to obtain significant scientific results. This brings us to the crucial point: collecting data is one thing, but harmonising data (structured and unstructured) from various data sources so that they can be evaluated is the last, and often too big, hurdle, especially in the humanities. A translation tool for harmonising the data therefore had to be integrated into the SPARK project: after the data have been collected by the DDI module, the reconciliation module is used to harmonise them. The module has an algorithmic pattern matching (named entity matching) function that identifies predefined terms or categories (in the sense of a controlled vocabulary) in the data, makes suggestions for assignments or stores the matched terms automatically in the database. This also means that all relations of a term to the texts in which it was found become visible. This matching of vocabularies (or keyword spotting) has great potential not only for data harmonisation but also for the structuring and analysis of heterogeneous data in general, including, for example, texts available as OCR (Optical Character Recognition) output generated by specialised software such as Transkribus. Not only researchers, but also libraries and archives will be able to use these functions to make their (handwritten) texts more accessible to the public, for example in the form of data visualisations such as the following example of publication locations.
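The reconciliation idea can be sketched roughly as follows: terms from a controlled vocabulary are matched against free-text fields, and each match is linked back to the record in which it was found. The vocabulary, the example strings and the simple whole-word matching rule are invented placeholders; the actual reconciliation module is considerably more elaborate.

```python
# Simplified stand-in for the reconciliation module's pattern matching:
# terms from a controlled vocabulary are searched in free-text fields
# (e.g. imprint lines from OCR), and each match is linked back to the record
# in which it was found. Vocabulary and texts are invented examples.
import re
from collections import defaultdict

CONTROLLED_VOCABULARY = ["Basileae", "Parisiis", "Venetiis", "Lugduni"]  # Latin place names

texts = {
    "rec-001": "Impressum Basileae apud Frobenium anno 1540",
    "rec-002": "Excusum Parisiis in officina Roberti Stephani",
}

matches = defaultdict(list)  # term -> list of record ids in which it was found

for record_id, text in texts.items():
    for term in CONTROLLED_VOCABULARY:
        # simple whole-word, case-insensitive match as a stand-in for the
        # module's pattern matching / named entity matching
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            matches[term].append(record_id)

for term, record_ids in matches.items():
    print(term, "->", record_ids)
# Basileae -> ['rec-001']
# Parisiis -> ['rec-002']
```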

Fig. 6: Publication locations of books published in 1540, from the swissbib API resource, found with pattern matching (reconciliation) in text strings

The results (locations) are automatically linked to geo-references, so that a map of the publication locations can be created immediately after the reconciliation process, in this example on a historical background map (Mercator 1607); a minimal sketch of this linkage follows below. In addition to publication locations, authors or specific contents of publications can also be found and linked by means of pattern matching; this is just one example, of course, as the process of data ingestion and data reconciliation is not limited to certain data types. In the third and final phase of the SPARK project (Episode 3), different scenarios for data harmonisation will be run through to examine to what extent data harmonisation can lead to new research questions, especially in terms of a heuristic approach.
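The linkage of matched locations to geo-references could be sketched as follows. The tiny gazetteer below is purely illustrative; in the actual workflow the coordinates come from geo-reference data linked automatically during reconciliation and are then plotted on the (historical) background map.

```python
# Illustrative gazetteer linking matched place names to coordinates; in the
# actual workflow the geo-references are linked automatically during
# reconciliation and then plotted on a (historical) background map.
GAZETTEER = {
    "Basileae": {"place": "Basel",  "lat": 47.5596, "lon": 7.5886},
    "Parisiis": {"place": "Paris",  "lat": 48.8566, "lon": 2.3522},
    "Venetiis": {"place": "Venice", "lat": 45.4408, "lon": 12.3155},
}

matched_terms = ["Basileae", "Parisiis"]  # output of the reconciliation step

geo_points = [GAZETTEER[term] for term in matched_terms if term in GAZETTEER]
for point in geo_points:
    print(f'{point["place"]}: {point["lat"]}, {point["lon"]}')
```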

Cite this article as: Kaspar Gubler: Data Ingestion Episode II – The Empire strikes back, but not for long, in HistData, 09/09/2020, https://histdata.hypotheses.org/1635.