
Data Ingestion Episode I – A New Hope

Data ingestion can be described as the process of dynamically collecting and importing data for storage and immediate use in a database. The ingestion process starts by prioritizing and validating data sources and assigning data items to the correct database fields. This can be particularly challenging when working with large and complex data sets such as those we are dealing with in the first test phase (Episode 1) of the SNSF SPARK project on dynamic data ingestion. As planned for Episode 1, the ingestion module was completed and implemented by Pim van Bree and Geert Kessels (LAB1100) in the virtual research environment (VRE) nodegoat. The module offers the user an intuitive interface that requires (almost) no coding, except for setting up the SPARQL / API queries that connect to the various external sources. Since May 2020, Kaspar Gubler has been using the module to compile a data sample of 20k persons from various databases within the research area of contextualized prosopography; for the final test series (Episode 4) we will compile a data sample of 500k people. To compile the first sample of 20k people, we used different identifiers such as the GND (Gemeinsame Normdatei), the VIAF (Virtual International Authority File) and of course Wikidata, the innovative and forward-looking sister project of Wikipedia. On Wikidata you can find various interesting data samples, such as the Cambridge Alumni database. The screenshot shows an example of how the ingestion process is captured in the module's interface. In principle, the module queries an external data source on the basis of identifiers and/or database values, and the external data is then mapped to the database fields of the ingestion database in nodegoat.
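
To make the query-and-map step more concrete, here is a minimal sketch in Python (not part of the nodegoat module itself): it queries the public Wikidata SPARQL endpoint for persons carrying GND and VIAF identifiers and reshapes the result into a simple field mapping, as an ingestion step might do. The target field names are illustrative assumptions, not the actual fields of the ingestion database.

import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

# Persons with both a GND (P227) and a VIAF (P214) identifier on Wikidata.
QUERY = """
SELECT ?person ?personLabel ?gnd ?viaf WHERE {
  ?person wdt:P227 ?gnd ;
          wdt:P214 ?viaf .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

def fetch_persons():
    response = requests.get(
        SPARQL_ENDPOINT,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "ingestion-sketch/0.1"},
    )
    response.raise_for_status()
    rows = response.json()["results"]["bindings"]
    # Map the external values onto (hypothetical) ingestion database fields.
    return [
        {
            "wikidata_uri": row["person"]["value"],
            "name": row["personLabel"]["value"],
            "gnd_id": row["gnd"]["value"],
            "viaf_id": row["viaf"]["value"],
        }
        for row in rows
    ]

if __name__ == "__main__":
    for record in fetch_persons():
        print(record)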

In this example, the identifiers of the Cambridge Alumni database are used to retrieve and map the data. The ingestion module is not limited to particular external data sources: data from image databases can be integrated just as well as literature references or knowledge and information of different provenance. The extension and enrichment of the data sample will be tested in Episodes 2-4, as will the harmonization of external data at the server level by means of scripts, for example format adjustments for dates (see the sketch below). In an international network of Digital Humanities database projects, for which Kaspar Gubler acts as secretary, a data harmonization strategy has been worked on for years. With the ingestion module there is now a new hope that, if the SNSF SPARK project is successful, this network will finally be able to merge its data and thus develop new questions and simplify worldwide cooperation.
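
As an illustration of the kind of server-side harmonization script mentioned above, the following sketch normalises dates from heterogeneous sources into a single ISO-style format before ingestion; the accepted input patterns are assumptions made for the example, not the project's actual rules.

from datetime import datetime

# Illustrative input patterns that external sources might use.
INPUT_PATTERNS = ["%d.%m.%Y", "%Y-%m-%d", "%d/%m/%Y", "%Y"]

def normalize_date(raw: str):
    """Return the date as YYYY-MM-DD (or YYYY for year-only values), or None."""
    raw = raw.strip()
    for pattern in INPUT_PATTERNS:
        try:
            parsed = datetime.strptime(raw, pattern)
        except ValueError:
            continue
        return parsed.strftime("%Y") if pattern == "%Y" else parsed.strftime("%Y-%m-%d")
    return None  # leave unparseable values for manual review

print(normalize_date("24.12.1499"))  # -> 1499-12-24
print(normalize_date("1501"))        # -> 1501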

Cite this article as: Kaspar Gubler: Data Ingestion Episode I – A New Hope, in HistData, 09/06/2020, https://histdata.hypotheses.org/1463.

Kick off SPARK Project: Dynamic Data Ingestion

On February 1, 2020, this SPARK project officially started. Together with Geert Kessels and Pim van Bree (LAB1100), the kick-off meeting took place in Montreux on Lake Geneva, where the milestones for the project were set. The core tasks are the development of a new module for the virtual open-source research environment nodegoat (github) for central data harmonisation and the creation of a common ontology with selected database projects. The kick-off made clear that we have to carry out the module development and the harmonization of the database structures in small steps and in very close coordination, which is why we opted for agile software development and project management. The progress of the project can be followed in this blog. The project will also be presented at several events and conferences, such as the Research Day of the University of Bern (March 2020), in a panel at the Annual Congress of Italian Medievalists (Bertinoro, June 2020), in a presentation at the Annual Congress of Atelier Heloise, a European network on digital academic history (Bologna, September 2020), at a conference on the use of virtual research environments (VRE) with ancient manuscripts (Dorigny, September 2020), and at a conference of the Swiss Academy of Humanities and Social Sciences on 'knowledge locations' (Bern, October 2020), which I will co-organise. The public SPARK workshop with data testing and results is planned for November 2020 in Bern; later, a public lecture on the results will be held at the University of Bern.

Cite this article as: Kaspar Gubler: Kick off SPARK Project: Dynamic Data Ingestion, in HistData, 03/02/2020, https://histdata.hypotheses.org/605.

Database Migration Case Study: Repertorium Academicum Germanicum

The goal of the Repertorium Academicum Germanicum (RAG) is to trace the cultural reach of a pre-modern intellectual leadership and impulse group and thereby gain comprehensive insight into the medieval origins of the modern knowledge society. Within the framework of contextualized prosopography, the project covers around 60,000 scholars with 360,000 observations on their life and career paths. The RAG uses nodegoat as its primary data storage application and research environment. nodegoat is also used to create and publish diachronic geographical and social visualisations (networks).

Work on the RAG began in 2001 under the direction of Rainer Schwinges and Peter Moraw, financed by the Swiss National Science Foundation (SNSF), the German Research Foundation and the Fritz Thyssen Foundation. From 2007 to 2019 the project was funded by the Union of the German Academies of Sciences and Humanities and, from 2008 onwards, also by the Swiss Academy of Humanities and Social Sciences. From 2020 the project will be run at the University of Bern as part of the larger project Repertorium Academicum (REPAC), which is led by Christian Hesse and Kaspar Gubler and advised by Rainer Schwinges.

In 2017, the RAG research project found itself in a difficult situation. The database the RAG had been working with for years was technically so outdated that an update of the software was no longer possible. The frontend was running on MS Access 2003 (sic!) and the backend on an MS SQL Server 2005. A serious omission was that data could be entered completely freely, without technical restrictions such as dependent combination lists. Although there was an internal wiki documenting the rules for data entry, quite a few of these rules were no longer up to date. Accordingly, the large data set was heterogeneous and inconsistent when I started my work at the RAG. Furthermore, with the outdated software constellation it had become more and more difficult for the RAG to publish its research data on the internet, because the data first had to be exported from the SQL Server and imported into a web application, which was very time-consuming and error-prone. Therefore, new software for the RAG had to be evaluated in a timely manner. A colleague at the University of Bern who had attended a nodegoat workshop drew my attention to nodegoat. It was immediately clear to me that nodegoat fulfilled exactly the functions we urgently needed for the RAG's research data, namely data management and data visualization in one and the same software.

We invited LAB1100 to our office in Bern to discuss the data import into nodegoat. Before the meeting, LAB1100 had analysed the RAG data model and, at the meeting, made some decisive suggestions on how we could simplify the data model and make it easier to understand. An important change was that institutions were introduced as location types (e.g. university, school, town, court, princely court, church, monastery) and, in addition, these types were linked hierarchically. In the previous database, such types existed only sporadically; most of the data was assigned only to the main type 'Location', which was not conducive to systematic evaluation.
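
To illustrate the idea of hierarchically linked location types, here is a small sketch using plain Python dataclasses; the type names are taken from the examples above, while the structure itself is only an illustrative assumption, not the RAG data model as implemented in nodegoat.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    name: str
    location_type: str                      # e.g. "university", "town", "monastery"
    parent: Optional["Location"] = None     # hierarchical link to a broader location

    def path(self) -> str:
        """Human-readable chain from the top of the hierarchy down to this location."""
        return self.name if self.parent is None else f"{self.parent.path()} > {self.name}"

# Hypothetical example: a university linked to its town.
bologna = Location("Bologna", "town")
university = Location("University of Bologna", "university", parent=bologna)
print(university.path())  # Bologna > University of Bologna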

The database migration started in September 2017. First, the data was exported from the Microsoft SQL database, then cleaned, sorted and partially reordered in various programming steps. The final import into nodegoat was done with Python via the nodegoat JSON API. By the end of 2017 the database migration was complete, and the RAG teams at the universities of Bern and Giessen were able to enter and visualize research data in nodegoat from January 2018.
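
The following sketch shows what such a final import step could look like, assuming cleaned records are pushed to a nodegoat instance's JSON API with an access token; the host, endpoint path, payload shape and field names below are illustrative assumptions based on the description above, not nodegoat's documented API.

import json
import requests

API_BASE = "https://nodegoat.example.org/api"   # hypothetical host
TOKEN = "YOUR_API_TOKEN"                        # placeholder credential
PROJECT_ID = 1                                  # hypothetical project ID
TYPE_ID_SCHOLAR = 10                            # hypothetical object type ID

def import_scholars(records):
    """Send a batch of cleaned scholar records to the (assumed) ingestion endpoint."""
    payload = {
        "add": [
            {"object_definitions": {"name": rec["name"], "gnd_id": rec.get("gnd_id")}}
            for rec in records
        ]
    }
    response = requests.put(
        f"{API_BASE}/project/{PROJECT_ID}/data/type/{TYPE_ID_SCHOLAR}/object",
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        data=json.dumps(payload),
    )
    response.raise_for_status()
    return response.json()

cleaned = [{"name": "Johannes Example", "gnd_id": "123456789"}]
# import_scholars(cleaned)  # run once credentials and IDs are configured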

The biggest advantage of nodegoat for us is the acceleration of the work process: we now need a fraction of the time compared to the previous database to enter, analyze and publish research data. In addition, nodegoat's automated data checks keep data consistency under control, which is very important given the RAG's large dataset.

Cite this article as: Kaspar Gubler: Database Migration Case Study: Repertorium Academicum Germanicum, in HistData, 03/02/2020, https://histdata.hypotheses.org/545.

Project Website: rag-online.org
Public User Interface: database.rag-online.org/viewer