The linking of research data has been a dominant topic for years, especially in digital history. Linked Open Data (LOD) is the buzzword at conferences and in research projects. However, the greatest challenge is not collecting such data from the internet, but harmonising it, because research databases are usually structured differently. It is therefore not surprising that, despite many initiatives, no research project in digital history has yet managed to harmonise data across several structural levels of the databases involved. This means, for example, not only linking persons across databases by their names, but going deeper into the data structure to harmonise, for instance, the geographical origin or attributes of a person’s education. But that is the aim: to answer scientific questions through structural data harmonisation.

This is where our SPARK project comes in. The third and final phase of the project (Episode 3) was completed in January 2021. What are its core results? In essence, a software module (the DDI module, for ‘dynamic data ingestion’) and a method: research data is collected from different source databases and ingested on a central server using the module according to the spider principle, creating a new metadatabase. The harmonisation of the collected data in this newly built database is done as far as possible during the data ingestion itself, by mapping the database fields of the source databases onto corresponding database fields of the metadatabase. If such a mapping is not possible, or only partially possible, because the database fields of the source database and the metadatabase are too dissimilar, an algorithm can be used in a second step, as soon as the data is stored on the central server, to bring uniformity to the data through reconciliation. In addition, the data can be automatically reclassified in order to standardise it. These measures prepare the data for analysis and ultimately for publication, both of which can be done in the virtual research environment Nodegoat.

We will explain the procedure with the help of a case study. In this study, we collected data from related projects that research the history of universities and have joined together in a network, the Atelier Heloïse. The common interest of these projects is a prosopographically based history of universities, scholars and academic knowledge in pre-modern Europe. The four projects were chosen more or less at random; there are numerous other important database projects in the Atelier, but we had to limit ourselves to these four. It will, moreover, be the task of the Atelier to bring all of the databases together in a joint, international project. The projects we have been working with cover the history of the universities of Bologna (http://asfe.unibo.it/it), Padova (https://www.ottocentenariouniversitadipadova.it), Paris (http://studium.univ-paris1.fr) and the universities of the Old Empire in the project Repertorium Academicum Germanicum (https://rag-online.org). The metadatabase we created from the four projects contains about 200,000 students and scholars from all over Europe in the period 1200-1800, with the projects covering different time spans.
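To give a schematic impression of what mapping source fields onto a shared metadatabase can mean, here is a minimal sketch in Python. All field names, records and the harmonise helper are invented for illustration; in the actual project the mapping is configured through Nodegoat’s graphical interface, as described below.

```python
# Hypothetical field mappings: the source databases describe the same kind of
# information under different field names; mapping them onto one shared schema
# is the first step of harmonisation.
FIELD_MAPPINGS = {
    "source_a": {"nom": "name", "lieu_origine": "place_of_origin", "grade": "degree"},
    "source_b": {"nome": "name", "provenienza": "place_of_origin", "titolo": "degree"},
}

def harmonise(record: dict, source: str) -> dict:
    """Map a source record onto the metadatabase fields; anything that cannot
    be mapped is kept aside for later reconciliation or reclassification."""
    mapping = FIELD_MAPPINGS[source]
    harmonised = {mapping[field]: value for field, value in record.items() if field in mapping}
    harmonised["unmapped"] = {field: value for field, value in record.items() if field not in mapping}
    return harmonised

print(harmonise({"nom": "Petrus de Alliaco", "lieu_origine": "Compiègne"}, "source_a"))
# -> {'name': 'Petrus de Alliaco', 'place_of_origin': 'Compiègne', 'unmapped': {}}
```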
The tools for collecting, ingesting (1), reconciling (2) and reclassifying (3) the data in Nodegoat also represent the methodological approach to data harmonisation as a prerequisite for data analysis (visualisations, network analyses) and finally for the publication of the results on the internet.
As a result of this approach, the places of origin of the students and scholars in the four projects were brought together on a map for the first time in research, impressively demonstrating the potential of international data networking. The map is of course only the starting point for deeper analyses. Only through a joint analysis of the four research projects, which describe the areas of origin of their universities’ students and their sources, can a synthesis worked out by the projects together lead to new insights and research questions.

But how can such a map be created? We will now take a look at this, starting with data ingestion. In order to collect data, we must first make two settings in Nodegoat: the definition of the Linked Data Resource and the definition of the ingestion process. To create these definitions, we use a graphical interface, which not only simplifies our work but also makes the data ingestion process transparent for all team members (project members, programmers). The graphical interface is thus a very important tool for visual communication, enabling a common understanding of the structures of the data sources. In the graphical interface, all database fields can be made visible to the project team and then assigned together to the new metadatabase. A clear mapping process, combined with very good knowledge of the database fields (and their meaning, especially in the humanities), is the success factor not only for ingestion but also for database migrations in general. In principle, the graphical interface helps historians and programmers to find a common language and understanding.

In the Linked Data Resource module, it must first be defined whether the source is an API or a SPARQL endpoint. Then a test query is constructed, for example for a person. This requires the identifier of this person, which functions as a variable for all persons of the source database from which data is to be ingested. The mapping process follows, and the database fields of the source database are assigned to the corresponding fields in the metadatabase. By mapping as closely as possible, harmonisation of the collected data can already be achieved. However, if the structures of the source database and the metadatabase are too different, the data is still imported and subsequently harmonised with the data matching process in the reconciliation module. Things can get complicated if, in addition, the data formats of the source database are not compatible with the metadatabase. In such a case, the data can be converted before the import, or only certain information, rather than the entire content, can be extracted from the source database fields.

With the reconciliation module, we can then search the imported, heterogeneous data for specific terms that we have previously defined in a vocabulary. The terms found are automatically saved in the metadatabase. At the Atelier Heloïse conference, we demonstrated such a procedure using the places of origin of Parisian students as an example. The places of origin are not georeferenced in the Paris database, and only the names of the places are available in the application programming interface (API). With reconciliation, we can assign geopoints to these places. To do this, we first import the places from the source database and then use the reconciliation module.
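The next paragraph describes how this is configured in Nodegoat. Purely to illustrate the underlying idea of matching imported values against a predefined, georeferenced reference list, a much simplified sketch might look like this (place names, identifiers and coordinates are invented or approximate examples, not data from the projects):

```python
# Hypothetical reference list as it might exist in the metadatabase: each
# place carries approximate coordinates and a placeholder GeoNames identifier.
REFERENCE_PLACES = {
    "compiègne": {"geonames_id": 1000001, "lat": 49.42, "lon": 2.83},
    "orléans":   {"geonames_id": 1000002, "lat": 47.90, "lon": 1.90},
}

def reconcile(place_name: str):
    """Look an imported place name up in the reference list.
    A simple normalised string comparison stands in here for the more
    sophisticated matching algorithm of the reconciliation module."""
    return REFERENCE_PLACES.get(place_name.strip().lower())

# Imported, non-georeferenced place names, e.g. as delivered by a source API.
for name in ["Compiègne", "Orléans", "Villa ignota"]:
    hit = reconcile(name)
    print(name, "->", hit if hit else "no match, left for manual checking")
```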
We configure the reconciliation module so that the names of the places in the Paris database are compared with a reference list of places with geo-coordinates that exists in our metadatabase. The places in this reference list also carry identifiers from GeoNames, an internationally used geographic reference system. In the reconciliation module, we now set the algorithm not only to search for the Paris names in the reference list, but also to store the georeferenced location (which contains the coordinates) when a hit is made. In this way, we can easily visualise the places of origin of the Paris database on a map and also check the data qualitatively in a simple way, because the map is interactive: clicking on a point takes us directly to the locations and their georeferences. Of course, the reconciliation module works for any data type, not only for geodata. For example, texts can also be searched for specific terms with the algorithm, and the hits are saved automatically.

If at this point of the ingestion process the data is still not uniform enough for an evaluation, it can additionally be classified with the reclassification module. The principle of reclassification is that we query the data according to certain criteria and use this query to classify the results automatically. Such reclassifications are useful when a project wants to organise a lot of complex data and prepare it for data analysis. However, automatic reclassification is also very important for maintaining data consistency, as inconsistent or incorrect entries can be filtered out automatically.

Let’s take an example. In our case study, the four projects use different categories and names for academic degrees, although degrees were already classified in a remarkably uniform way in pre-modern Europe. Of course, one must take into account that the same degree can have different qualities depending on the university. But we can overcome this challenge as well with reclassification, by going from the general to the specific. In the following, we look at the academic degrees of jurists. To reclassify the jurists, we first create a query in Nodegoat and check whether the expected results appear. Then we use this query in the reclassification module and give it a term. This term is then used to classify the data found. Reclassification is not bound to certain types of data: we can classify people, places, observations, texts, time periods or anything else.

In our case, we can classify all jurists of the four projects accordingly and thus quickly obtain a general overview of the areas of origin and study of these persons. Of course, we have to take into account that the projects cover different periods and look at their definitions of jurists in detail. This follows the overview and is part of the qualitative data evaluation, where we can further differentiate the data and, for example, reclassify scholars who had studied Roman law. For this group of people, we can then highlight the places of origin in colour and thus see the spaces of origin and communication of these jurists. If further data is available, for example information on the activities of the jurists as in the Repertorium Academicum Germanicum (RAG), we can also see where legal knowledge was transferred with these persons – whom we regard as ‘knowledge carriers’. In this way, we could show the spread of law, and of Roman law in particular, in pre-modern Europe.
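As a schematic illustration of this principle (a stored query plus a term that classifies everything the query finds), here is a minimal sketch with invented records; in Nodegoat itself this is done with saved queries in the reclassification module:

```python
# Hypothetical, already harmonised person records in the metadatabase.
people = [
    {"name": "Person A", "degree": "doctor decretorum"},  # canon law
    {"name": "Person B", "degree": "doctor legum"},       # Roman (civil) law
    {"name": "Person C", "degree": "magister artium"},
]

# Each reclassification pairs a query (here a simple predicate) with the term
# used to classify the records the query finds.
RECLASSIFICATIONS = [
    (lambda p: "decretorum" in p["degree"] or "legum" in p["degree"], "jurist"),
    (lambda p: "legum" in p["degree"], "jurist (Roman law)"),
]

for person in people:
    person["classes"] = [term for query, term in RECLASSIFICATIONS if query(person)]

for person in people:
    print(person["name"], "->", person["classes"])
```

Going from the general term (‘jurist’) to the more specific one (‘jurist (Roman law)’) mirrors the step-by-step refinement described above.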
Of course, this also works for the other disciplines, such as medicine or theology, as well as for the large number of scholars holding a ‘Magister Artium’ degree. It is then up to the researcher to classify, interpret, describe and, where necessary, correct and refine the results of the reclassification. With the procedure described, however, we are able to combine quantitative and qualitative research and thus reconstruct European knowledge spaces.

With data ingestion, we can not only import and evaluate data from research projects but also, in parallel, draw on other Linked Open Data sources to supplement or enrich the data set of our metadatabase, for example by querying the data on Wikidata that is linked to a person in our metadatabase. It would go beyond the scope of this article to look at all the features of data analysis in Nodegoat, for example the network analysis modules, which can of course also be applied to our metadatabase. In any case, the data can easily be searched in full text and/or filtered specifically with complex, combined queries. It is also possible to query the data spatially by drawing a polygon in GeoJSON and simply copying its code into the database field that contains the geoinformation. This feature enables us to reconstruct, search and analyse specific knowledge spaces. Such spaces, like other results (data sets), can be published in so-called data scenarios using an internet module that is configured in the backend of Nodegoat. A scenario is understood to be a data set with the corresponding visualisation settings.
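To illustrate what such a spatial selection does conceptually, here is a minimal point-in-polygon sketch; the polygon, coordinates and place records are invented, and in Nodegoat the pasted GeoJSON itself drives the filtering:

```python
import json

# A hypothetical GeoJSON polygon (coordinates are illustrative only) roughly
# outlining a region of interest.
polygon_geojson = json.loads("""
{
  "type": "Polygon",
  "coordinates": [[[7.0, 46.0], [11.0, 46.0], [11.0, 49.0], [7.0, 49.0], [7.0, 46.0]]]
}
""")

def point_in_polygon(lon, lat, ring):
    """Ray-casting test: is the point inside the polygon's outer ring?"""
    inside = False
    j = len(ring) - 1
    for i in range(len(ring)):
        xi, yi = ring[i]
        xj, yj = ring[j]
        if (yi > lat) != (yj > lat) and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical georeferenced places of origin from the metadatabase.
places = [
    {"name": "Basel",   "lon": 7.59,  "lat": 47.56},
    {"name": "Bologna", "lon": 11.34, "lat": 44.49},
]

ring = polygon_geojson["coordinates"][0]
selected = [p["name"] for p in places if point_in_polygon(p["lon"], p["lat"], ring)]
print(selected)  # -> ['Basel']
```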
Conclusion: The SPARK project makes it possible to link databases (research data) in a simple and transparent way and to harmonise and analyse the linked data with sophisticated tools – may the data be with you!