Category Archives: SPARK NEWS

Review of the X workshop of Atelier Heloise

The volume under review contains the contribution by van Bree / Gubler / Kessels on the results of the SNSF SPARK project ‘Dynamic Data Ingestion’ (data integration, harmonisation and reconciliation of Linked Open Data):

QFIAB 103 Review X Workshop Heloise (PDF)

SPARK

Forschungsdaten vernetzen, harmonisieren und auswerten

Kaspar Gubler: Forschungsdaten vernetzen, harmonisieren und auswerten: Methodik und Umsetzung am Beispiel einer prosopographischen Datenbank mit rund 200.000 Studenten europäischer Universitäten (1200–1800), in: Oberdorf, Andreas (Hrsg.): Digital Turn und Historische Bildungsforschung. Bestandesaufnahme und Forschungsperspektiven, Bad Heilbrunn, 2022, S. 127-147.

https://library.oapen.org/handle/20.500.12657/57392

New publication from the SNSF SPARK project ‘Dynamic Data Ingestion’

Kaspar Gubler, Pim van Bree, Geert Kessels: Server-side Data Harmonization through Dynamic Data Ingestion. A Centralized Approach to Link Data in Historical Research, in: Fonti per la storia delle popolazioni accademiche in Europa. Sources for the History of European Academic Communities. X Atelier Héloïse, a cura di Gian Paolo Brizzi, Carla Frova, Ferdinando Treggiari, Bologna 2022, pp. 9-14.

Dynamic data ingestion: Gubler / van Bree / Kessels (Only the table of contents is available)

Nodegoat Workshop – Get Linked Open Data into Nodegoat

“These workshops follow a workshop series earlier this year, organised in collaboration with the University of Bern in the framework of the SNSF SPARK project ‘Dynamic Data Ingestion’ as well as two of the NEP4DISSENT Summer Schools …”

https://nodegoat.net/blog.s/56/linking-your-historical-sources-to-open-data-workshop-series-organised-by-cost-action-nep4dissent

I can highly recommend the workshop taking place on 13 and 21 September 2021. In particular, it will show how to import Linked Open Data into Nodegoat via an interface, which does not require any special programming skills, allowing you to devote your energy and brain power to the structure, content and consistency of the imported research data.

 

SPARK Workshop on Dynamic Data Ingestion

Programme Session 4 (26-05-2021)

“Introduction by Kaspar Gubler

Welcome to the fourth and final session of the SPARK Nodegoat Workshop. We are very happy about the participation and the many responses to the workshop. I can only recommend that you organise such a workshop yourself. With a video conference this is no longer a problem. For example, several projects could get together and organise a workshop, perhaps on a specific topic related to Nodegoat. A comment on Nodegoat as a research infrastructure in the humanities: Nodegoat is jointly funded by various projects and universities, a model that I believe is the solution for the long-term development of a digital infrastructure for the humanities. It is crucial that such an infrastructure can be used by different disciplines and not just by a single, specific project. If only one project can use a piece of software, it is not a true infrastructure. In contrast, as we have seen, Nodegoat can be used by different humanities disciplines. Another advantage of Nodegoat as a research infrastructure is that it does not require difficult software installations or programming skills. Thus, an infrastructure like Nodegoat allows users to focus on research: they don’t have to deal with technical things first. I think this is a big problem in the digital humanities: there is too much focus on the technical stuff and not enough on our core competence, which is to answer research questions with digital methods. In my opinion, too often we only talk about the possibilities of digital methods instead of delivering research results. It’s like constantly cleaning your glasses instead of just putting them on.”

14:00 Welcome and recap of last week’s session

14:15 Ingestion of publications from the Dutch Royal Library SPARQL endpoint

14:50 Break

15:00 Ingestion of SameAs references from lobid.org

15:15 Ingestion of Wikimedia Commons URLs from Wikidata

15:50 Break

16:00 Ingestion (TBD)

16:35 Q&A

Slides:

 

Linked Data Resources Suggestions

Linked Data Resources

Label Value
Name Query the KB SPARQL endpoint based on VIAF ID
Protocol SPARQL
URL http://data.bibliotheken.nl/sparql?default-graph-uri=&query=
URL Options &format=json&timeout=0&debug=on
Query SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)

WHERE {

?pub schema:author ?person.

[query=viaf]?person schema:sameAs <http://viaf.org/viaf/[variable=id]71399367[/variable]>.[/query]

?pub schema:name ?name.

?pub schema:author ?author_node.

?author_node schema:sameAs ?author_ids.

?pub schema:publication ?publication_node.

?publication_node schema:startDate ?date.

}

GROUP BY ?pub ?name ?date

Conversion INPUT

http://www.wikidata.org/entity/Q123034, http://viaf.org/viaf/71399367

Script:

// INPUT holds the comma-separated URIs shown above under 'Conversion INPUT'.
const uris = INPUT;

// Extract the identifier that follows 'viaf/' (here '71399367').
const arr_viaf = uris.match(/viaf\/(\w+)/i);
const viaf_identifier = arr_viaf[1];

// Hand the identifier back to Nodegoat for use in the query above.
OUTPUT = {'viaf_identifier': viaf_identifier};

Original Query SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)

WHERE {

?pub schema:author ?person.

?person schema:sameAs <http://viaf.org/viaf/71399367>.

?pub schema:name ?name.

?pub schema:author ?author_node.

?author_node schema:sameAs ?author_ids.

?pub schema:publication ?publication_node.

?publication_node schema:startDate ?date.

}

GROUP BY ?pub ?name ?date

 

Label Value
Name Query the lobid.org API for ‘SameAs’
Protocol API
URL https://lobid.org/gnd/
URL Options .json
Query [query=id][variable]118637533[/variable][/query]

 

Label Value
Name Query Wikidata for Wiki Commons URLs based on Wikidata ID
Protocol SPARQL
URL https://query.wikidata.org/sparql?query=
URL Options &format=json
Query SELECT (CONCAT("https://commons.wikimedia.org/wiki/Category:",STR(?commons)) as ?commons_link)

WHERE {

<[query=id]http://www.wikidata.org/entity/[variable=id:uri-identifier]Q60866[/variable][/query]> wdt:P373 ?commons.

}

URI Template http://www.wikidata.org/entity/[[identifier]]
Link Click to open Query

 

Label Value
Name DiJeSt
Protocol SPARQL
URL http://tdk-jbs.cs.technion.ac.il:8890/sparql?default-graph-uri=&query=
URL Options &format=application%2Fsparql-results%2Bjson&timeout=0&debug=on&run=+Run+Query+
Query SELECT DISTINCT ?book ?title ?author_name

WHERE {

?book <http://purl.org/dc/terms/title> ?title .

[query=name]FILTER regex(?title, "[variable]קודש[/variable]", "i")[/query]

?book <http://purl.org/dc/terms/creator> ?author_node .

?author_node <https://schema.org/name> ?author_name .

FILTER (lang(?author_name) = 'und-hebr')

}

OFFSET [[offset]] LIMIT [[limit]]

 

Label Value
Name The Getty Thesaurus of Geographic Names
Protocol SPARQL
URL http://vocab.getty.edu/sparql.json?query=
URL Options
Query SELECT DISTINCT ?place ?label ?parents (GROUP_CONCAT(?altlabel;SEPARATOR=",") AS ?altlabels) {

?place skos:inScheme tgn: .

?place luc:term "[query=name][variable]lemberg[/variable][/query]".

?place gvp:prefLabelGVP [xl:literalForm ?label].

 

OPTIONAL { ?place xl:altLabel [ gvp:term ?altlabel ] }

OPTIONAL { ?place gvp:parentStringAbbrev ?parents }

}

GROUP BY ?place ?label ?parents

OFFSET [[offset]] LIMIT [[limit]]

 

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 3

12 May 2021

“Introduction by Kaspar Gubler

I would like to welcome to this session all participants who are already familiar with Nodegoat, have therefore skipped sessions 1 and 2, and will now attend sessions 3 and 4. In sessions 1 and 2 we created a data model for people and books and imported data, including geo-coordinates, into Nodegoat by uploading CSV files. Importing data into Nodegoat will also be the central topic of today’s session. There are three ways to import data into Nodegoat.

1) We can upload data into Nodegoat as a CSV file, as we did in session 2.

2) We can import data directly into Nodegoat using a graphical interface without having to upload it; we will look at this process of dynamic data ingestion today.

3) We can import data into Nodegoat via an application programming interface (API), which, unlike 1) and 2), requires programming knowledge.

Of course, we can also start a project in Nodegoat without importing data first, not even geodata. In every Nodegoat installation, the object ‘City’ is already present. ‘City’ contains about 130k places, each with geo-coordinates and a GeoNames ID. ‘City’ is a collaborative object: all projects of a Nodegoat installation can add and use ‘City’ locations and thus benefit from each other.”

Programme Session 3 (12-05-2021)

14:00 Welcome and recap of last week’s session

14:15 Create Linked Data Resource to query GND

14:30 Create Linked Data Resource to query VIAF

14:50 Break

15:00 Ingestion of VIAF IDs from Wikidata

15:50 Break

16:00 Ingestion of biographical data from Wikidata

16:35 Looking forward to next session

16:45 Q&A

  • Can the mapping then only be done per single object? Or can you run it on a set (like reconciliation in OpenRefine) and get unambiguous results automatically? → Yes, both.
  • Can I add/concatenate more of the JSON fields to the “label”? Because just the preferred name may not be sufficient to identify which one is the correct entry … → Yes, include more of the relevant values in your response; if necessary, open a “filter” dialog, where the additional fields/values will be shown.
  • Does nodegoat also support APIs that return XML instead of JSON? → No
  • Wikidata SPARQL query to only get Gregorian dates, example: https://w.wiki/3KFq → thanks!
  • Not really related to this session, but I don’t see this referred to in any of the sessions: is there a nodegoat API from which one can draw the visualizations? Or even simpler ‘embeds’? → Public User Interfaces support embedding.
    • Does it expose JSON representations of our objects? → next session
    • And is there a SPARQL endpoint (I know I would have to specify a kind of mapping)?
  • I am trying to create a LD resource from my API. The query https://data.geo-kima.org/api/Variants/PlaceVariants/8964/100/1 works outside nodegoat. How can I split this to fill the URI and query in nodegoat? (I get an error message when I use https://data.geo-kima.org/api/ in the URL and Variants/PlaceVariants/8964/100/1 in the query.) → next session
  • Is it possible to interact with the database directly using SQL? → Not by design; the API should be used

Links

https://nodegoat.net/usecases

Slides:

 

Preparation

If you are unfamiliar with the benefits of adding external identifiers to your dataset, please read this guide: https://nodegoat.net/guides/externalidentifiers. That example shows how to update one object at a time; with data ingestion we can update multiple objects at once (see below, Data ingestion with Nodegoat).

Human Readable vs Machine Readable

Browse GND Data

Via Graphical User Interface (GUI):

https://d-nb.info/gnd/118637533 / https://lobid.org/gnd/118637533

Via Application Programming Interface (API): https://lobid.org/gnd/118637533.json

Query GND Data:

Via GUI: https://lobid.org/gnd/search?q=Zwingli

Via API: https://lobid.org/gnd/search?q=Zwingli&filter=type:Person&format=json
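For orientation, here is a minimal sketch of how a script could read the machine-readable record linked above (Node.js 18+, which provides fetch globally; the field names preferredName and sameAs follow the lobid-gnd JSON output and may need adjusting if the schema differs):

// Fetch the JSON record for GND ID 118637533 from lobid.org and print two fields.
async function readGndRecord() {
  const response = await fetch('https://lobid.org/gnd/118637533.json');
  const record = await response.json();
  console.log(record.preferredName); // preferred name of the entity
  console.log(record.sameAs);        // links to other authority files (VIAF, Wikidata, ...)
}

readGndRecord();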

 

Data ingestion with Nodegoat

With the data ingestion in Nodegoat, you can enrich or update multiple objects with Linked Open Data (LOD) from external data sources. This requires two steps. First, we configure a Linked Data Resource in Nodegoat, i.e. a query to the interface where the LOD is available (the data source). Secondly, we configure a data ingestion process, i.e. the mapping and storage of the LOD in Nodegoat. Below are some examples of how to configure interfaces to Linked Data Resources.

Linked Data Resources

Label Value
Name Search the GND API via lobid.org
Protocol API
URL https://lobid.org/gnd/search?q=
URL Options &filter=type:Person&format=json
Query [query=name][variable]zwingli[/variable][/query]&from=[[offset]]&size=[[limit]]

 

Label Value
Name Search the VIAF API
Protocol API
URL http://www.viaf.org/viaf/AutoSuggest?query=
URL Options
Query [query=name][variable]zwingli[/variable][/query]

 

Label Value
Name Query Wikidata for VIAF ID based on GND ID
Protocol SPARQL
URL https://query.wikidata.org/sparql?query=
URL Options &format=json
Query SELECT ?person ?viaf

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P214 ?viaf .}

Link Click to open Query

 

Label Value
Name Query Wikidata for Religion based on GND ID
Protocol SPARQL
URL https://query.wikidata.org/sparql?query=
URL Options &format=json
Query SELECT ?person ?religion ?religion_label

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P140 ?religion .

?religion rdfs:label ?religion_label .

FILTER(LANG(?religion_label) = "en")

}

Link Click to open Query

 

Label Value
Name Query Wikidata for Date of Birth based on GND ID
Protocol SPARQL
URL https://query.wikidata.org/sparql?query=
URL Options &format=json
Query SELECT ?person ?date_of_birth

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P569 ?date_of_birth .

}

Conversion INPUT

1484-01-10T00:00:00Z

Script:

// INPUT is the ISO 8601 timestamp shown above under 'Conversion INPUT'.
const date = new Date(INPUT);

// Note: getDate(), getMonth() and getFullYear() use the local time zone.
const day = date.getDate();
const month = date.getMonth() + 1; // getMonth() is zero-based
const year = date.getFullYear();

// Return the date in day-month-year form.
var OUTPUT = {'date': day + '-' + month + '-' + year};

Link Click to open Query

Click to open Query with reference statement

 

To start the ingestion process, we activate it for our project in ‘Management’. Then we define the process, i.e. the mapping and the storage of the LOD, in the ‘Data’ section, as shown in the picture. We can add new objects (or values of objects, i.e. ‘object descriptions’ in Nodegoat) if they do not exist already, or update existing objects, for example the object ‘Person’.

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 2

05 May 2021

“Introduction by Kaspar Gubler

We are very pleased to have so many interesting projects and engaged participants in the workshop. And more participants have joined for session 2. For example, someone from Hamburg who used to do network analysis with the software Gephi and now wants to try out Nodegoat. A new participant is an archaeologist from the University of Bern who has documented sites in Excel and wants to import and visualise them in Nodegoat. Such a data import is not difficult, especially if the data has been entered consistently in Excel. Another new participant plans to visualise cultural heritage with Nodegoat. A good example of how Nodegoat can be used for the presentation of digital cultural heritage (and thus also for art history) is the encyclopaedia of Romantic Nationalism: https://ernie.uva.nl/viewer.p/21/52/types/all/grid

Terminology

Before we start, I would like to remind you of the terminology of Nodegoat, in which we speak of Objects and Sub-Objects as well as Categories. We describe these Objects (like rows in Excel) in Nodegoat with Object descriptions (like columns in Excel). An Object description can be a text, a link, a picture or a link to another Object or a Category (= reference = relation). In our data model we can define the kind of description for each Object description. This gives us the possibility to describe an Object very precisely.
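Purely as a conceptual illustration (this is not Nodegoat’s internal data format, and all values are made up), an Object with its Object descriptions and a Sub-Object could be pictured like this:

// Conceptual sketch of the Nodegoat terminology, with placeholder values.
const person = {                       // an Object of the Object Type 'Person'
  objectDescriptions: {
    name: 'Jane Doe',                  // plain text description
    viaf: 'http://viaf.org/viaf/0000', // external identifier (link)
    capacity: 'Printer'                // reference to a Category (= relation)
  },
  subObjects: [
    {                                  // a Sub-Object carries date and location
      type: 'Birth',
      date: '1500-01-01',
      location: { geonamesId: 0, lat: 0.0, lon: 0.0 } // placeholder georeference
    }
  ]
};
console.log(person.objectDescriptions.name);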

Find a common language

Important: if you want to communicate with another Nodegoat project it is very helpful if you use the terminology mentioned. So the first questions to another project would be: what Objects do you have? And how do you describe your Objects with what kind of Object descriptions? In which Sub-Object do you store your geo references? If you want to get in touch with other projects, you can organise your own zoom meetings on specific questions about Nodegoat. I see many projects that have a lot in common and could certainly benefit from an exchange. I would also like to draw your attention to the Nodegoat Day on 4 June, where you can present your project or your project idea.”

14:00 Welcome and recap of last week’s session

14:15 Object Type ‘Place’ Data Model + Data Entry

14:30 Object Type ‘Place’ Data Import

14:50 Break

15:00 Object Type ‘Person’ Data Import

15:30 Filter + Visualisation

15:50 Break

16:00 Scope & Visual Settings

16:15 Conditions & Export

16:35 Looking forward to next session

16:45 Q&A

  • Difference between Gephi and Nodegoat? → Nodegoat starts from data management and adds visualization functionality
  • Will there be a possibility to store Nodegoat data in a data repository like Zenodo? → There are rumours that a Zenodo module for Nodegoat is coming; currently it is technically no problem to do this manually
  • How to download a “dump”? → Via the API: export a dump of the data and of the model in JSON
  • Is it possible to export a complete project (instead of individual CSV sheets)? → Yes, via the API you can export all of the data and the data model in JSON
  • How can I “undo” an import from CSV when I notice that some things did not work as intended? Can I mass delete objects? → Yes, you can mass delete objects via the graphical interface by selecting all objects and deleting them with the grey multi button, or empty a whole Object Type in ‘Model’ by clicking ‘empty’, or mass delete objects via the API
  • Can you import by just giving the URL of the Google Doc? → Yes, via the Nodegoat API; check what Google allows you to do via its API
  • Can visualisations be downloaded in any way to the desktop? → Yes, as a screenshot, or for high resolution use the ‘Capture’ functionality in the visualisation settings
  • Follow-up question to session 1: can you create an itinerary of a person (object) with just knowing the sequence of the locations but not the dates? → Yes, by storing vague dates in Nodegoat you can make a statement in vague dates (‘Chronology’) like ‘Studies came after Birth’. Or use 1, 2, 3 etc. as dates, or use the sequence identifier in a nodegoat date: if you know a year, use ‘1880 1’, ‘1880 2’, ‘1880 3’
  • Can we include both a geometry (polygon) AND a precise coordinate in a sub-object? Or as separate sub-objects of the same object? → Yes, both options are possible! One geometry can be polygon + point(s) + line(s). Or each in a separate sub-object to be able to add attributes.
  • Are there any example projects that depict more complex routes? → http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo
  • Can you add your own icons to be displayed on the map? → Yes, in SVG format.
  • Nodegoat as a tool to visualise routes or itineraries? → Yes
  • Is there also a method to show place-specific meta-information on the map instead of the person’s? → Yes
  • In case of data model refactoring, how should we deal with the already inserted data? For example, if one wants to normalise repetitive data by creating a new object type, how can the existing data be migrated to the new data model? Is Export + Transform + Import the only way? → Yes, but because you now have nodegoat IDs, it’s a matter of a straightforward mapping. Or use an ingestion process (session 4).
  • Is it possible to mark a node with multiple conditions (e.g. one condition for people born in the Low Countries (orange) + people who died in Italy (blue), so objects that fall in both categories are marked in two colours)? → Yes

Links

http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo

Slides:

Download Google Sheets as CSV files:

RAG Places small selection: https://docs.google.com/spreadsheets/d/1zvcVj66nr1tm7PAmNJSSf2BI_o5e2rrPCSE2l4PAsHQ/
RAG People small selection: https://docs.google.com/spreadsheets/d/1K2SGF0TkQTVnZ5WQqgMc0MbJdGps1kA_oWVL3Qir6rs/

Guides: https://nodegoat.net/guides/csvfile and https://nodegoat.net/guides/gazetteer

Another sample data Import:
https://histdata.hypotheses.org/nodegoat-tutorials
Tutorial No 10, to create this map (positions of ships):

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 1

28 April 2021

Introduction by Kaspar Gubler

“Welcome to the Nodegoat SPARK workshop ‘Dynamic Data ingestion’ We are very happy about the 140 participants from all over the world. On the map, which was of course created with Nodegoat, we can see the places of origin of the participants. They come from very different disciplines: history, literary history, German studies, English studies, legal history, historical geography, art history, musicology, theatre studies, film studies, African studies, Islamic studies, sociology, digital humanities and also from archives and libraries. This impressively shows how Nodegoat has established itself in the last ten years as an international research infrastructure for the humanities, an interdisciplinary research infrastructure that helps digital research to gain new insights and more visibility – and facilitates the collaboration of projects, especially beyond one’s own subject boundaries. Pim van Bree and Geert Kessels began developing Nodegoat as part of a project at the University of Amsterdam in 2011. Pim van Bree has a Master’s degree in Media Studies, Geert Kessels a Master’s degree in History. Both are also accomplished software developers. Their particular strength is that they know both worlds, the world of humanities and the world of programming. They combine these two worlds in their workshops, which they conduct at educational institutions worldwide. With their deep knowledge of methods, sources and questions in the humanities, they can create fitting and working data models for the different disciplines in Nodegoat to extract new scientific information and knowledge from the data.”

Programme

14:00 Welcome by Kaspar Gubler

14:15 General introduction to nodegoat

14:40 Login and set up your nodegoat project

14:50 Break

15:00 Object Type ‘Person’ Data Model

15:15 Object Type ‘Person’ Data Entry

15:35 Classification ‘Capacity’ Data Model + Data Entry

15:50 Break

16:00 Object Type ‘Book’ Data Model

16:20 Object Type ‘Book’ Data Entry

16:35 Looking forward to next session

16:45 Q&A

  • I don’t have access to the ‘Model’ section in nodegoat → Check the page clearance. Each nodegoat project has one administrator at the beginning, who can set up additional users: Management > Users > add User > add the user and activate ‘Model’ in the settings for the page clearance (tab)
  • How does the Scope work? For visualizations? → In the Scope you define which of your database fields you want to use for the visualization, so you activate the field that contains the georeference with the coordinates
  • Are there facilities (planned?) helping to prepare an RDF rendition of the database? → Yes: on one’s own nodegoat installation, you can configure a translation module to translate the data model to some RDF vocabulary
  • Is it possible to export, as a static or dynamic representation, a computed spatio-temporal / network analysis? → Yes
  • Would you say that Nodegoat is in principle also suitable as an image database (with additional descriptions and cross-references)? → Yes, absolutely, see the links below.
  • Regarding custom gazetteers and prosopographies: Are there size limitations? → Not in general; the size limit of the CSV import is set to 60’000 rows at a time
  • Is it better to store a region like Germany as JSON or via Reference > City > autofill option Germany? → It depends on what you want to show on your map: if you are more interested in areas, use GeoJSON; if you generally work with dots on your maps, it may be better to store it as a point like your other data
  • Can we import polygon data from an existing map with territorial circumscription so that we don’t have to draw them by hand? → Yes
  • Can we specify a schema or other constraints to ensure consistency of the data (e.g. Birthdate < Deathdate, no overlapping residence periods etc.)? → You can use visualisation to do some error checking, but there can be no hard enforcement of such constraints at the moment. Or you can filter specifically on Birthdate < Deathdate, more advanced.
  • Does the database harmonise the different sources of location? E.g. if I put “Roma (IT)” from ‘City’ and also a point from ‘Geometry’ which is actually Rome, will the database understand that it is the same? On the map it looks the same, but how is it in the database? Is it linked as the same entity? → More on this next session!
  • Is Arabic script supported or more generally, are scripts running from right to left supported? → Yes: Everything Unicode
  • Can we geo-visualize more than one object type, say author’s places and book’s publishing houses’ places? → Yes: using the Scope (https://nodegoat.net/guides/visualisationsettings)
  • Is there anything in the guides as regards Nodegoat and RDF? → Yes, it’s work in progress, see this sample nodegoat project working with a Subject-Predicate-Object (RDF Format) data model: https://www.manto-myth.org/blog/a-half-dozen-ways-to-die-mythically
  • If I uncheck ‘Fixed field’ in the Object ‘book’ it throws: ‘The data Model does not have a configuration that can be used to generate Object names, please check your settings.’ Why? → Tick/check one or more of the Object Description ‘name’ checkboxes (‘use object description for name’)
  • Can nodegoat handle localised object descriptions? E.g. book reviews in different languages? → Yes
  • Can nodegoat handle uncertain dates / data? → Yes, see the following blogposts:

https://nodegoat.net/blog.s/45/how-to-store-uncertain-data-in-nodegoat-ambiguous-identities

https://nodegoat.net/blog.s/44/how-to-store-uncertain-data-in-nodegoat-conflicting-information

https://nodegoat.net/blog.s/43/how-to-store-uncertain-data-in-nodegoat-incomplete-source-material

https://nodegoat.net/blog.s/42/how-to-store-uncertain-data-in-nodegoat

Slides:

Data Ingestion Episode III – May the linked open data be with you

The linking of research data has been a dominant topic for years, especially in digital history. Linked Open Data (LOD) is the buzzword at conferences and in research projects. However, it is not the collection of such data available on the internet that is the greatest challenge here, but its harmonisation, because research databases are usually structured differently. It is therefore not surprising that, despite many initiatives, no research project in digital history has yet managed to harmonise data across several structural levels of the databases involved. This means, for example, not only linking persons across databases by their names, but going deeper into the data structure to harmonise, say, the geographical origin or attributes of a person’s education. But that would be the aim: to answer scientific questions through structural data harmonisation. This is where our SPARK project comes in. The third and final phase of the project (Episode 3) was completed in January 2021. What are the core results of this project? In essence, they are a software module (the DDI module for ‘dynamic data ingestion’) and a method: research data is collected from different source databases and ingested on a central server using the module according to the spider principle, creating a new metadatabase. The harmonisation of the collected data in this newly built database is done as far as possible already during the data ingestion, by mapping the database fields of the source databases onto corresponding database fields of the new metadatabase. If such a mapping is not possible, or only partially possible, because the database fields of the source database and the metadatabase are too dissimilar, an algorithm can be used in a second step, as soon as the data is stored on the central server, to bring uniformity to this data through data reconciliation. In addition, the data can also be automatically reclassified in order to standardise it. These measures prepare the data for analysis and ultimately for publication, both of which can be done in the virtual research environment Nodegoat.

We will explain the procedure with the help of a case study. In this study, we collected data from related projects that research the history of universities and have joined together in a network, the Atelier Héloïse. The common interest of the projects is a prosopographically based history of universities, scholars and academic knowledge in pre-modern Europe. The four projects were chosen somewhat at random; there are numerous other important database projects in the Atelier, but we had to limit ourselves to these four. Furthermore, it will be the task of the Atelier to bring all of the databases together in a joint, international project. The projects we have been working with cover the history of the universities of Bologna (http://asfe.unibo.it/it), Padova (https://www.ottocentenariouniversitadipadova.it), Paris (http://studium.univ-paris1.fr) and the universities of the Old Empire in the project Repertorium Academicum Germanicum (https://rag-online.org). The metadatabase we created from the four projects contains about 200,000 students and scholars from all over Europe in the period 1200–1800, with the projects covering different time spans.
The tools for collecting, ingesting (1), reconciling (2) and reclassifying (3) the data in Nodegoat also represent the methodological approach to data harmonisation as a prerequisite for data analysis (visualisations, network analyses) and finally for the publication of the results on the internet.

As a result of this approach, the places of origin of the students and scholars in the four projects were united on a map for the first time in research, impressively demonstrating the potential of international data networking. The map is of course only the starting point for deeper analyses. New insights and research questions can only emerge from a joint analysis by the four research projects, each of which describes the areas of origin of its university’s students and its sources, and from a synthesis that the projects work out together. But how can such a map be created? We will now take a look at this, starting with data ingestion. In order to be able to collect data, we must first make two settings in Nodegoat: the definition of the Linked Data Resource and the definition of the ingestion process. To create the definition, we use a graphical interface, which not only simplifies our work but also makes the data ingestion process transparent for all team members (project members, programmers). The graphical interface is thus a very important tool for visual communication, enabling a common understanding of the structures of the data sources. In the graphical interface, all database fields can be made visible to the project team and then assigned together to the new metadatabase. A clear mapping process, combined with very good knowledge of the database fields (and their meaning, especially in the humanities), is the key success factor not only for the ingestion, but also for database migrations in general. In principle, the graphical interface helps historians and programmers to find a common language and understanding. In the Linked Data Resource module, it must first be defined whether the source is an API or a SPARQL endpoint. Then a test query is constructed, for example for a person. This requires the identifier of this person, which functions as a variable for all persons of the source database from which data is to be ingested. Then the mapping process follows and the database fields of the source database are assigned to the corresponding fields in the metadatabase. By mapping as closely as possible, harmonisation of the collected data can already be achieved. However, if the structures of the source database and the metadatabase are too different, the data will still be imported and subsequently harmonised with the data matching process in the reconciliation module. Things can get complicated if, in addition, the data formats of the source database are not compatible with the metadatabase. In such a case, the data can be converted before the import, or only certain information, rather than the entire content, can be extracted from the source database fields. With the reconciliation module, we can then search the imported, heterogeneous data for specific terms that we have previously defined in a vocabulary. The terms found are automatically saved in the metadatabase. At the Atelier Héloïse conference, we demonstrated such a procedure using the places of origin of Parisian students as an example. The places of origin are not georeferenced in the Paris database and only the names of the places are available in the application programming interface (API). With reconciliation we can assign geopoints to the places. To do this, we first import the places from the source database and then use the reconciliation module.
We configure this module so that the names of the places in the Paris database are compared with a reference list of places with geo-coordinates that exists in our metadatabase. The places in this reference list also have identifiers from GeoNames, an internationally used geographic reference system. In the reconciliation module, we now set the algorithm not only to search for the Paris names in the reference list, but also to store the georeferenced location (which contains the coordinates) when a hit is made. In this way we can easily visualise the places of origin of the Paris database on a map and thus also check the data qualitatively in a simple way, as it is an interactive map: clicking on a point takes us directly to the locations and georeferences. Of course, the reconciliation module works for any data type, not only for geodata. For example, texts can also be searched for specific terms with the algorithm, and the hits are automatically saved. If at this point of the ingestion process the data is still not uniform enough for an evaluation, the data can additionally be classified with the reclassification module. The principle of reclassification is that we query the data according to certain criteria and use this query to automatically classify the results. Such reclassifications are useful when a project wants to organise a lot of complex data and prepare it for data analysis. Automatic reclassification is also very important for maintaining data consistency, as inconsistent or incorrect data entries can be automatically filtered out. Let us take an example. In our case study, the four projects use different categories and names for academic degrees, even though the degrees were already classified in a remarkably uniform way in pre-modern Europe. Of course, one must take into account here that the same degree could have a different quality depending on the university. But we can also overcome this challenge with reclassification by going from the general to the specific. In the following, we look at the academic degrees of jurists. For the reclassification of jurists, we first create a query in Nodegoat and check whether the expected results appear. Then we use this query in the reclassification module and give it a term. This term is then used to classify the data found. The reclassification is not bound to certain types of data. We can classify people, places, observations, texts, time periods or anything else. In our case, we can classify all jurists of the four projects accordingly and thus quickly obtain a general overview of the areas of origin and study of such persons. Of course, we have to take into account that the projects cover different periods, and we have to look at their definitions of jurists in detail. This is done after the overview and is part of the qualitative data evaluation, where we can further differentiate the data and, for example, reclassify scholars who had studied Roman law. For this group of people, we can then highlight the places of origin in colour and thus see the spaces of origin and communication of these jurists. If we have further data available, for example information on the activities of the jurists as in the Repertorium Academicum Germanicum (RAG), we can also see where the legal knowledge was transferred with the persons – whom we regard as ‘knowledge carriers’. In this way, we could show the spread of law, and Roman law in particular, in pre-modern Europe.
Of course, this also works for the other disciplines such as medicine or theology, as well as for the large number of scholars holding a degree as ‘Magister Artium’. It is then up to the researcher to classify, interpret, describe and, if necessary, correct and refine the results of the reclassification. With the procedure described, however, we are able to combine quantitative and qualitative research and thus reconstruct European knowledge spaces. With data ingestion, we can not only import and evaluate data from research projects, but also, in parallel, other Linked Open Data to supplement or enrich the dataset of our metadatabase, for example by querying the data on Wikidata that is linked to a person in our metadatabase. It would go beyond the scope of this post to look at all the data analysis features in Nodegoat, for example the network analysis modules, which can of course also be applied to our metadatabase. In any case, the data can easily be searched in full text and/or filtered specifically with complex, combined queries. It is also possible to query the data spatially by drawing a polygon in GeoJSON and simply copying its code into the database field that contains the geoinformation. This feature enables us to reconstruct, search and analyse specific knowledge spaces. Such spaces, like other results (data sets), can be published in so-called data scenarios using an internet module that is configured in the back end of Nodegoat. A scenario is understood to be a data set with the corresponding visualisation settings.
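To make the reconciliation step described above a little more concrete, here is a heavily simplified sketch of matching imported place names against a reference list that carries GeoNames identifiers and coordinates (plain JavaScript with made-up data; the actual reconciliation module works on the ingested Nodegoat objects and offers far more refined matching):

// Toy reconciliation: exact, case-insensitive matching of place names
// against a reference list with GeoNames IDs and coordinates (invented data).
const referencePlaces = [
  { name: 'Paris',   geonamesId: 1111111, lat: 48.85, lon: 2.35 },
  { name: 'Orleans', geonamesId: 2222222, lat: 47.90, lon: 1.90 }
];

const importedNames = ['paris', 'Aurelianum', 'Orleans'];

function reconcile(name) {
  const normalised = name.trim().toLowerCase();
  return referencePlaces.find(place => place.name.toLowerCase() === normalised) || null;
}

for (const name of importedNames) {
  const match = reconcile(name);
  console.log(name, '->', match ? 'GeoNames ' + match.geonamesId : 'no match');
}

A name such as ‘Aurelianum’ finds no match here, which is exactly the kind of case that the vocabulary and matching options of the real module are meant to handle.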

Conclusion: The SPARK project makes it possible to link databases (research data) in a simple and transparent way and to harmonise and analyse the linked data with sophisticated tools – may the data be with you!

SNSF SPARK workshop: data ingestion and harmonization

Workshop on the results of the SPARK project of the Swiss National Science Foundation (SNSF) “Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions” (http://p3.snf.ch/project-190161). The workshop will take place in four sessions via Zoom. In sessions 1 and 2, participants will be introduced to the functions of the virtual research environment (VRE) Nodegoat, create a data model and import a data sample, which they will use in sessions 3 and 4 for the exercises on data ingestion. At the end, each participant will have a working VRE that can be used for further research or in teaching. It is highly recommended to attend all 4 sessions. The workshop is primarily aimed at members of the Phil.-Hist. faculty of the University of Bern, but is generally open to other interested parties on planet earth. The Zoom link to the workshop will be sent to participants after registration. The workshop will be led by the Nodegoat developers Pim van Bree and Geert Kessels (LAB1100), together with Kaspar Gubler, Institute of History, University of Bern.

Members of the Phil.-Hist. faculty of the University of Bern can apply for a VRE free of charge at the following link: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html. Other participants can obtain a VRE at nodegoat.net, or get the Nodegoat open source code on GitHub: https://github.com/nodegoat/nodegoat

The workshops always take place on Wednesdays from 2 – 5 pm. The workshops are recorded and can therefore be re-watched if a session cannot be attended.

Dates: 28.04.2021 / 05.05.2021 / 12.5.2021 / 26.5.2021

Registration for the workshop until 25.04.2021 to: kaspar.gubler@hist.unibe.ch

Session 1: Data Modelling (people and books)

In session 1 we get to know the central functions of Nodegoat (NG). Since NG is managed via the web browser, no additional software needs to be installed on the computer and location-independent working is no problem. With NG, research projects can be created, research data managed, analyzed, visualized, published on the Internet and shared with other researchers without any special programming skills. We will create our first data model, which we will fill with data in the next sessions. As we will see, NG is not a rigid ”boutique solution” that only fits a specific question or data model. Students and researchers can use NG to create custom data models based on their specific questions.

Session 2: Importing Data (including a VIAF id for each person)

In Session 2, we will import our first data sample and an identifier (VIAF) for each person. So we will work with a prosopographically oriented data model, but we can easily extend it for other research questions.

Session 3: Ingesting Biographical Data (like other IDs, or birth/death)

In session 3, we will learn about the principles of “Dynamic Data Ingestion”. There are numerous data sources on the Internet, whether for research or for the interested public. What types of data sources are there? And what about data quality? We will first explore these questions before connecting our Nodegoat environment to a typical data source via an interface (API) and importing the first test data. This data can include further identifiers of persons or biographical information such as dates of birth and death.

Session 4: Ingesting Related Data (like published books of people)

In Session 4, we will enrich the data on the persons and check what other data is available on the Internet – for example, publications, which we can add and, if we have full texts, additionally analyze in nodegoat. Finally, we will also look at the data harmonization capabilities in nodegoat.

Data Ingestion Episode II – The Empire strikes back, but not for long

The second test phase of the SNSF SPARK project (Episode 2) on Dynamic Data Ingestion (DDI) and server-side data harmonisation has been completed. Data from as many different data sources as possible were collected and stored centrally on the DDI server according to the spider principle, resulting in a new meta-database.

Fig. 1: An example of data collection via the DDI module, test phase 2: standardised biographical data and related publications.

However, the tests again drew our attention to two core problems of Linked Open Data (LOD) in research: the “Empire” strikes back against LOD on a technical and on a content level, and the two are interdependent. On the technical level, the most important prerequisite for data exchange is missing, namely a kind of “industry standard” for a uniform query language, a standardised output format and a standardised structure of the data published via an Application Programming Interface (API). Many data projects have individually designed APIs and project-specific data structures that do not comply with international standards, and/or adequate documentation of the data output is lacking. At the content level of LOD, we are faced with challenges due to the heterogeneity of research data, especially in the humanities, which inevitably leads to inconsistent database structures and makes data exchange more difficult. Despite numerous international initiatives in recent years, no research project has yet been able to gain significant or groundbreaking scientific knowledge through Linked Open Data, especially not in the Digital Humanities. In addition to the technical and content-related obstacles, the ‘Empire’ has been quite successful in blocking communication between humanities scholars and software developers. Any researcher who has attended the relevant humanities conferences knows the long discussions on the possibilities and potential of linking databases in the Digital Humanities. In the end, it all comes down to good ideas and declarations of intent – without a single data set having been linked. At the other extreme are initiatives that store large amounts of LOD in a meta-database and reflect on whether this data represents information or already knowledge. This is an important question, as LOD has been praised in the scientific community of the so-called ‘Semantic Web’ as pointing the way to the future of an ‘Internet of Knowledge’, an Internet from which users can retrieve data in a structured and standardised way and transform it into information and knowledge. But we have not yet reached that point. One important project that pursues these goals is Wikidata, a sister project of Wikipedia. Wikidata offers open, international standards for the storage, sharing and exchange of data. In order to exchange and harmonise data, projects would therefore have to store and document their data on Wikidata. This requires efforts that not every project can or wants to make. Thus the situation in the Digital Humanities is still more or less the same when it comes to exchanging LOD: on the one side is the humanities spirit that floats on clouds with brilliant ideas for linking data, and on the other side is the analytical developer who collects highly complex data with a down-to-earth approach. The experience from many conferences shows that communication between cloud and earth has hardly been possible so far. Both sides send out signals that are usually misunderstood by the other side – people speak different languages and do not understand each other, even if they mean the same thing. But how can we bring both worlds together and install a translation board (a Rosetta Stone) between cloud and earth? One such board is the DDI module developed as part of the SNSF SPARK project, in which a graphical interface facilitates communication between humanities scholars and developers.

Fig. 2: Definition of the Linked Data Resource, in this example the API of swissbib (catalog and data hub of Swiss libraries, which will soon be replaced by a new version)

In the interface of the DDI module, the Linked Data Resource can be defined and queried for a sample data output, which can then be used to assign the data fields of the Linked Data Resource (e.g. data from another research project, a library or an archive) to one’s own research database, which thus becomes a meta-database consisting of data from various data sources.

Fig. 3: Running a test query on the swissbib API resource

The advantage of a graphical interface for data ingestion lies in the visual communication: researchers and developers (or researchers experienced in IT) jointly define the interface (API) and immediately see the result, the data output, in the test query. With the test query, it is visible to all participants which database fields and contents are actually present in the data source. Researchers and developers can then use the data of the test query as a template for mapping the database fields of the data source onto the new meta-database. This type of visual communication leads to fewer misunderstandings, as we have already seen in the test series in the context of the SNSF SPARK project.

Fig. 4: Mapping source (right) and target (left) database fields

The test query also shows whether the data is compatible with your own question, whether it is meaningful – and to what extent these data structures must be compared with those of other data sources in order to obtain significant scientific results. This brings us to the crucial point: collecting data is one thing, but harmonising data (structured and unstructured) from various data sources in order to be able to evaluate it is the final, and often too high, hurdle, especially in the humanities. Therefore a translation tool for harmonising the data had to be integrated into the SPARK project: after the data has been collected by the DDI module, the reconciliation module is used for data harmonisation. The module has an algorithmic pattern matching (named entity matching) function that identifies predefined terms or categories (in the sense of a controlled vocabulary) in the data, makes suggestions for assignments or can automatically store the matched terms in the database. This also means that one can see all relations of a term to the texts in which it was found. This matching of vocabularies (or keyword spotting) has great potential not only for data harmonisation but also for the structuring and analysis of heterogeneous data in general, including, for example, texts available as OCR (Optical Character Recognition) output, generated by specialised OCR software like Transkribus. Not only researchers, but also libraries and archives will be able to use these functions to make their (handwritten) texts more accessible to the public, for example in the form of data visualisations such as the following example of publication locations.

Fig. 6: Publication locations of books published in 1540, from the swissbib API resource, found with pattern matching (reconciliation) in text strings

The results (locations) are automatically linked to geo-references, allowing a map of the publication locations to be created immediately after the reconciliation process, in this example with a historical background map (Mercator 1607). In addition to publication locations, authors or specific contents of publications can be found and linked by means of pattern matching. This is just one example; the process of data ingestion and data reconciliation is of course not limited to certain data types. In the third and final phase of the SPARK project (Episode 3), different scenarios for data harmonisation will be run through to examine to what extent data harmonisation can lead to new research questions, especially in terms of a heuristic approach.
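As a toy illustration of the vocabulary matching (keyword spotting) described above, the following sketch scans free-text strings, such as catalogue entries or OCR output, for terms from a small controlled vocabulary (invented data; the actual reconciliation module additionally scores matches, makes suggestions and stores the links):

// Match a controlled vocabulary against heterogeneous text strings
// and report where each term was found (toy example, invented data).
const vocabulary = ['Basel', 'Bern', 'Zürich'];

const records = [
  { id: 1, text: 'Gedruckt zu Basel im Jahr 1540' },
  { id: 2, text: 'Getruckt in der statt Bernn, 1540' }
];

for (const record of records) {
  for (const term of vocabulary) {
    const pattern = new RegExp(term, 'i'); // simple case-insensitive pattern matching
    if (pattern.test(record.text)) {
      console.log('Record ' + record.id + ": matched '" + term + "'");
    }
  }
}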

Data Ingestion Episode I – A New Hope

Data ingestion can be described as the process of dynamically collecting and importing data for storage and immediate use in a database. The data ingestion process starts by prioritizing and validating data sources and assigning data items to the correct database fields. This process can be particularly challenging when working with large and complex data sets such as those we are dealing with in the first test phase (Episode 1) of the SNSF SPARK project on dynamic data ingestion. As planned for Episode 1, the ingestion module was completed and implemented by Pim van Bree and Geert Kessels (LAB1100) in the virtual research environment (VRE) nodegoat. This module offers the user an intuitive interface with (almost) no coding, except for setting up the SPARQL / API queries to connect to the different external sources. Since May 2020, Kaspar Gubler has been using the module to compile a data sample of 20k persons from various databases within the research area of contextualized prosopography – for the final test series (Episode 4) we will compile a data sample of 500k people. To compile the first sample of 20k people, we used different identifiers like the GND (Gemeinsame Normdatei), the VIAF (Virtual International Authority File) and of course Wikidata, the innovative and forward-looking sister project of Wikipedia. On Wikidata you can find various interesting data samples like the Cambridge Alumni database. The screenshot shows an example of how the ingestion process is captured in the module’s interface. In principle, an external data source is queried from the module based on identifiers and/or database values, and the external data is mapped to the database fields of the ingestion database in nodegoat.

In this example the identifiers of the Cambridge Alumni database are used to retrieve and map the data. The ingestion module is not limited to certain external data sources: data from image databases can be integrated as well as literature references or knowledge and information of different provenance. The extension and enrichment of the data sample will be tested in Episodes 2–4, as will the harmonization of external data at the server level by means of scripts, for example format adjustments for dates. In an international network of Digital Humanities database projects, for which Kaspar Gubler acts as secretary, a data harmonization strategy has been worked on for years. With the ingestion module, there is now a new hope that, if the SNSF SPARK project is successful, this network will finally be able to merge its data and thus develop new research questions and simplify worldwide cooperation.
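By way of illustration of the mapping just described, the step of assigning external data to the fields of the ingestion database could look like this in script form; all field names here are hypothetical, and in the module itself the assignment is done through the graphical interface rather than in code:

// Hypothetical mapping of an external record to the database fields of the
// ingestion database (all field names invented for illustration).
const externalRecord = {            // as it might come from an external API
  id: 'ext-0001',
  surname: 'Smith',
  forename: 'John',
  matriculation: '1604'
};

const fieldMapping = {              // external field -> target database field
  surname: 'family_name',
  forename: 'given_name',
  matriculation: 'year_of_matriculation'
};

const ingestedObject = {};
for (const [sourceField, targetField] of Object.entries(fieldMapping)) {
  ingestedObject[targetField] = externalRecord[sourceField];
}
console.log(ingestedObject);
// { family_name: 'Smith', given_name: 'John', year_of_matriculation: '1604' }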

Kick off SPARK Project: Dynamic Data Ingestion

On February 1 2020, this SPARK project officially started. Together with Geert Kessels and Pim van Bree (LAB1100) the kick off meeting took place in Montreux on Lake Geneva and the milestones for the project were set. Core tasks are the development of a new module for the virtual open source research environment nodegoat (github) for central data harmonisation and the creation of a common ontology with selected database projects. The kick off it made clear that we have to do the module development and the harmonization of the database structures in small steps and in very close coordination, which is why we decided for an agile software development and project management. The progress of the project can be followed in this blog. The project will also be presented at several events and conferences, such as the Research Day of the University of Bern (March 2020), in a panel at the Annual Congress of Italian Medievalists (Bertinoro, June 2020), in a presentation at the Annual Congress of Atelier Heloise, a European network on digital academic history (Bologna, September 2020) and at a conference on the use of virtual research environments (VRE) with ancient manuscripts (Dorigny, September 2020) as well as at a conference of the the Swiss Academy of Humanities and Social Scienceson on ‘knowledge locations’ (Bern, October 2020), which i will co-organise. The public SPARK workshop with data testing and results is planned for november 2020 in Bern, later a public lecture on the results will be held at the University of Bern.