Author Archives: kgubler

The coffee break as a driver of science: Nodegoat @ Uni Bern (2017-2021)

During the COVID-19 pandemic, the coffee break was sometimes referred to as the ‘driver of science’: the isolation of the home office made people realise how important these informal, break-time interactions in the corridors, on the stairs and in the canteen are for scientific exchange. The success story of the virtual research environment Nodegoat at the University of Bern can in fact also be traced back to a coffee break. It took place in 2017 in the research pool of the Institute of History. The starting point was a database migration that I had to manage that year for the digital research project Repertorium Academicum Germanicum (RAG), a major project run by research groups at the universities of Bern and Giessen (D). The project needed new software for data entry and data visualisation, as the previous system was getting on in years and could no longer be brought up to date at reasonable cost. After some research into potential solutions, it turned out that no software met the complex requirements, and developing our own software was out of the question for lack of time. Instead of researching further, I preferred to let things rest for a moment and reconsider the overall situation over a cup of coffee.

Nothing was more obvious than to take the opportunity to drop in on my colleague, who was working on her dissertation for another project in the neighbouring office, and chat about this and that, about software and database migrations. After I had briefly explained the tricky situation, my colleague (a historian) immediately asked whether we had also considered Nodegoat. She had attended a Nodegoat workshop in Düsseldorf given by the developers of Nodegoat (LAB1100) to gain insights into network analysis, which she was considering for her dissertation. To my surprise, I had never heard of this software. Nodegoat, she continued, specialises in managing and visualising research data, with both functions integrated into the same software. Amazed and slightly electrified, I immediately set about testing Nodegoat in detail. LAB1100 provided a test environment. Surprisingly, Nodegoat offered exactly the range of functions we needed for the RAG.

Now it only remained to convince the project’s senior management, which was initially sceptical, of Nodegoat. A workshop with LAB1100 in Bern finally brought the breakthrough: a new data model was created and the data migration was analysed in detail. The starting signal was given, and three months later the previously used database was history. On 1 January 2018 the time had come: Nodegoat was put into operation by the teams in Bern and Giessen. This was followed by training and a series of data cleansing sessions, which progressed quickly thanks to the new interface, which displays the data clearly and makes it correspondingly easy to detect irregularities.

The collaboration with LAB1100 proved to be a stroke of luck. In close coordination with the developers, the RAG was able to initiate various software modules. It started with an extension of the Nodegoat interface (API) to speed up the data migration. After the migration came the complex module for recording and visualising approximate temporal information, as well as various modules for data visualisation and data export. The first project year of the RAG with Nodegoat was satisfactory for the teams, and the training effort for the new working environment remained modest.
Data could now be entered much faster and more fluently than in the old system (MS Access as frontend, MS SQL Server as backend). For the first time, visualisations provide a complete overview of the immense data stock collected by hand over the years (60,000 persons, around one million pieces of information on curricula vitae, careers and institutions). The figure shows, for example, the scholars’ regions of origin by university in the Old Empire, 1250-1550; the University of Krakow was also included because of its outstanding importance in education (source: rag-online.org).

When, in 2019, the University of Bern promised the faculties a budget for corresponding projects as part of its digitisation strategy, the obvious step for me was to bring Nodegoat into play. The reason was that Nodegoat offers a digital infrastructure for the humanities that is not project-specific. By contrast, research projects in the humanities today still often develop their own software, tailored to a single project, which therefore only works as an isolated solution and cannot be used by other projects. Nodegoat, on the other hand, can be used by all disciplines (not only) in the humanities, because the data model can be defined individually and thus adapted to different sources and research questions. Another plus is the global aspect: the development of Nodegoat, which is available as open source software, is supported jointly by various educational institutions and projects worldwide. This international collaboration on a flexible research environment is, in my opinion, the key to a sustainable digital infrastructure (not only) in the humanities. The worldwide use of the same research environment, which is moreover specialised in the visualisation of research data, also automatically promotes inter- and transdisciplinary exchange and research collaboration in general.

These arguments were also convincing at the Institute of History, and we were able to submit the “Application for Strategic Faculty Funds, Funding Line III, Digitisation” to the university, under the project title “Establishing a shared Virtual Research Environment (VRE)”. The funds were granted, and Nodegoat GO went live as a multi-user platform in April 2020. To facilitate the launch, I took over the Nodegoat support position and was able to advise numerous projects from a wide range of humanities fields (history, archaeology, German studies, English studies, music and theatre studies): from the sources to the data model and the visualisations. In mid-2021, after Nodegoat GO had got off to a good start, I handed over the support position to a student as planned, with the intention of promoting the transfer of knowledge about Nodegoat (and digital skills in general) at all levels: support from students for students and projects. Incidentally, the University of Bern is the first university to have created a position for Nodegoat support. It is a pioneering act that has not failed to have an impact: as of November 2021, around 50 projects with around 100 users are working with Nodegoat GO at the university, alongside other, larger projects with their own Nodegoat installations. And all this because of a coffee break.

Cite this article as: Kaspar Gubler: The coffee break as a driver of science: Nodegoat @ Uni Bern (2017-2021), in HistData, 07/12/2021, https://histdata.hypotheses.org/2559


Nodegoat as a tool for digital editions: lecture by Kaspar Gubler

Lecture series at the University of Bern: Insights into the Digital Humanities → Focus Editions

Participation via Zoom requires registration at the following link:

https://unibe-ch.zoom.us/meeting/register/u5MvdeCsqDMuHtRZuTJO4j0FdKmO0uZzGNgL

6 December 2021

Kaspar Gubler: Nodegoat as a tool for digital editions

“The presentation introduces the functions of the virtual research environment Nodegoat for processing, analysing and editing texts. So far, Nodegoat has been known in the digital humanities mainly for data management, network analysis and the visualisation of research data. Its text processing tools, by contrast, are familiar only to the initiated: with Nodegoat, for example, information can be automatically extracted from texts and stored (pattern matching). Texts can also be marked up manually or automatically, and it is even possible to digitally reconstruct lost texts (libraries) on the basis of cross-references. Sophisticated functions for placing texts with vague or missing dates in chronological order round off the range of functions.”

Followed by a crash course (‘Creating a digital edition with Nodegoat’). No previous knowledge is required. Participants should have access to a Nodegoat research environment, which can be requested here: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html. Or, for people outside the University of Bern, here: nodegoat.net.

Poster_Ringvorlesung_HS2021_A3

 

Links and information on the lecture series:

An exemplary project that shows the possibilities and functions of Nodegoat for a digital edition is the Encyclopedia of Romantic Nationalism in Europe.

https://ernie.uva.nl/viewer.p/21/56/object/131-158438

For this project, the data (tagged texts) were exported from Nodegoat via its interface (API) and converted into XML to prepare them for publication in book form. The books, which largely correspond to the online version of the project, are available in two volumes.

https://spinnet.eu/ernie/erniethebook

Each item in this project corresponds to an object in Nodegoat. Each object in Nodegoat is automatically assigned a unique identifier when it is created. This makes it easy to cite the articles, even with a Digital Object Identifier (DOI), as seen here.

How can we ‘tag’ texts in Nodegoat? Background: in Nodegoat we can define different types of content elements in the data model for each object (and for the categories that describe or classify the objects). If we want to tag a text, we select the element ‘Text (Tags & Layout)’ in the data model.

In the data area, this text field looks like this, here filled with an example text for a pilgrimage.

In this example, we are dealing with the object type ‘Document’, in which we record and tag the texts as individual objects. In Nodegoat’s Conditions we can highlight in colour certain terms that are of particular interest to us. Clicking on a coloured term takes us to the linked object, for example a person. The ‘Cross-Referencing’ tab lists all objects to which such links have been created from our text. The links also make it possible to generate a network of this text straight away and to display the links in that form (using the methods of network analysis).

The texts in Nodegoat are not tagged in XML but in HTML. The special feature: in the HTML code, the objects that have been tagged appear in light blue together with their identification numbers. Thanks to these identification numbers, which are stored in the database, tagging gives us a clearly defined structure of the text that we can use immediately for analyses, for example the network analysis mentioned above, map visualisations and other evaluations. Can Nodegoat also produce data in XML format, for example for a publication? Yes. This has already been done; see the example of the encyclopaedia mentioned at the beginning. The data, in this case the tags with their identification numbers, are downloaded from Nodegoat in JSON format via the API and converted into XML with an XML parser. This process is not difficult, because the Nodegoat data is clearly structured and defined in JSON. Could an XML editor be developed for Nodegoat, so that the tags could be saved directly in XML instead of HTML? Yes, that is possible. However, tagging with HTML plus unique identification numbers should not be underestimated: particularly with regard to producing and providing research data, it offers certain advantages over XML, whose strengths lie more in publishing. The formats should not be played off against each other here, though; the focus should be on research and thus on the question of which new insights we can gain with digital tools. In doing so, we should not only collect data but also analyse it, ideally across different levels of contextualisation. Nodegoat offers easy access to such analyses, especially in teaching.
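
To make the JSON-to-XML step a little more concrete, here is a minimal sketch in JavaScript. It does not use the real Nodegoat response format; the object structure and the data-id attribute are invented for illustration only. The point is simply that HTML tags carrying object identifiers can be converted into XML references with a few lines of code.

const objects = [
  // Invented example: the fields id and body do not reflect the actual Nodegoat export format.
  { id: 158438, body: 'Pilgrimage to <span data-id="4711">Jerusalem</span> by ship.' }
];

// Convert the assumed HTML tags into XML <ref> elements and wrap each object in an <object> element.
const xml = objects.map(o => {
  const body = o.body.replace(/<span data-id="(\d+)">([^<]*)<\/span>/g, '<ref target="$1">$2</ref>');
  return `<object id="${o.id}"><body>${body}</body></object>`;
}).join('\n');

console.log(xml);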

Nodegoat has various other functions for editing and analysing texts and images. In addition to tagging texts, terms can, for example, be defined as regular expressions (regex) in Nodegoat’s Conditions and immediately highlighted in colour in the text (brown, yellow, blue).

To colour the term ‘Schiff’ blue via the Conditions, we enter the following in the object’s descriptions: (Schiff), and then this formatting: <span style="background-color: #81BEF7;">$1</span>. The following illustration shows where to enter this information.
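
To illustrate what this condition does, the same pattern and replacement can be applied to a sample sentence outside Nodegoat (the sentence is made up):

const text = 'Die Pilger bestiegen in Venedig das Schiff.';
// Same pattern (Schiff) and replacement as in the condition above.
const highlighted = text.replace(/(Schiff)/g, '<span style="background-color: #81BEF7;">$1</span>');
console.log(highlighted);
// Die Pilger bestiegen in Venedig das <span style="background-color: #81BEF7;">Schiff</span>.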

Another useful function for working with texts is Nodegoat’s ‘Data reconciliation’ module. It allows us to automatically search object descriptions for certain terms that we have previously defined in a vocabulary; the hits are stored in the database straight away. In this way we can also search texts and have the terms that are found automatically ‘tagged’ and saved, although the algorithm is currently set up to look for one specific term in a text rather than for the total number of occurrences of that term.

Can I import text into Nodegoat? Yes. The easiest way is to paste the texts into the text field via copy and paste. If you have many texts, you can either upload them via the interface (CSV format) or import them via the API. The latter can be combined with Transkribus. This means we can first have our texts transcribed automatically (with OCR) in Transkribus, for example in the web interface of Transkribus lite, and then import each page of our document as an object into Nodegoat using the ‘Data Ingestion’ function. Afterwards we can tag the texts in Nodegoat and analyse them with queries or visualisations. We will not go into the details here, as a tutorial on this will follow at a later date. The figure below shows, as an example, some imported pages from Transkribus with their IDs and page numbers. We can therefore import entire works from Transkribus into Nodegoat.
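
As an illustration of the CSV route, the following sketch collects plain-text page exports into a CSV file with one row per page, which could then be uploaded via Nodegoat’s CSV import. The file names (page_001.txt, page_002.txt, … in a folder ./pages) and the column names are assumptions for illustration, not a fixed convention.

const fs = require('fs');
const path = require('path');

// Folder with one plain-text export per transcribed page (hypothetical naming scheme).
const dir = './pages';
const files = fs.readdirSync(dir).filter(f => f.endsWith('.txt')).sort();

// Basic CSV escaping: wrap in quotes and double any quotes inside the text.
const escape = s => '"' + s.replace(/"/g, '""') + '"';

const rows = files.map((file, i) => {
  const text = fs.readFileSync(path.join(dir, file), 'utf8');
  return [escape(file), i + 1, escape(text.replace(/\r?\n/g, ' '))].join(',');
});

// One row per page: document (file name), page number, full page text.
fs.writeFileSync('pages.csv', 'document,page,text\n' + rows.join('\n'));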

This was only a first overview of the functions that Nodegoat offers for working with texts. Finally, it can be pointed out that a web interface is integrated in every Nodegoat environment (domain), with which one’s own digital edition can be made accessible to the public via the web. How to configure such a web interface will be explained later on this blog.

Nodegoat Workshop – Get Linked Open Data into Nodegoat

“These workshops follow a workshop series earlier this year, organised in collaboration with the University of Bern in the framework of the SNSF SPARK project ‘Dynamic Data Ingestion’, as well as two of the NEP4DISSENT Summer Schools …”

https://nodegoat.net/blog.s/56/linking-your-historical-sources-to-open-data-workshop-series-organised-by-cost-action-nep4dissent

I can highly recommend the workshops taking place on 13 and 21 September 2021. In particular, they will show how to import Linked Open Data into Nodegoat via an interface that does not require any special programming skills, allowing you to devote your energy and brain power to the structure, content and consistency of the imported research data.

 

SPARK Workshop on Dynamic Data Ingestion

Programme Session 4 (26-05-2021)

“Introduction by Kaspar Gubler

Welcome to the fourth and final session of the SPARK Nodegoat Workshop. We are very pleased with the participation and the extensive feedback on the workshop. I can only recommend that you organise such a workshop yourself; with video conferencing this is no longer a problem. For example, several projects could get together and organise a workshop, perhaps on a specific topic related to Nodegoat.

A comment on Nodegoat as a research infrastructure in the humanities: Nodegoat is jointly funded by various projects and universities, a model that I believe is the solution for the long-term development of a digital infrastructure for the humanities. It is crucial that such an infrastructure can be used by different disciplines and not just by a single, specific project. If only one project can use a piece of software, it is not a true infrastructure. In contrast, as we have seen, Nodegoat can be used by different humanities disciplines. Another advantage of Nodegoat as a research infrastructure is that it does not require difficult software installations or programming skills. An infrastructure like Nodegoat thus allows users to focus on research; they do not have to deal with technical matters first. I think this is a big problem in the digital humanities: there is too much focus on the technical side and not enough on our core competence, which is to answer research questions with digital methods. In my opinion, we too often only talk about the possibilities of digital methods instead of delivering research results. It’s like constantly cleaning your glasses instead of just putting them on.”

14:00 Welcome and recap of last week’s session

14:15 Ingestion of publications from the Dutch Royal Library SPARQL endpoint

14:50 Break

15:00 Ingestion of SameAs references from lobid.org

15:15 Ingestion of Wikimedia Commons URLs from Wikidata

15:50 Break

16:00 Ingestion (TBD)

16:35 Q&A

Slides:

 

Linked Data Resources Suggestions

Linked Data Resources

Name: Query the KB SPARQL endpoint based on VIAF ID
Protocol: SPARQL
URL: http://data.bibliotheken.nl/sparql?default-graph-uri=&query=
URL Options: &format=json&timeout=0&debug=on
Query:

SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)

WHERE {

?pub schema:author ?person.

[query=viaf]?person schema:sameAs <http://viaf.org/viaf/[variable=id]71399367[/variable]>.[/query]

?pub schema:name ?name.

?pub schema:author ?author_node.

?author_node schema:sameAs ?author_ids.

?pub schema:publication ?publication_node.

?publication_node schema:startDate ?date.

}

GROUP BY ?pub ?name ?date

Conversion INPUT

http://www.wikidata.org/entity/Q123034, http://viaf.org/viaf/71399367

Script:

// INPUT contains the URIs delivered for conversion (see the INPUT example above).
const uris = INPUT;

// Pick out the identifier that follows 'viaf/' in the VIAF URI.
const arr_viaf = uris.match(/viaf\/(\w+)/i);

const viaf_identifier = arr_viaf[1];

// Hand the extracted identifier back to the ingestion mapping.
OUTPUT = {'viaf_identifier': viaf_identifier};
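
Applied to the INPUT example above, the regular expression captures the characters that follow ‘viaf/’, so the conversion returns OUTPUT = {'viaf_identifier': '71399367'}.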

Original Query:

SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)

WHERE {

?pub schema:author ?person.

?person schema:sameAs <http://viaf.org/viaf/71399367>.

?pub schema:name ?name.

?pub schema:author ?author_node.

?author_node schema:sameAs ?author_ids.

?pub schema:publication ?publication_node.

?publication_node schema:startDate ?date.

}

GROUP BY ?pub ?name ?date
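
To preview what the endpoint returns outside Nodegoat, the parts of the resource can be combined by hand; as far as I can tell, the request is simply URL + URL-encoded query + URL options. A sketch in JavaScript (Node 18+ with global fetch), using the original query above:

const url = 'http://data.bibliotheken.nl/sparql?default-graph-uri=&query=';
const urlOptions = '&format=json&timeout=0&debug=on';

// The original query from above, without the Nodegoat [query]/[variable] tags.
const query = `SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)
WHERE {
  ?pub schema:author ?person.
  ?person schema:sameAs <http://viaf.org/viaf/71399367>.
  ?pub schema:name ?name.
  ?pub schema:author ?author_node.
  ?author_node schema:sameAs ?author_ids.
  ?pub schema:publication ?publication_node.
  ?publication_node schema:startDate ?date.
}
GROUP BY ?pub ?name ?date`;

// Combine the parts and print the standard SPARQL JSON result bindings.
fetch(url + encodeURIComponent(query) + urlOptions)
  .then(res => res.json())
  .then(data => console.log(data.results.bindings))
  .catch(err => console.error(err));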

 

Name: Query the lobid.org API for ‘SameAs’
Protocol: API
URL: https://lobid.org/gnd/
URL Options: .json
Query: [query=id][variable]118637533[/variable][/query]
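
The same lookup can be tried directly in a script: the request URL is just the base URL plus the GND identifier plus ‘.json’. A small sketch; the field names ‘sameAs’ and ‘id’ reflect lobid’s JSON output as I understand it and should be checked against an actual response:

const gndId = '118637533'; // the GND ID used in the example above

// Fetch the entity and print its sameAs links; response field names are assumptions.
fetch(`https://lobid.org/gnd/${gndId}.json`)
  .then(res => res.json())
  .then(data => (data.sameAs || []).forEach(link => console.log(link.id)))
  .catch(err => console.error(err));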

 

Name: Query Wikidata for Wiki Commons URLs based on Wikidata ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT (CONCAT("https://commons.wikimedia.org/wiki/Category:",STR(?commons)) as ?commons_link)

WHERE {

<[query=id]http://www.wikidata.org/entity/[variable=id:uri-identifier]Q60866[/variable][/query]> wdt:P373 ?commons.

}

URI Template: http://www.wikidata.org/entity/[[identifier]]

 

Name: DiJeSt
Protocol: SPARQL
URL: http://tdk-jbs.cs.technion.ac.il:8890/sparql?default-graph-uri=&query=
URL Options: &format=application%2Fsparql-results%2Bjson&timeout=0&debug=on&run=+Run+Query+
Query:

SELECT DISTINCT ?book ?title ?author_name

WHERE {

?book <http://purl.org/dc/terms/title> ?title .

[query=name]FILTER regex(?title, "[variable]קודש[/variable]", "i")[/query]

?book <http://purl.org/dc/terms/creator> ?author_node .

?author_node <https://schema.org/name> ?author_name .

FILTER (lang(?author_name) = 'und-hebr')

}

OFFSET [[offset]] LIMIT [[limit]]

 

Name: The Getty Thesaurus of Geographic Names
Protocol: SPARQL
URL: http://vocab.getty.edu/sparql.json?query=
URL Options:
Query:

SELECT DISTINCT ?place ?label ?parents (GROUP_CONCAT(?altlabel;SEPARATOR=",") AS ?altlabels) {

?place skos:inScheme tgn: .

?place luc:term "[query=name][variable]lemberg[/variable][/query]".

?place gvp:prefLabelGVP [xl:literalForm ?label].

 

OPTIONAL { ?place xl:altLabel [ gvp:term ?altlabel ] }

OPTIONAL { ?place gvp:parentStringAbbrev ?parents }

}

GROUP BY ?place ?label ?parents

OFFSET [[offset]] LIMIT [[limit]]

 

Nodegoat Day 2021 @ Unibe

projects / sources / data / networks / people

From source to visualization: Data modeling and analysis with Nodegoat

Friday, 04 June, 9-17 h via Zoom

Programme

9:00 Introduction by Kaspar Gubler (Universität Bern, Historisches Institut): SNSF SPARK project ‘Dynamic Data Ingestion’ for server-side data harmonisation: creating a database of 200k students and scholars, 1200-1800: method, concept and practical implementation

9:30 Simon Bürcky (Universität Giessen, Historisches Institut): Dynastic Networks of the Counts of Solms during the 15th Century, PhD project

10:00 Katharina Vukadin (Universität München, Institut für Kunstgeschichte): Relic Networks in the Early Modern Period: the Wittelsbach collection since 1577, PhD project as part of the ERC Projekt: SACRIMA

10:30 Giulia Iannuzzi (Università di Firenze / Università di Trieste): Plotting European sea routes in the Modern age (1500-1900): modelling, visualising, and linking data in Nodegoat, Global Sea Routes Project

11:00 Discussion / Questions / Partisan round: Opportunity to present your own Nodegoat project or project idea

12:00-13:00 Lunch break

13:00 Daniel Jaquet (Universität Bern, Historisches Institut): Mapping Swiss wars in the Middle Ages (1350-1550) as part of the Project Martial Culture in Medieval Town

13:30 Nina Janz / Sarah Maya Vercruysse / Michel R. Pauly (Université de Luxembourg, Project WARLUX): Using data analysis on recruited Luxembourgers in WWII, https://digiwarhist.hypotheses.org

14:00 Stefanie Mahrer (Universität Bern, Historisches Institut): Transnational Science. Switzerland and Forced Academic Migrants 1933 to 1950, https://forced-academic-migration.net

14:30 Nuno Camarinhas (Universidade Nova de Lisboa, Faculdade de Direito): Mapping justice administration in Portugal and the Portuguese empire (1600-1926), Project: Modern Portuguese judiciary

15:00 Milan Matthiesen (Europainstitut der Universität Basel): The Exterior of Philosophy: On the Practice of New Confucianism, https://europa.unibas.ch/de/forschung/european-global-knowledge-production/the-exterior-of-philosophy/

15:30 Pim van Bree / Geert Kessels (The Hague, LAB1100): Linked Open Data in the humanities: availability, linking and analysis with Nodegoat, https://lab1100.com

16:00 Discussion / Questions / Partisan round: Opportunity to present your own Nodegoat project or project idea

16:30 Apéro virtuel

 

 

 

CfP Nodegoat Day 2021

The Nodegoat Day 2021 will be run as an entirely virtual event via Zoom, hosted by the University of Bern (Switzerland). At Nodegoat Day 2020, only projects from the University of Bern were presented. As this had already attracted an international audience, projects from all over the world will be invited to Nodegoat Day 2021. Reports and impressions from Nodegoat Day 2020 can be found here:

https://www.infoclio.ch/de/tagungsbericht-nodegoat-day-2020

https://histdata.hypotheses.org/1937

Proposals for Project showcases (max. 300 word proposal)

Data visualisations in the Digital Humanities are booming. The visual representation of research data can uncover previously unknown patterns and developments and lead to new insights. At the same time, data visualisation helps research gain more visibility and facilitates interdisciplinary exchange, especially when projects work with the same visualisation software. This is the case with Nodegoat, a multifunctional virtual research environment for managing, analysing and visualising research data. The Nodegoat Day will therefore bring together research projects from very different disciplines. The aim of the conference is to show and reflect on how a digital tool like Nodegoat can be used in humanities research and/or teaching, what influence digital tools can have on formulating and answering research questions, and how they can lead us to new insights and research horizons.

Projects at Nodegoat Day should present experimental, substantial or completed research, provide concrete insights into their conceptual data models and visualisations, and situate their approach within their discipline and the Digital Humanities. Contributions from young scientists are explicitly welcome, as are trans- and interdisciplinary impulses. Special consideration should be given to the data models: the principle of data modelling in Nodegoat is object-oriented and follows actor-network theory. Persons, events, artefacts, places or historical sources are first treated as objects of a horizontal order, which form a network and create a hierarchy only through their relationships to each other. Nodegoat users can define data models, objects and relationships individually and thus realise their own project-specific data structures. Furthermore, a data model can also be aligned with existing reference models, which improves the interoperability of research data.

The design of the data model is of course crucial for the visualisations: every object in Nodegoat can be given geographical and temporal attributes in the model, which can then be analysed and visualised for the research data. The potential of such data visualisation functions (maps, networks, time series) as well as the algorithmic calculations in Nodegoat will be reflected on and discussed at the conference. Overall, there will be numerous opportunities for comparison between the individual projects, from which impulses and exchange across disciplinary boundaries can be expected, as well as networking in the Nodegoat community. Contributors will have 20 minutes for their presentation, followed by 10 minutes for questions and input.

In addition, two longer “partisan rounds” are scheduled for spontaneous ultra-short presentations on projects in the experimental stage, questions about how to run a Nodegoat project, functions of Nodegoat, long-term archiving of Nodegoat data, and questions about life in general, as well as critical reflections on methods and results of digital projects.

The abstracts are requested, along with a brief biographical note, by email no later than May 15, 2021, to Kaspar Gubler (kaspar.gubler@hist.unibe.ch). Feedback will be provided no later than the end of May 2021. The conference language is English.

Guests can register by sending an email to: larissa.achermann@hist.unibe.ch

Projects around the globe are welcome to participate in Nodegoat Day 2021. Map of Nodegoat projects worldwide (selection, December 2020):

https://nodegoat.net/usecases

 

Institutes where Nodegoat projects are running (November 2020):

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 3

12 May 2021

“Introduction by Kaspar Gubler

I would like to welcome in this session all participants who are already familiar with Nodegoat and have therefore skipped sessions 1 and 2 and will now attend sessions 3 and 4. In sessions 1 and 2 we created a data model for people and books and imported data, including geo-coordinates, into Nodegoat by uploading CSV files. Importing data into Nodegoat will also be the central topic of today’s session. We have three ways to import data into Nodegoat.

1) We can upload data into Nodegoat as a CSV file, as we did in session 2.

2) We can import data directly into Nodegoat using a graphical interface without having to upload it, we will look at this process of dynamic data ingestion today.

3) We can import data into Nodegoat via an application programming interface (API), which, unlike 1) and 2), requires programming knowledge.

Of course, we can also start a project in Nodegoat without importing data first, not even geodata. In every Nodegoat installation, the object ‘City’ is already present. In ‘City’ about 130k places are available, which have geo-coordinates as well as a GeoNames-ID. ‘City’ is a collaborative object: all projects of a Nodegoat installation can add and use ‘City’-locations and thus benefit from each other.”

Programme Session 3 (12-05-2021)

14:00 Welcome and recap of last week’s session

14:15 Create Linked Data Resource to query GND

14:30 Create Linked Data Resource to query VIAF

14:50 Break

15:00 Ingestion of VIAF IDs from Wikidata

15:50 Break

16:00 Ingestion of biographical data from Wikidata

16:35 Looking forward to next session

16:45 Q&A

  • Can the mapping then only be done per 1 object? Or can you run it on a set (like reconciliation in open refine) and get unambiguous results automatically? → Yes, both.
  • Can I add/concatenate more of the json fields to the “label”? Because just the preferred name may not be sufficient to identify which one is the correct entry …→ Yes, add/include more of the relevant Values in your response, then open a “filter” dialog if necessary (there the additional fields/values will be shown).
  • Does nodegoat also support APIs that return XML instead of JSON? -> No
  • Wikidata SPARQL Query to only get Gregorian Dates Example: https://w.wiki/3KFq ->thanks!
  • Not really related to this session, but I don’t see this referred to in any of the sessions: is there a nodegoat API from which one can draw the visualizations? Or even simpler ‘embeds’? -> Public User Interfaces support embedding.
    • Does it expose JSON representations of our objects → next session
    • And is there a SPARQL endpoint (I know I would have to specify a kind of mapping)
  • I am trying to create an LD resource from my API. The query https://data.geo-kima.org/api/Variants/PlaceVariants/8964/100/1 works outside nodegoat. How can I split this to fill in the URI and the query in nodegoat? (I get an error message when I enter https://data.geo-kima.org/api/ as the URL and Variants/PlaceVariants/8964/100/1 as the query.) → next session
  • Is it possible to interact with database directly using SQL? → Not by design/purpose, API should be used

Links

https://nodegoat.net/usecases

Slides:

 

Preparation

If you are unfamiliar with the benefits of adding external identifiers to your dataset, please read this guide: https://nodegoat.net/guides/externalidentifiers. This example shows how to update one object at a time; with the data ingestion we can update multiple objects at once (see below, data ingestion with Nodegoat).

Human Readable vs Machine Readable

Browse GND Data

Via Graphical User Interface (GUI):

https://d-nb.info/gnd/118637533 / https://lobid.org/gnd/118637533

Via Application Programming Interface (API): https://lobid.org/gnd/118637533.json

Query GND Data:

Via GUI: https://lobid.org/gnd/search?q=Zwingli

Via API: https://lobid.org/gnd/search?q=Zwingli&filter=type:Person&format=json
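
A short sketch of what the machine-readable variant makes possible: the search URL above can be fetched and processed directly. The response field names used here (‘member’, ‘preferredName’, ‘gndIdentifier’) are my reading of lobid’s JSON and should be verified against the actual output.

// Search the GND for persons named Zwingli and print name + GND ID of each hit.
fetch('https://lobid.org/gnd/search?q=Zwingli&filter=type:Person&format=json')
  .then(res => res.json())
  .then(data => {
    for (const hit of data.member || []) {
      console.log(hit.preferredName, hit.gndIdentifier);
    }
  })
  .catch(err => console.error(err));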

 

Data ingestion with Nodegoat

With the data ingestion in Nodegoat, you can enrich or update multiple objects with Linked Open Data (LOD) from external data sources. This requires two steps: first, we configure a Linked Data Resource in Nodegoat, i.e. a query to the interface where the LOD is available (the data source); second, we configure a data ingestion process, i.e. the mapping and storage of the LOD in Nodegoat. Below are some examples of how to configure interfaces to Linked Data Resources.

Linked Data Resources

Name: Search the GND API via lobid.org
Protocol: API
URL: https://lobid.org/gnd/search?q=
URL Options: &filter=type:Person&format=json
Query: [query=name][variable]zwingli[/variable][/query]&from=[[offset]]&size=[[limit]]

 

Name: Search the VIAF API
Protocol: API
URL: http://www.viaf.org/viaf/AutoSuggest?query=
URL Options:
Query: [query=name][variable]zwingli[/variable][/query]

 

Name: Query Wikidata for VIAF ID based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?viaf

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P214 ?viaf .}


 

Name: Query Wikidata for Religion based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?religion ?religion_label

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P140 ?religion .

?religion rdfs:label ?religion_label .

FILTER(LANG(?religion_label) = "en")

}


 

Name: Query Wikidata for Date of Birth based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?date_of_birth

WHERE {

[query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]

?person wdt:P569 ?date_of_birth .

}

Conversion INPUT

1484-01-10T00:00:00Z

Script:

// Parse the ISO timestamp delivered by Wikidata.
const date = new Date(INPUT);

const day = date.getDate();

const month = date.getMonth() + 1;   // getMonth() is zero-based

const year = date.getFullYear();

// Return the date as day-month-year for the ingestion mapping.
var OUTPUT = {'date': day + '-' + month + '-' + year};
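
With the sample INPUT shown above, this conversion would typically return OUTPUT = {'date': '10-1-1484'}; note that getDate() and getMonth() use the runtime’s local time zone, so the day can shift by one depending on where the script runs.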


 

To start the ingestion process, we activate it for our project in ‘Management’. Then we define the process, i.e. the mapping and storage of the LOD data, in the ‘Data’ section, as shown in the picture. We can add new objects (or values of objects, called ‘object descriptions’ in Nodegoat), add them only if they do not already exist, or update existing objects, for example the object ‘Person’.

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 2

05 May 2021

“Introduction by Kaspar Gubler

We are very pleased to have so many interesting projects and engaged participants in the workshop, and more participants have joined for session 2. For example, someone from Hamburg who used to do network analysis with the software Gephi and now wants to try out Nodegoat. A new participant is an archaeologist from the University of Bern who has documented sites in Excel and wants to import and visualise them in Nodegoat. Such a data import is not difficult, especially if the data has been entered consistently in Excel. Another new participant plans to visualise cultural heritage with Nodegoat. A good example of how Nodegoat can be used for the presentation of digital cultural heritage (and thus also for art history) is the encyclopaedia on Romantic Nationalism: https://ernie.uva.nl/viewer.p/21/52/types/all/grid

Terminology

Before we start, I would like to remind you of the terminology of Nodegoat, in which we speak of Objects and Sub-Objects as well as Categories. We describe these Objects (like rows in Excel) in Nodegoat with Object descriptions (like columns in Excel). Object descriptions can be a text, a link, a picture or a link to another Object or a Category (= reference = relation). In our data model we can define the type of each Object description. This gives us the possibility to describe an Object very precisely:

Find a common language

Important: if you want to communicate with another Nodegoat project, it is very helpful to use the terminology mentioned above. So the first questions to another project would be: what Objects do you have? How do you describe your Objects, with what kind of Object descriptions? In which Sub-Object do you store your geo-references? If you want to get in touch with other projects, you can organise your own Zoom meetings on specific questions about Nodegoat. I see many projects that have a lot in common and could certainly benefit from an exchange. I would also like to draw your attention to the Nodegoat Day on 4 June, where you can present your project or your project idea.”

14:00 Welcome and recap of last week’s session

14:15 Object Type ‘Place’ Data Model + Data Entry

14:30 Object Type ‘Place’ Data Import

14:50 Break

15:00 Object Type ‘Person’ Data Import

15:30 Filter + Visualisation

15:50 Break

16:00 Scope & Visual Settings

16:15 Conditions & Export

16:35 Looking forward to next session

16:45 Q&A

  • Difference between Gephi and Nodegoat? → Nodegoat starts from data management + visualization functionality
  • Will there be a possibility to store Nodegoat Data in a data repository like Zenodo? → There are rumors about a Zenodo-Module in Nodegoat coming, currently it’s technically no problem to do it manually
  • How to download a “dump”?  → Via API, export dump of the data + of the model in JSON
  • Is it possible to export a complete project (instead of individual csv sheets)?  → Yes, via API you can export all of the data and the data model in JSON
  • How can I “undo” an import from CSV when I notice that some things did not work as intended? Can I mass delete objects? → Yes, you can mass delete objects via the graphical interface by selecting all objects and deleting them with the grey multi button, empty a whole Object Type in ‘Model’ by clicking on ‘empty’, or mass delete objects via the API
  • Can you import by just giving the URL of the Google Doc? → Yes, via API of Nodegoat, check what Google allows you to do via API
  • Can visualisations be downloaded in any way to the desktop? → Yes, Screenshot, or for high resolution use the ‘Capture’ functionality in the visualisation settings
  • Follow-up question to session 1: can you create an itinerary of a person (object) if you only know the sequence of the locations but not the dates? → Yes, by storing vague dates in Nodegoat: you make a statement in vague dates (‘Chronology’) such as ‘Studies came after Birth’. Or use 1, 2, 3 etc. as the date, or use the sequence identifier in a nodegoat date; if you know the year, use ‘1818 1’, ‘1818 2’, ‘1818 3’
  • We can include both a geometry (polygon) AND a precise coordinate in a sub-object? Or as separate sub-objects of the same object? → Yes, both options are possible! One geometry can be a polygon + point(s) + line(s), or each can go in a separate sub-object so that attributes can be added.
  • Are there any example projects that depicts more complex routes? → http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo
  • Can you add your own icons to be displayed on the map? → Yes, in SVG format.
  • Nodegoat as Tool to visualise routes or itineraries? →  Yes
  • Is there also a method to show place-specific meta-information on the map instead of the person’s? →  Yes
  • In case of data model refactoring, how should we deal with the already inserted data? For example, if one wants to normalise repetitive data by creating a new object type, how can the existing data be migrated to the new data model? Is Export + Transform + Import the only way? → Yes, but because you now have nodegoat IDs, it is a matter of a straightforward mapping. Or use an Ingestion process (session 4).
  • is it possible to mark a node with multiple conditions (e.g. one condition for people born in the low countries (orange) + people died in Italy (blue), so objects that fall in both categories marked in two colours)? →  Yes

Links

http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo

Slides:

Download Google Sheets as CSV files:

RAG Places small selection: https://docs.google.com/spreadsheets/d/1zvcVj66nr1tm7PAmNJSSf2BI_o5e2rrPCSE2l4PAsHQ/
RAG People small selection: https://docs.google.com/spreadsheets/d/1K2SGF0TkQTVnZ5WQqgMc0MbJdGps1kA_oWVL3Qir6rs/

Guides: https://nodegoat.net/guides/csvfile and https://nodegoat.net/guides/gazetteer

Another sample data Import:
https://histdata.hypotheses.org/nodegoat-tutorials
Tutorial No 10, to create this map (positions of ships):

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 1

28 April 2021

Introduction by Kaspar Gubler

“Welcome to the Nodegoat SPARK workshop ‘Dynamic Data Ingestion’. We are very happy about the 140 participants from all over the world. On the map, which was of course created with Nodegoat, we can see the places of origin of the participants. They come from very different disciplines: history, literary history, German studies, English studies, legal history, historical geography, art history, musicology, theatre studies, film studies, African studies, Islamic studies, sociology, digital humanities and also from archives and libraries. This impressively shows how Nodegoat has established itself in the last ten years as an international research infrastructure for the humanities, an interdisciplinary research infrastructure that helps digital research to gain new insights and more visibility, and that facilitates the collaboration of projects, especially beyond one’s own subject boundaries. Pim van Bree and Geert Kessels began developing Nodegoat as part of a project at the University of Amsterdam in 2011. Pim van Bree has a Master’s degree in Media Studies, Geert Kessels a Master’s degree in History. Both are also accomplished software developers. Their particular strength is that they know both worlds, the world of the humanities and the world of programming. They combine these two worlds in their workshops, which they conduct at educational institutions worldwide. With their deep knowledge of methods, sources and questions in the humanities, they can create fitting and working data models for the different disciplines in Nodegoat to extract new scientific information and knowledge from the data.”

Programme

14:00 Welcome by Kaspar Gubler

14:15 General introduction to nodegoat

14:40 Login and set up your nodegoat project

14:50 Break

15:00 Object Type ‘Person’ Data Model

15:15 Object Type ‘Person’ Data Entry

15:35 Classification ‘Capacity’ Data Model + Data Entry

15:50 Break

16:00 Object Type ‘Book’ Data Model

16:20 Object Type ‘Book’ Data Entry

16:35 Looking forward to next session

16:45 Q&A

  • I don’t have access to the ‘Model’ section in nodegoat → Check the Page Clearance. Each nodegoat project starts with one administrator, who can set up additional users: Management > Users > add User > add the user and activate ‘Model’ in the Page Clearance settings (tab)
  • How does the Scope work? For visualisations? → In the Scope you define which of your database fields you want to use for the visualisation, so you activate the field that contains the georeference with the coordinates
  • Are there facilities (planned?) helping to prepare an RDF rendition of the database? → Yes: on one’s own nodegoat installation, you can configure a translation module to translate the data model into an RDF vocabulary
  • Is it possible to export, as static or dynamic representation, a computed spatio-temporal / network analysis ? →Yes
  • Would you say that Nodegoat is in principle also suitable as an image database (with additional descriptions and cross-references)? → Yes, absolutely, see the links below.
  • Regarding custom gazetteers and prosopographies: Are there size limitations? → Not in general, size limit of CSV import is set to 60’000 rows at a time
  • Is it better to store a region like Germany as GeoJSON or via Reference > City > autofill option Germany? → It depends on what you want to show on your map: if you are more interested in areas, use GeoJSON; if you are generally working with dots on your map, it may be better to store it as a point like your other data
  • Can we import polygon data from an existing map with territorial circumscription so that we don’t have to draw them by hand? → Yes
  • Can we specify a schema or other constraints to ensure consistency of the data (e.g. Birthdate < Deathdate, no overlapping residence periods etc.)? → You can use visualisation to do some error checking, but there is currently no hard enforcement of such constraints. Alternatively, you can filter specifically on Birthdate < Deathdate, which is more advanced (a minimal sketch of such a check on exported data follows after this list).
  • Does database harmonise the different sources of location, e.g. if I put “Roma (IT)” from “City” and also point from “Geometry” which is actually Rome, will database understand it is the same? On the map it looks the same, but how is it in database? Is it linked as the same entity? → More on this next session!
  • Is Arabic script supported or more generally, are scripts running from right to left supported? → Yes: Everything Unicode
  • Can we geo-visualize more than one object type, say author’s places and book’s publishing houses’ places? → Yes: using the Scope (https://nodegoat.net/guides/visualisationsettings)
  • Is there anything in the guides as regards Nodegoat and RDF? → Yes, it’s work in progress, see this sample nodegoat project working with a Subject-Predicate-Object (RDF Format) data model: https://www.manto-myth.org/blog/a-half-dozen-ways-to-die-mythically
  • If I uncheck ‘Fixed field’ in the Object ‘Book’ it throws: ‘The data Model does not have a configuration that can be used to generate Object names, please check your settings.’ Why? → Tick/check one or more of the Object Description ‘name’ (‘use object description for name’) checkboxes
  • Can nodegoat handle localised object descriptions? E.g. book reviews in different languages? → Yes
  • Can nodegoat handle uncertain dates / data? → Yes, see the following blogposts:

https://nodegoat.net/blog.s/45/how-to-store-uncertain-data-in-nodegoat-ambiguous-identities

https://nodegoat.net/blog.s/44/how-to-store-uncertain-data-in-nodegoat-conflicting-information

https://nodegoat.net/blog.s/43/how-to-store-uncertain-data-in-nodegoat-incomplete-source-material

https://nodegoat.net/blog.s/42/how-to-store-uncertain-data-in-nodegoat
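Returning to the consistency question above (Birthdate < Deathdate): since nodegoat cannot enforce such constraints, a pragmatic option is a simple external check on a CSV export. The following is a minimal sketch assuming an export with hypothetical columns nodegoat_id, birth_date and death_date in ISO format; the real column names depend on your data model.

```python
import csv
from datetime import date

def parse(value: str):
    """Parse an ISO date (YYYY-MM-DD); return None for empty or partial values."""
    try:
        return date.fromisoformat(value.strip())
    except ValueError:
        return None

# Report every object whose death date lies before its birth date.
with open("persons_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        birth = parse(row.get("birth_date", ""))
        death = parse(row.get("death_date", ""))
        if birth and death and death < birth:
            print(f"Inconsistent dates for object {row['nodegoat_id']}: {birth} > {death}")
```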

Slides

Data Ingestion Episode III – May the linked open data be with you

The linking of research data has been a dominant topic for years, especially in digital history. Linked Open Data (LOD) is the buzzword at conferences and in research projects. However, the greatest challenge is not collecting such data from the internet but harmonising it, because research databases are usually structured differently. It is therefore not surprising that, despite many initiatives, no research project in digital history has yet managed to harmonise data across several structural levels of the databases involved. This means, for example, not only linking persons across databases by their names, but going deeper into the data structure to harmonise, for example, the geographical origin or attributes of a person’s education. But that would be the aim: to answer scientific questions through structural data harmonisation. This is where our SPARK project comes in. The third and final phase of the project (Episode 3) was completed in January 2021. What are the core results of this project? In essence, they are a software module (the DDI module, for ‘dynamic data ingestion’) and a method: data (research data) is collected from different source databases and ingested on a central server using the module according to the spider principle, creating a new metadatabase. The harmonisation of the collected data in this newly built database is done, as far as possible, already during data ingestion by mapping the database fields of the source databases onto the corresponding database fields of the new metadatabase. If such a mapping is not possible, or only partially possible, because the database fields of the source database and the metadatabase are too dissimilar, an algorithm can be used in a second step, as soon as the data is stored on the central server, to bring uniformity to the data by data reconciliation. In addition, the data can also be automatically reclassified in order to standardise it. These measures prepare the data for analysis and ultimately for publication, both of which can be done in the virtual research environment Nodegoat. We will explain the procedure with the help of a case study. In this study, we collected data from related projects that research the history of universities and have joined together in a network, the Atelier Heloïse. The common interest of the projects is a prosopographically based history of universities, scholars and academic knowledge in pre-modern Europe. The four projects were chosen more or less at random; there are numerous other important database projects in the Atelier, but we had to limit ourselves to these four. Furthermore, it will be the task of the Atelier to bring all of the databases together in a joint, international project. The projects we have been working with cover the history of the universities of Bologna (http://asfe.unibo.it/it), Padova (https://www.ottocentenariouniversitadipadova.it), Paris (http://studium.univ-paris1.fr) and the universities of the Old Empire in the project Repertorium Academicum Germanicum (https://rag-online.org). The metadatabase we created from the four projects contains about 200,000 students and scholars from all over Europe in the period 1200-1800, with the projects covering different time spans.
The tools for collecting, ingesting (1), reconciling (2) and reclassifying (3) the data in Nodegoat also represent the methodological approach to data harmonisation as a prerequisite for data analysis (visualisations, network analyses) and finally for the publication of the results on the internet.

As a result of this approach, the places of origin of the students and scholars in the four projects were united on a map for the first time in research, impressively demonstrating the potential of international data networking. The map is of course only the starting point for deeper analyses. Only a joint analysis by the four research projects, which each describe the areas of origin of their universities’ students and their sources, and a synthesis worked out together can lead to new insights and research questions. But how can such a map be created? We will now take a look at this, starting with data ingestion. In order to be able to collect data, we must first make two settings in Nodegoat: the definition of the Linked Data Resource and the definition of the Ingestion process. To create the definition, we use a graphical interface, which not only simplifies our work, but also makes the data ingestion process transparent for all team members (project members, programmers). The graphical interface is thus a very important tool for visual communication, enabling a common understanding of the structures of the data sources. In the graphical interface, all database fields can be made visible to the project team and then jointly assigned to the new metadatabase. A clear mapping process in combination with very good knowledge of the database fields (and their meaning, especially in the humanities) are the success factors not only for the ingestion, but also for database migrations in general. In principle, the graphical interface helps to find a common language and understanding between historians and programmers. In the Linked Data Resource module, it must first be defined whether the resource is an API or a SPARQL endpoint. Then a test query is constructed, for example for a person. This requires the identifier of this person, which functions as a variable for all persons of the source database from which data is to be ingested. Then the mapping process follows and the database fields of the source database are assigned to the corresponding fields in the metadatabase. By mapping as closely as possible, harmonisation of the collected data can already be achieved. However, if the structures of the source database and the metadatabase are too different, the data will still be imported and subsequently harmonised with the data matching process in the reconciliation module. Things can get complicated if, in addition, the data formats of the source database are not compatible with the metadatabase. In such a case, the data can be converted before the data import, or only certain information and not the entire content can be extracted from the source database fields. With the reconciliation module, we can then search the imported, heterogeneous data for specific terms that we have previously defined in a vocabulary. The terms found are automatically saved in the metadatabase. At the Atelier Heloïse conference, we demonstrated such a procedure using the places of origin of Parisian students as an example. The places of origin are not georeferenced in the Paris database and only the names of the places are available in the application programming interface (API). With reconciliation we can assign geopoints to the places. To do this, we first import the places from the source database and then use the reconciliation module.
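To make the two settings just described more concrete, here is a minimal sketch, outside of nodegoat, of what a test query with the person identifier as variable and a field mapping amount to. The endpoint URL and all field names are hypothetical placeholders, not the actual API of any of the four projects.

```python
import json
import urllib.request

# (1) a query template with the person identifier as variable,
# (2) a mapping from source database fields to metadatabase fields.
QUERY_TEMPLATE = "https://example.org/api/person/{identifier}"  # placeholder endpoint

FIELD_MAPPING = {          # source field   -> metadatabase field (both invented)
    "nom": "name",
    "lieu_origine": "place_of_origin",
    "grade": "academic_degree",
}

def ingest_person(identifier: str) -> dict:
    """Run the test query for one person and map its fields onto the metadatabase."""
    with urllib.request.urlopen(QUERY_TEMPLATE.format(identifier=identifier)) as resp:
        source = json.loads(resp.read().decode("utf-8"))
    # Fields that cannot be mapped here are kept aside for later reconciliation.
    mapped = {target: source.get(src) for src, target in FIELD_MAPPING.items()}
    mapped["unmapped"] = {k: v for k, v in source.items() if k not in FIELD_MAPPING}
    return mapped
```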
We configure this module so that the names of the places in the Paris database are compared with a reference list of places with geo-coordinates that exists in our metadatabase. The places in this reference list also carry identifiers from GeoNames, an internationally used geographic reference system. In the reconciliation module, we now set the algorithm not only to search for the Paris names in the reference list, but also to store the georeferenced location (which contains the coordinates) when a hit is made. In this way we can easily visualise the places of origin of the Paris database on a map and thus also check the data qualitatively in a simple way, as it is an interactive map: clicking on a point takes us directly to the locations and georeferences. Of course, the reconciliation module works for any data type, not only for geodata. For example, texts can also be searched for specific terms with the algorithm, and the hits are automatically saved. If at this point of the ingestion process the data is still not uniform enough for an evaluation, it can additionally be classified with the reclassification module. The principle of reclassification is that we query the data according to certain criteria and use this query to automatically classify the results. Such reclassifications are useful when a project wants to organise a lot of complex data and prepare it for data analysis. However, automatic reclassification is also very important for maintaining data consistency, as inconsistent or incorrect entries can be automatically filtered out. Let us take an example. In our case study, the four projects use different categories and names for academic degrees, even though degrees were already classified in a remarkably uniform way in pre-modern Europe. Of course, one must take into account that the same degree could have a different quality depending on the university, but we can also overcome this challenge with reclassification by going from the general to the specific. In the following, we look at the academic degrees of jurists. For the reclassification of jurists, we first create a query in Nodegoat and check whether the expected results appear. Then we use this query in the reclassification module and assign it a term. This term is then used to classify the data found. The reclassification is not bound to certain types of data. We can classify people, places, observations, texts, time periods or anything else. In our case, we can classify all jurists of the four projects accordingly and thus quickly obtain a general overview of the areas of origin and study of these persons. Of course, we have to take into account that the projects cover different periods and look at their definitions of jurists in detail. This is done after the overview and is part of the qualitative data evaluation, where we can further differentiate the data and, for example, reclassify scholars who had studied Roman law. For this group of people, we can then highlight the places of origin in colour and thus see the spaces of origin and communication of these jurists. If we have further data available, for example information on the activities of the jurists as in the Repertorium Academicum Germanicum (RAG), we can also see where the legal knowledge was transferred with the persons – whom we regard as ‘knowledge carriers’. In this way, we could show the spread of law, and Roman law in particular, in pre-modern Europe.
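The place reconciliation just described can be illustrated, in very reduced form, outside of nodegoat: place names from a source are matched against a reference list that already carries GeoNames identifiers and coordinates. The reference list below is a toy sample whose identifiers and coordinates are merely illustrative, and the fuzzy lookup is a crude stand-in for nodegoat’s own matching.

```python
import difflib

# Toy reference list; identifiers and coordinates are illustrative only.
reference_places = {
    "Rouen":   {"geonames_id": 1111111, "lat": 49.44, "lon": 1.10},
    "Orléans": {"geonames_id": 2222222, "lat": 47.90, "lon": 1.90},
    "Reims":   {"geonames_id": 3333333, "lat": 49.26, "lon": 4.03},
}

def reconcile(place_name: str, cutoff: float = 0.85):
    """Return the matched reference entry (with coordinates) or None."""
    candidates = difflib.get_close_matches(place_name.strip(), reference_places, n=1, cutoff=cutoff)
    return reference_places[candidates[0]] if candidates else None

print(reconcile("Orleans"))       # close spelling still resolves to the reference entry
print(reconcile("Unknownville"))  # no hit -> None, left for manual checking
```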
Of course, this also works for the other disciplines such as medicine or theology, as well as for the large number of scholars holding a degree as ‘Magister Artium’. It is then up to the researcher to classify, interpret, describe and, if necessary, correct and refine the results of the reclassification. With the procedure described, however, we are able to combine quantitative and qualitative research and thus reconstruct European knowledge spaces. With data ingestion, we can not only import and evaluate data from research projects, but of course also, in parallel, data from other Linked Open Data sources to supplement or enrich the data set of our metadatabase, for example by querying the data on Wikidata that is linked to a person in our metadatabase. It would go beyond the scope of this article to look at all the features of data analysis in Nodegoat, for example the modules for network analysis, which can of course also be applied to our metadatabase. In any case, the data can easily be searched in full text and/or filtered specifically with complex, combined filters. It is further possible to query the data spatially by drawing a polygon in GeoJSON and simply copying its code into the database field that contains the geoinformation. This feature enables us to reconstruct, search and analyse specific knowledge spaces. Such spaces, like other results (data sets), can be published in so-called data scenarios using an internet module that is configured in the backend of Nodegoat. A scenario is understood to be a data set with the corresponding visualisation settings.
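The spatial filter idea can be sketched as follows: a GeoJSON polygon describes the search area and each georeferenced object is tested against it. The polygon and points below are invented for illustration; inside nodegoat the GeoJSON is simply pasted into the filter and the test happens internally.

```python
# GeoJSON polygon as the search area, coordinates as [longitude, latitude].
search_area = {
    "type": "Polygon",
    "coordinates": [[[7.0, 46.5], [8.5, 46.5], [8.5, 47.5], [7.0, 47.5], [7.0, 46.5]]],
}

def point_in_polygon(lon: float, lat: float, polygon: dict) -> bool:
    """Ray-casting test against the outer ring of a GeoJSON polygon."""
    ring = polygon["coordinates"][0]
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        if (y1 > lat) != (y2 > lat):            # edge crosses the horizontal line at 'lat'
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

print(point_in_polygon(7.45, 46.95, search_area))  # point near Bern -> True
print(point_in_polygon(2.35, 48.85, search_area))  # point near Paris -> False
```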

Conclusion: The SPARK project makes it possible to link databases (research data) in a simple and transparent way and to harmonise and analyse the linked data with sophisticated tools – may the data be with you!

Cite this article as: Kaspar Gubler: Data Ingestion Episode III – May the linked open data be with you, in HistData, 30/03/2021, https://histdata.hypotheses.org/2130.

Die Eidgenossenschaft als Wissensraum im vormodernen Europa, Vortrag am 25.03.2021

Kaspar Gubler / Christian Hesse (Bern): Die Eidgenossenschaft als Wissensraum im vormodernen Europa: Neue Erkenntnismöglichkeiten durch Datenvisualisierungen

Lecture as part of the lecture series of the Berner Mittelalter Zentrum (Bern Centre for Medieval Studies).

Thursday, 25.03.2021, 17:15-18:45.

Please register with Laura Hutter (laura.hutter@ikg.unibe.ch) if you would like to attend the lecture. The lecture will take place virtually via GoToMeeting; you will receive the GoToMeeting link after registration.

https://www.bmz.unibe.ch/unibe/portal/microsites/micro_bmz/content/e760315/e760316/e761493/e761495/FS21_BMZFlyer_ger.pdf

Looking back to nodegoat Day 2020

Climbing on the shoulders of digital giants: from data to knowledge

On November 27, 2020, the first nodegoat Day took place at the Historical Institute of the University of Bern via Zoom. The University’s nodegoat projects provided insights into the implementation of their data models and their methods of data analysis, with a focus on data visualization (maps, networks, time series). Originally, ‘nodegoat Day 2020’ was planned as a local conference of the University of Bern, but an international audience increasingly showed interest: via Zoom and live stream on YouTube, people from Switzerland, Italy, France, Germany, the Netherlands, Belgium and Luxembourg participated. The introduction to the conference by the organizer, Kaspar Gubler, University of Bern:

“In April this year (2020), the virtual research environment nodegoat was put into operation as a pilot project of the Historical Institute at the University of Bern within the framework of the university’s digitisation strategy. This was preceded by two workshops on nodegoat at the Walter Benjamin Kolleg here at the university. The great interest in these workshops made the need for digital tools clear. Supported by the Historical Institute, the pilot project nodegoat GO, a nodegoat installation for the entire Faculty of Humanities, was launched. This means that all members of the faculty can now apply for a personal nodegoat research environment. Details can be found on the website of the Digital Humanities department here in Bern. Since this month, nodegoat GO is officially supported by the faculty and the Digital Humanities department within the framework of the university’s digitisation strategy, including a nodegoat support office starting next year. I would like to take this opportunity to thank all those who have supported the nodegoat GO project. Some short remarks on the virtual research environment: what does ‘environment’ actually mean here? Environment means, in principle, one software for many things. What once had to be programmed laboriously with individual digital tools is now available to us in a virtual research environment, ready to use: databases, front-ends for data input, analysis and visualisation functions, interfaces for data exchange and a website to present your results to the world. With this, and with its sophisticated visualisation possibilities, nodegoat is also an important tool for visual communication.

The origins of nodegoat are in the Netherlands. Nodegoat was developed about 10 years ago at the University of Amsterdam by Pim van Bree (Master in Media Studies) and Geert Kessels (Master in Modern History) for specific research requirements and was transferred to a university spin-off called LAB1100. This spin-off now leads the development of the software, which is available as open source. Over time, different functional modules have been added to nodegoat, so that it today represents a sophisticated system for data analysis. These modules are, depending on the research needs, financed by different institutes worldwide and integrated into the open source version of the software, making them available to all users. So you finance development, but you also profit when others do so.

The methodology

In simple terms, nodegoat works similarly to an Excel table. In contrast to Excel, however, nodegoat has extensive analysis functions with which the entered data can be immediately analysed, visualised and contextualised spatially and chronologically – all without any programming knowledge. Data analysis and visualisation therefore take place within nodegoat; the data does not have to be exported to other visualisation software first. Nor is nodegoat a data prison: all data can be exported from nodegoat into other software at any time, either as a CSV file or via the JSON interface.

Data modelling

In Nodegoat, users define their own data models without restrictions in terms of structure or depth. Each object can be classified with geographical and temporal attributes and evaluated accordingly. Users are therefore free to implement a completely individual data model or to create a data model that is adapted to existing vocabularies (e.g. Dublin Core or the reference model CIDOC). Nodegoat can therefore also generate and provide standard data and is at the same time a digital tool for networking data sets, thus improving the interoperability of research data in the field of humanities. From the point of view of data modelling, nodegoat follows an object-oriented approach. Following the actor-network theory, this means that persons, events, artefacts and sources are regarded as equivalent objects. Only the linking of objects through relationships forms and hierarchises a network.

But why should one work with data at all in the humanities? Why with a database?

The answer is simple: data can make visible factors, patterns and developments that would otherwise remain hidden in the sources. By changing the aggregate state of the data collected from the sources, we can, thanks to visualisations, for example, identify patterns that can lead to new insights. At the same time, the data visualisations help research to become more visible. The data show us the path that can lead us onto the shoulders of the digital giants. Once we reach the top, new horizons open up for us when data becomes information and knowledge.

The nodegoat projects that give us insights today come from very different fields. Nodegoat is an interdisciplinary tool which, as my personal experience shows, promotes the exchange of information across disciplinary boundaries. The projects that we will see today are at different stages of development. It is not about delivering a glossy brochure, but about giving as concrete an insight as possible into the project work with nodegoat as well as getting to know the possibilities and the basic functions of nodegoat, including data management, visualisations, networks and time series.

Temporally and thematically we will go on a great journey today. It begins in the 20th century, opens up national and transnational perspectives with the academic forced migration to Switzerland and war-torn societies in Southeastern Europe, leads us to festivals in contemporary theatre, then back to melodies and songs of the early modern period, into the European Middle Ages to church account books and academic knowledge spaces, and finally ends up at the cradle of mankind in Mesopotamia. Towards the end, more technical aspects will be presented, such as the data harmonisation of Linked Open Data and the developers of LAB1100 will conclude with an overview of nodegoat projects in other countries and insights into software development.”

Kaspar Gubler (Universität Bern, Historisches Institut): Kaspar Gubler used the REPAC project as an example to show how nodegoat works as a collaborative research platform for international projects that enter and analyze data web-based (and thus independent of location) in nodegoat and publish it on the net in a live environment. REPAC operates a pool of prosopographic data, which contains about 70’000 persons with about 400’000 records on biographical stations and networks. From this data pool, the persons and biographical information are automatically assigned to the different projects in nodegoat based on certain criteria (Germanicum / Helveticum / Bernense).

Fig. Areas of origin of students at European universities 1250-1550.

 

Stefanie Mahrer (Universität Bern, Historisches Institut): Forced Academic Migration (FAM) is a research project (funded by SNSF PRIMA) at the Department of History of the University of Bern on the history of forced academic migration to Switzerland during the Nazi regime and the post-war period. FAM-online provides insight into research results and will, in the near future, enable visitors to access, filter and graphically display the research data.
The aim is to collect, as completely as possible, biographical data of the academics who fled to Switzerland, data of the academic refugee assistance organizations and their helpers, data of the universities concerning forced migrants, as well as relevant decrees and laws. The data is published continuously, taking legal regulations into account.
FAM-online links projects and refers to publications with similar topics and thus also sees itself as a platform for scientific research into the history of academic forced migration in the context of National Socialism. The project uses nodegoat to visualize escape routes on maps and to analyze networks of academics, escape helpers and involved organizations.

Fig. Example visualization of escape routes of academics

 

Franziska Zaugg / Mevlane Sejdiji (Universität Bern, Historisches Institut): “A longue durée of violence? War-disabled societies in Southeastern Europe” is a postdoctoral project (SNSF Ambizione) based on the concept of the “long duration” developed by Fernand Braudel, which leads the historian’s focus away from the history of events towards more long-term social, cultural and economic structures. The project examines war-disabled societies in Southeastern Europe from the Balkan wars of 1912/1913 to the Balkan conflicts of the late 20th century. The project asks about possible connections between the violence experienced, the nature of memory and its relevance for future conflicts. The project uses nodegoat to identify and visualize clusters of violence on maps and within actor networks.

Fig. Example visualization of clusters of violence

 

Alexandra Portmann, Anna Barmettler, Dominik Kilchmann (Universität Bern, Institut für Theaterwissenschaften): International theater festivals shape the contemporary theater landscape, although the variety of festival formats is difficult to categorize. The spectrum ranges from festivals that focus on a specific theme or author (e.g. Shakespeare), to festivals of the independent scene (e.g. Impulse Festival) and festivals such as the Manchester International Festival, which explicitly only shows premieres of international co-productions. These transnational co-productions of festivals with globally operating artists and independent production houses seem to increasingly shape the festival repertoire. This research project asks the question of how transnational working methods from the festival sector have a lasting effect on local theater systems. This SNSF Ambizione project uses nodegoat for visualizing the processes of festival productions on maps and within networks.

Fig. Example visualization of a network analysis on festival productions

 

Elie Jolliet (Universität Bern, Institut für Musikwissenschaft): Studied music (organ, historical keyboard instruments, choral conducting and church music) in Bern (B.A.) and Lausanne (M.A.). Church musician in Köniz with concert activity as soloist, ensemble musician and choir director. Winner of the Migros Culture Percentage Instrumental Competition 2016. Member of the board of the International Association for Hymnology. Dissertation project: The Bernese Songbooks 1606 to 1853. Corpus analysis of the songs outside the Geneva Psalter. Elie Jolliet uses nodegoat for the demanding analysis of the songs, which he examines and visualizes separately for melodies and texts. More about Elie Jolliet as a professional musician on his website: https://www.eliejolliet.ch/

Fig. Collection of church songs in the backend of nodegoat

 

Corina Liebi (Universität Bern, Historisches Institut): Corina Liebi studies history with a focus on the Middle Ages and is an assistant at the Historical Institute in Bern. In her master’s thesis she deals with the finances of the Hochstift Bamberg and evaluates a chamber office account from 1478. She visualizes the entries of these books on maps, which gives her insights into the quantitative and spatial distribution of financial transactions. With a network analysis she also investigates connections between officials.

Fig. Spaces of the diocese and the Hochstift Bamberg, reconstructed within nodegoat

 

Sebastian Borkowski (Universität Bern, Institut für Archäologische Wissenschaften): Sebastian Borkowski, Master in Near Eastern Archaeology at the University of Bern, currently a PhD student at the Unité d’Études Mésopotamiennes of the University of Geneva and assistant in the Department of Ancient Oriental Philology in the RIMES project (The Rivers of Mesopotamia), presented the project led by Dr. Susanne Ruthishauser at the Department of Near Eastern Archaeology at the University of Bern. For the area in the south of present-day Iraq, the project evaluates satellite image data combined with archaeological, written and geomorphological sources in order to reconstruct the position of rivers and channels of the Mesopotamian alluvial plain during different epochs. This project uses a great many functions of nodegoat; among other things, Sebastian Borkowski evaluates about 10’000 written sources in nodegoat.

Fig. Network analysis for reconstructing the rivers in Mesopotamia

 

Kaspar Gubler (Universität Bern, Historisches Institut): SNSF SPARK project ‘Dynamic Data Ingestion’ for server-side data harmonisation. The principle of data ingestion in the so-called DDI module of nodegoat is that nodegoat pulls together data centrally on the server from any data sources available via an interface. The DDI module has two important strengths. Firstly, this software module is integrated into a fixed structure; it is therefore not a script that is stored and executed somewhere on a server and, as so often happens, at some point is no longer updated. Secondly, the DDI module has a graphical interface (Linked Data module) in which the database fields of the data source can be assigned to the database fields of the nodegoat database – the mapping of the data. A great benefit of the DDI module is therefore the linking of data sets, for example Linked Open Data.

Fig. Test query and response in the DDI module. The returned data is used for the mapping of the database fields (from data source to nodegoat)

 

Pim van Bree / Geert Kessels (The Hague, LAB1100): nodegoat on the globe. Overview of nodegoat projects running at other institutes and insights into new and planned features of nodegoat. Pim van Bree received his Master in New Media Studies at the University of Amsterdam, Geert Kessels his research Master in History at the same university. Pim van Bree and Geert Kessels bring together skills in new media, history, the humanities and software development. They work with universities, research institutes and museums to conceptualise and develop dynamic applications; their most important application is certainly nodegoat. Pim van Bree and Geert Kessels have extensive project experience in the field of Digital Humanities and are engaged worldwide as consultants for digital projects and workshops. On nodegoat Day, they presented an overview of nodegoat projects in other countries, gave insights into the principles of nodegoat as well as into the latest software developments, and answered users’ questions.

Fig. Overview of running nodegoat projects and a sample visualisation from the project ‘Encyclopedia of Romantic Nationalism in Europe’ (https://ernie.uva.nl/)

Cite this article as: Kaspar Gubler: Looking back to nodegoat Day 2020, in HistData, 28/11/2020, https://histdata.hypotheses.org/1937.

SNSF SPARK workshop: data ingestion and harmonization

Workshop on the results of the SPARK project of the Swiss National Science Foundation (SNSF) “Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions” (http://p3.snf.ch/project-190161). The workshop will take place in four sessions via Zoom. In sessions 1 and 2, participants will be introduced to the functions of the virtual research environment Nodegoat (VRE) and create a data model and import a data sample, which they will use in sessions 3 and 4 for the exercises on data ingestion. At the end, each participant will have a working VRE that can be used for further research or also used in teaching. It is highly recommended to attend all 4 sessions. The workshop is primarily aimed at members of the Phil.-Hist. faculty of the University of Bern, but is generally open to other interested parties on planet earth. The zoom link to the workshop will be sent to participants after registration. The workshop will be led by Nodegoat developers Pim van Bree and Geert Kessels (LAB1100), together with Kaspar Gubler, Institute of History, University of Bern.

Members of the Phil.-Hist. faculty of the University of Bern can apply for a VRE free of charge at the following link: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html. Other participants can obtain a VRE at nodegoat.net, or get the Nodegoat open source version on GitHub: https://github.com/nodegoat/nodegoat

The workshops always take place on Wednesdays from 2 – 5 pm. The workshops are recorded and can therefore be re-watched if a session cannot be attended.

Dates: 28.04.2021 / 05.05.2021 / 12.5.2021 / 26.5.2021

Registration for the workshop until 25.04.2021 to: kaspar.gubler@hist.unibe.ch

Session 1: Data Modelling (people and books)

In session 1 we get to know the central functions of Nodegoat (NG). Since NG is managed via the web browser, no additional software needs to be installed on the computer, and working independently of location is no problem. With NG, research projects can be created and research data managed, analyzed, visualized, published on the Internet and shared with other researchers without any special programming skills. We will create our first data model, which we will fill with data in the next sessions. As we will see, NG is not a rigid ‘boutique solution’ that only fits a specific question or data model. Students and researchers can use NG to create custom data models based on their specific questions.

Session 2: Importing Data (including a VIAF id for each person)

In Session 2, we will import our first data sample and an identifier (VIAF) for each person. So we will work with a prosopographically oriented data model, but we can easily extend it for other research questions.

Session 3: Ingesting Biographical Data (like other IDs, or birth/death)

In session 3, we will learn about the principles of ‘Dynamic Data Ingestion’. There are numerous data sources on the Internet, whether for research or for the interested public. What types of data sources are there? And what about data quality? We will first explore these questions before connecting our Nodegoat environment to a typical data source via an interface (API) and importing the first test data. This data can consist of further identifiers for the persons or of information about their life dates.
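As a stand-alone illustration of the kind of lookup this session covers, the following sketch fetches birth and death dates for a person from Wikidata’s public entity-data endpoint; inside Nodegoat the same source would instead be configured as a Linked Data Resource. The example identifier is Wikidata item Q937 (Albert Einstein).

```python
import json
import urllib.request

def life_dates(qid: str) -> dict:
    """Fetch birth (P569) and death (P570) dates for a Wikidata item."""
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    with urllib.request.urlopen(url) as resp:
        entity = json.loads(resp.read().decode("utf-8"))["entities"][qid]
    claims = entity.get("claims", {})

    def first_time(prop: str):
        # Return the first recorded time value of a property, if any.
        try:
            return claims[prop][0]["mainsnak"]["datavalue"]["value"]["time"]
        except (KeyError, IndexError):
            return None

    return {"birth": first_time("P569"), "death": first_time("P570")}

print(life_dates("Q937"))  # Albert Einstein, as an example; replace with your own identifiers
```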

Session 4: Ingesting Related Data (like published books of people)

In Session 4, we will enrich the data on the persons and check what other data is available on the Internet. For example, publications that we can add and, if we have full texts, additionally analyze in nodegoat. Finally, we will also look at the data harmonization capabilities in nodegoat.

Data Ingestion Episode II – The Empire strikes back, but not for long

The second test phase of the SNSF SPARK project (Episode 2) on Dynamic Data Ingestion (DDI) and server-side data harmonisation has been completed. Data from as many different data sources as possible were collected and stored centrally on the DDI server according to the spider principle, resulting in a new meta-database.

Fig. 1: An example of data collection via the DDI module, test phase 2: standardised biographical data and related publications.

However, the tests again drew our attention to two core problems of Linked Open Data (LOD) in research: the ‘Empire strikes back’ against LOD on a technical and on a content level, and the two are interdependent. On the technical level, the most important prerequisite for data exchange is missing, namely a kind of ‘industry standard’ for a uniform query language, a standardised output format and a standardised structure of the data published via an Application Programming Interface (API). Many data projects have individually designed APIs and project-specific data structures that do not comply with international standards, and/or there is a lack of adequate documentation of the data output. At the content level of LOD, we are faced with challenges due to the heterogeneity of research data, especially in the humanities, which inevitably leads to inconsistent database structures and makes data exchange more difficult. Despite numerous international initiatives in recent years, no research project has yet been able to gain significant or groundbreaking scientific knowledge through Linked Open Data, especially not in the Digital Humanities. In addition to the technical and content-related obstacles, the ‘Empire’ has been quite successful in blocking communication between humanities scholars and software developers. Any researcher who has participated in relevant humanities conferences knows the long discussions on the possibilities and potential of linking databases in the field of Digital Humanities. In the end, it all comes down to good ideas and declarations of intent – without a single data set having been linked. At the other extreme are initiatives that store large amounts of LOD in a meta-database and reflect on whether this data represents information or already knowledge. This is an important question, as LOD has been praised in the scientific community of the so-called ‘Semantic Web’ as pointing the way to the future of an ‘Internet of Knowledge’, an Internet from which users can retrieve data in a structured and standardised way and transform it into information and knowledge. But we have not yet reached that point. One important project that pursues these goals is Wikidata, a sister project of Wikipedia. Wikidata offers open, international standards for the storage, sharing and exchange of data. In order to exchange and harmonise data, projects would therefore have to store and document their data on Wikidata. This means an effort that not every project can or wants to make. Thus the situation in the Digital Humanities is still more or less the same when it comes to exchanging LOD: on the one side is the humanities spirit that floats on clouds with brilliant ideas for linking data, and on the other side is the analytical developer who collects highly complex data with a down-to-earth approach. The experience of many conferences shows that communication between cloud and earth has hardly been possible so far. Both sides send out signals that are usually misunderstood by the other side – people speak different languages and do not understand each other, even if they mean the same thing. But how can we bring both worlds together and install a translation board (a Rosetta Stone) between cloud and earth? One such board is the DDI module developed as part of the SNSF SPARK project. In this module, a graphical interface facilitates communication between humanities scholars and developers.

Fig. 2 Definition of the Linked Data Resource, in this example the API of swissbib (catalog and data hub of Swiss libraries, which will soon be replaced by a new version)

In the interface of the DDI module, the Linked Data Resource can be defined and queried for a sample data output, which can then be used to assign the data fields of the Linked Data Resource (e.g. data from another research project, a library or an archive) to one’s own research database, which thus becomes a meta-database consisting of data from various data sources.

Fig. 3 Running a test query on swissbib API resources

The advantage of a graphical interface for data ingestion lies in the visual communication: researchers and developers (or researchers experienced in IT) jointly define the interface (API) and immediately see the result, the data output, in the test query. With the test query, it is visible to all participants which database fields and contents are actually present in the data source. Researchers and developers can then use the data of the test query as a template for mapping the database fields of the data source to the new meta-database. This type of visual communication leads to fewer misunderstandings, as we have already seen in test series in the context of the SNSF SPARK project.

Fig. 4 Mapping of source (right) and target (left) database fields

The test query also shows whether the data is compatible with your own question or whether it is meaningful – and to what extent these data structures must be compared with those of other data sources in order to obtain significant, scientific results. This brings us to the crucial point: collecting data is one thing, but harmonising data (structured and unstructured) from various data sources in order to be able to evaluate them is the last, but often too big hurdle, especially in the humanities. Therefore a translation tool for harmonising the data had to be integrated in the SPARK project: after the data has been collected by the DDI module, the reconciliation module for data harmonising is used. The module has an algorithmic pattern matching (named entity matching) function that identifies predefined terms or categories (in the sense of a controlled vocabulary) in the data, makes suggestions for assignments or can automatically store the matched terms in the database. This also means that one can see all relations of a term to the texts in which it was found. This matching of vocabularies (or: keyword spotting) has great potential not only for data harmonisation but also for the structuring and analysis of heterogeneous data in general, including for example texts available as OCR (Optical Character Recognition), generated by specialized OCR software like Transkribus. Not only researchers, but also libraries and archives will be able to use these functions to make their (handwritten) texts more accessible to the public, for example in the form of data visualisations such as the following example of publication locations.
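A much-reduced illustration of this keyword spotting, independent of nodegoat: a small controlled vocabulary is matched against a text string (for instance OCR output), and each hit is recorded so that it could later be stored as a relation. Vocabulary, normalised forms and the sample sentence are invented for illustration.

```python
import re

# Toy controlled vocabulary: term as it appears in the sources -> normalised form.
vocabulary = {
    "Basileae": "Basel",
    "Tiguri": "Zürich",
    "Genevae": "Genève",
}

text = "Impressum Basileae anno 1540; altera editio Tiguri eodem anno prodiit."

# Record every vocabulary hit with its position in the text.
hits = []
for term, normalised in vocabulary.items():
    for match in re.finditer(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
        hits.append({"term": term, "normalised": normalised, "position": match.start()})

print(hits)  # each hit links the text passage to a controlled vocabulary entry
```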

Fig. 6 Publication locations of books published in 1540, found with pattern matching (reconciliation) in text strings from the swissbib API resource

The results (locations) are automatically linked to geo-references, allowing a map of the publication locations to be created immediately after the reconciliation process, in this example with a historical background map (Mercator 1607). In addition to publication locations, authors or specific contents of publications can be found and linked by means of pattern matching; this is just one example, and of course the process of data ingestion and data reconciliation is not limited to certain data types. In the third and final phase of the SPARK project (Episode 3), different scenarios for data harmonisation will be run through to examine to what extent data harmonisation can lead to new research questions, especially in terms of a heuristic approach.

Cite this article as: Kaspar Gubler: Data Ingestion Episode II – The Empire strikes back, but not for long, in HistData, 09/09/2020, https://histdata.hypotheses.org/1635.