Source criticism as a core competence of historical scholarship: clouded by Vanilla.

My comment on the guest column in the Neue Zürcher Zeitung (Sunday edition) of 17 March 2024:

“Source criticism as a core competence of historical scholarship: clouded by Vanilla.

In the NZZ am Sonntag of 17 March 2024, a guest column by a history professor was published on p. 20, entitled „Es gibt eine neue Sprache, in der Reden ohne Denken geht: Vanilla.“ (“There is a new language in which speaking without thinking is possible: Vanilla.”)

The text indirectly sheds light on current challenges in university history teaching brought about by digitisation. Its central element is an episode (which the author calls a “bizarre scene”) from a course the professor taught together with a sociologist: the professor had “…explained a difficult subject and paused so that the students could ask questions of comprehension. Before they got around to it, a student raised his hand and, unprompted, summarised my remarks in a few words. He sounded clever, spoke fluently, with a strangely imprecise assurance and without any personal tone. Although it seemed to me as if he were reading something off, I was stunned and congratulated him on his quick grasp.”

During the break, his colleague, the sociologist, who possesses “technical-scientific acumen”, opened his eyes: the student, he said, had “smeared him with Vanilla”. Vanilla is the sociologist’s coinage for texts or content where, based on his experience, he suspects that artificial intelligence helped with the writing. From his colleague’s remark the professor concludes: “The student must have fed my presentation to the machine so that it promptly spat out a Vanilla summary for him. He probably understood as little of it as the machine itself.”

As a historian, I find several aspects of the scene described worth considering, especially as regards the origin and quality of the information (source criticism):

– Had the student actually used artificial intelligence?
– Or is this a person with an astonishingly quick grasp?
– Perhaps the student had already engaged with the professor’s writings and ideas before the course; he might even be one of his followers?
– Why does the student speak in a peculiar tone?
– Why does the guest column suggest that the student had understood little of the subject anyway?

It is troubling that apparently no attempt was made to talk to the student. As a result, the guest column rests solely on conjecture. The column thus speaks about students, but not with them, which suggests a certain distance from the students, especially when the younger generation is accused across the board of using the Vanilla technique unreflectively.
Yet the scene could have become an outstanding example of modern teaching, showing how students can work with new technologies such as artificial intelligence. By specifically asking the student about his knowledge and the techniques he may have used, an open dialogue could have emerged, without resorting to possibly false assumptions.
This dialogue could have led to reflections on the use of artificial intelligence, digital methods and tools in historical research. The sociologist present could have said something about the methods of empirical social research, such as data collection and network analysis, and how these have been successfully integrated into digital history.
In conclusion, based on my personal impressions, historical institutes generally have an open culture of discussion in which the concerns and ideas of students are heard and taken seriously. It is certainly appropriate to be modest and to expect human intelligence from students, intelligence that is at any time capable of reaching a very high academic level. In this respect I was particularly impressed by the history professor with whom I attended my first seminar: he let himself be surprised by the views and ideas of newcomers to the field, whatever their prior experience, including in technical matters.”

Kaspar Gubler, 17 March 2024.


Seminar of the research platform ‘Hector’ (Heritage. Culture. Norms) at the Jagiellonian University (Krakow) on 8 March 2024

Illustration: Places of origin (red) of the RAG-scholars who studied at the University of Krakow and the other places where they studied (blue) (1364-1550), source: Repertorium Academicum Germanicum (RAG), 01/2024.

On 8 March 2024, a seminar on digital (legal) history will take place at the invitation of the Faculty of Law of the University of Krakow. The methodological focus of the workshop will be on contextualised digital prosopography and its connection to legal history and legal texts.

Program:

Hector research platform_seminar_march_8_2024 (PDF)

I am very much looking forward to this workshop, which my colleague in Krakow, Dr hab. Maciej Zdanek, prof. UJ, has organised. It includes the following contributions:

Dr Kaspar Gubler (University of Bern): Education of Jurists and the Transfer of Legal Knowledge in the Context of Prosopographical and Textual Databases. The example of the Repertorium Academicum Germanicum (1250-1550)

Prof. Dr. Martin Holý (University of Prague): Law and Lawyers at the University of Prague in the Late Middle Ages and Early Modern Period (1457/1458-1622)


Interactive data scenarios for the University of Krakow, provided by the Repertorium Academicum Germanicum (RAG):

https://database.rag-online.org/viewer.p/42

Visualisation practices in the digital humanities (introduction)

As part of the continuing education programme in archival, library and information science of the Universities of Bern and Lausanne, I stood in at short notice for Prof. Dr. Gerhard Lauer in a course module last Friday (15.12.2023). Since I did not have much time to prepare, I was glad to be able to fall back on the database templates (a project of the Faculty of Humanities of the University of Bern) in nodegoat. This allowed me to quickly activate a database model that the students could use in the workshop on working with research data. The aim of the workshop was (besides an overview of visualisation practices in the digital humanities) to run through the practical recording and visualisation of research data. Thanks to the database templates I could concentrate more on the content during preparation and had little effort on the technical side. The course was fun; the students were already very skilled in the use of digital tools and familiar with the latest developments in digitisation.

Review of the X workshop of Atelier Heloise

The volume under review contains the contribution by van Bree / Gubler / Kessels on results of the SNSF SPARK project on ‘Dynamic Data Ingestion’ (data integration, harmonisation and reconciliation of Linked Open Data):

QFIAB 103 Review X Workshop Heloise (PDF)

SPARK

Heralds of Globalization: Philanthropic foundations’ Fellows. Their history and politics (20th-21st century), International conference, Geneva, 23-25 November 2023

DAY 2 (Friday 24 November)

9:30-11:00 Roundtable 3: Databases across Scales and Periods

Thomas David (University of Lausanne), Chair

Béatrice Joyeux-Prunel (University of Geneva)

Madeleine Herren (University of Basel)

Kaspar Gubler (University of Bern)

Gubler_working paper_roundtable_Geneva_conference (PDF)

Mark Towsey (University of Liverpool)

“Throughout the 20th century, the activities of philanthropic foundations had significant consequences for thousands of individuals, hundreds of institutions, and numerous governments from Asia to Africa, Europe and the Americas. Some foundations set up programs granting individual awards. Their history and politics have only drawn scholars’ sporadic attention probably because of the difficulties of tracing the lives and careers of hundreds or thousands of grantees. From 1914 to 1970 the Rockefeller Foundation, one of the most active foundations, granted about 13’600 awards to individuals from 134 countries and territories. From 2018 to 2022 the Swiss National Science Foundation funded a collective research project entitled Rockefeller Fellows as Heralds of Globalization: The Circulation of Elites, Knowledge and Practices of Modernization (1920-1970). One of its main outcomes is The Rockefeller Fellows and Awards Database, which will be presented during the conference. We intend to use it as a thread and a springboard to compare and contrast philanthropic foundations’ programs and the decision of some foundations not to set them up. This conference gathers historians of different fields and practitioners from award-granting institutions to discuss individual awards programs’ objectives, outcomes, and the impact of these programs on the lives of the recipients and on the development of their home and host institutions.”

Rockefeller Heralds Conference Draft (PDF)

Project:

https://www.unil.ch/obelis/en/home/menuinst/projects/current-projects/rockefeller-fellows-as-heralds-of-globalization.html

café_digital: Data models in the historical sciences


Following on from the symposium Data, Databases and Data Models in the Historical Sciences, which will take place in Bern on 14 November 2023 and is organised by the Swiss Historical Society (SGG), the next café_digital (Thursday, 16 November 2023) will discuss data models that are suitable for the digital implementation of historical research questions.

Among other things, a data model will be presented for which a template was developed as part of the project Database Templates for the Humanities (funding: strategic funds from the Dean of the Faculty of Humanities, University of Bern). This template can be activated for any project in the nodegoat GO installation at the University of Bern; how to do this will be explained at the café_digital. The template’s data model is particularly suitable for recording and analysing correspondence and is also freely available: it can be imported into nodegoat installations at other universities via an interface (API) and thus exchanged within the community. The import of the data model is described in this tutorial, where the data model (in JSON) is also available for download.
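As a rough, hypothetical illustration of such an exchange, the following Python sketch downloads a data model as JSON and saves it to a file for sharing. The base URL, endpoint path, project ID and token are placeholders rather than the actual nodegoat API specification; the tutorial linked above documents the real procedure.

```python
import json
import requests

# Placeholder values; the real endpoint, project ID and token depend on
# your nodegoat installation (see the tutorial linked above).
BASE_URL = "https://nodegoat.example-university.ch/api"
PROJECT_ID = 123
TOKEN = "YOUR-API-TOKEN"

# Download the data model (JSON) from the source installation.
response = requests.get(
    f"{BASE_URL}/project/{PROJECT_ID}/model",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
model = response.json()

# Save the model so it can be shared and imported elsewhere.
with open("correspondence_model.json", "w", encoding="utf-8") as f:
    json.dump(model, f, ensure_ascii=False, indent=2)

print(f"Saved data model with {len(model)} top-level entries.")
```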

Location: University of Bern, Mittelstrasse 43, Room 220, Time: 12.15 – 1 pm. Participation in café_digital is also possible via Zoom.

cafédigital_16112023 (PDF)

Transkribus combined with nodegoat: a versatile tool for data analysis


When used together, Transkribus and nodegoat become a versatile tool for data analysis and visualisation. The texts are first imported from Transkribus directly into nodegoat via an interface. The texts are then automatically matched against vocabularies in nodegoat (pattern matching). The vocabularies can be freely defined and can include, for example, places, institutions or people. As the matches in the texts are saved automatically, visualisations on maps, in networks and on time series, as well as further analyses, are possible immediately after matching.

The combined application thus makes it possible to obtain an overview of the structures and content of large volumes of text within a short space of time (keyword: distant reading). By comparing user-defined vocabularies with the texts as part of an analysis in nodegoat, research insights can be gained or new questions generated. The methodology is equally suitable for study, research and teaching. A project on Czech ‘underground journals’, which were searched for terms and geographical entities using this technique, can be found here:

https://nodegoat.net/usecase.p/372.m/53/czechoslovak-underground-journals
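To make the matching step more concrete, here is a minimal, self-contained Python sketch of the underlying idea: user-defined vocabularies are matched against a text and every hit is recorded with its position. The vocabularies and the sample sentence are invented, and nodegoat’s own implementation is of course more sophisticated.

```python
import re

# Invented example vocabularies; in nodegoat these are user-defined and
# can contain places, institutions or people.
vocabularies = {
    "places": ["Prague", "Brno", "Bratislava"],
    "persons": ["Vaclav Havel"],
}

# Invented sample text standing in for a transcription from Transkribus.
text = (
    "The journal was distributed from Prague and later also from Brno; "
    "one issue contains an essay attributed to Vaclav Havel."
)

matches = []
for vocabulary, terms in vocabularies.items():
    for term in terms:
        # \b keeps hits on word boundaries; IGNORECASE catches variant casing.
        for hit in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            matches.append((vocabulary, term, hit.start(), hit.end()))

for vocabulary, term, start, end in sorted(matches, key=lambda m: m[2]):
    print(f"{vocabulary:8} {term!r} at {start}-{end}")
```

Because every hit is stored together with its source position, results like these can feed directly into maps, networks or time series.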

The tutorials show how to import texts from Transkribus into nodegoat and how to configure the automatic reconciliation of vocabularies. The tutorials are aimed at advanced users:

Tutorial: Import Texts and Images from Transkribus into nodegoat using the API

Tutorial: Reconcile the texts in nodegoat with vocabularies


Digital working tools in medieval studies: current state and research perspectives

Lecture by Dr Kaspar Gubler as part of the BMZ lecture series.

Livestream:

https://tobira.unibe.ch/!v/NGD0gfBX8kB

BMZ lecture series, autumn semester 2023
Thursday, 9 November, 17:15-18:45
University of Bern, main building | lecture hall 220

https://www.bmz.unibe.ch/

HS23_BMZ Programm HS 2023

nodegoat as a research infrastructure @ the DARIAH-CH Study Days (October 20, 2023)

2023_DARIAH-CH_StudyDay_nodegoat (PDF)

13:00-14:00: Pitch of the nodegoat infrastructure at the University of Bern by Sebastian Borkowski, Kaspar Gubler and Sophia Marxer

14:15-15:15: Consultation on the nodegoat research infrastructure

Journée d’étude d’Héloïse à Pécs – New digital approaches to university history

Kaspar Gubler (University of Bern): Hungarian Students in the Repertorium Academicum Germanicum (1372-1526): Data analysis in the context of a digital prosopography.

Fig.: Places of all 62,000 scholars in the Repertorium Academicum Germanicum (RAG) 1250-1550. Red: students from the Kingdom of Hungary, 1372-1526

Heloise Workshop Poster and Program_31_08_2023 (PDF)

Abstracts_Héloise_Pécs 2023 (PDF)

New: Café digital @ Faculty of Humanities, University of Bern

The next Café digital will take place on 21 September 2023, 12.15-13.00; the room will be announced. Sign up for the mailing list with joel.zschoge@unibe.ch to stay up to date.

The Café digital is an exchange format around questions of digitisation. It was launched as part of the project “nodegoat templates” (Kaspar Gubler & Joel Zschoge) with the support of the Digital Humanities at the University of Bern and of CoDaLab (archaeologists at the University of Bern).

At the Café digital we meet once a month to talk, in a relaxed atmosphere, about aspects of digitisation in study, research and teaching. The Café is planned as an in-person meeting; if there is interest in online participation, we will offer that option as well. The kick-off event will take place on 29 June 2023 at the University of Bern (Tobler). Further dates will be announced on this channel or via the mailing list. We look forward to many participants!

Joel Zschoge manages the Café’s mailing list. Send him an e-mail if you would like to be informed about the meetings:

joel.zschoge@unibe.ch

Flyer_Café_digital_29_Juni_2023 (PDF)

nodegoat templates project @ Institutskonferenz (Historisches Institut @ Uni Bern)

We (Kaspar Gubler, Joel Zschoge) were able to present our project (supported with strategic funds from the Dean of the Faculty of Humanities at the University of Bern) at the Institute Conference of the Institute of History (University of Bern). We chose the Pecha Kucha format (20 slides of 20 seconds each) for the presentation. Joel described how, as a history student, he initially knew little about digital research, but how nodegoat and other tools made it easier for him to take his first steps in digital history. The project ‘nodegoat templates’, which will provide database templates for the nodegoat research environment, is intended to give students (but also research and teaching) additional low-threshold access to digital research.

Institutskonferenz_Historisches Institut Uni Bern (PDF)

nodegoat templates @ Bern Data Science Day 2023

The project will be presented at the Bern Data Science Day (5 May 2023).

UniS – Schanzeneckstrasse 1, 3012 Bern, room A019, starting at 10.15 a.m.

“The Bern Data Science Initiative (BeDSI) invites to the 3rd Bern Data Science Day. It brings together data scientists from the University of Bern, the Bern University Hospital, sitem-insel and the Psychiatric University Clinic (UPD) for a unique conference on applied Data Science (DS). It gathers scientists from different domains to network and exchange ideas on emerging trends and research results in data science.”

BDSD2023_flyer (PDF)

nodegoat templates bern data science day 2023 (Poster, PDF)

https://www.dsl.unibe.ch/lab/bdsd/

Database templates in nodegoat for study, research and teaching

At the Research Day of the Faculty of Humanities of the University of Bern (27 March 2023), Kaspar Gubler and Joel Zschoge will present the project on database templates for the research environment nodegoat.

Database models can now be defined as templates in the nodegoat GO installation of the University of Bern (by the installation’s admins). Such templates essentially consist of database objects with their associated classifications. Templates can be configured either to provide only the structure of the data model or to additionally contain a set of selected research data. The latter is useful for teaching, as students can activate such a template within minutes and thus gain low-threshold access to working with research data, including data visualisations and network analysis. If interested, students can eventually create their own individual data model in nodegoat.

A further advantage is that any reference models, such as CIDOC CRM or Dublin Core, can be integrated into the data model in nodegoat. Research data can thus be prepared from the start of a project for linking with other projects as well as for long-term archiving.

In addition, nodegoat makes it possible to exchange entire data models (optionally including data) with other users at other educational institutions and in other countries (via nodegoat’s JSON interface).

Forschungstag_2023_Programm_final (PDF)

Hector. Heritage, Culture, Norms – research platform @ Jagiellonian University in Kraków

I am looking forward to the first meeting of the members of the interdisciplinary research platform ‘Hector’ of Krakow University (Thursday, 16 March 2023).

“HECTOR: Heritage, Culture, Norms” is an innovative, interdisciplinary, international research platform project that is being sponsored by the Jagiellonian University in the framework of the Initiative of Excellence – Research University programme.

The “Hector” project aims to provide a proper venue for academic discussion on a wide range of topics concerning legal heritage: from the transformation of fundamental / universal values into norms, through the consolidation of the latter into artifacts, to their presence in the present and their role in the future.

Further information on HECTOR:

HECTOR – Legal Heritage Lab – Wydział Prawa i Administracji (uj.edu.pl)

https://lhlab.wpia.uj.edu.pl/collaborators


Data analysis for the digital humanities: projects, methods, insights

Workshop, 6-8 February 2023 (Monday-Wednesday) at CAIDAS (Center for Artificial Intelligence and Data Science) of the Julius-Maximilians-Universität Würzburg.

Monday, 6 February, 11 a.m.: Kaspar Gubler (University of Bern): Repertorium Academicum (REPAC): Digitale Rekonstruktion akademischer Wissens- und Kommunikationsräume im vormodernen Europa (digital reconstruction of academic spaces of knowledge and communication in pre-modern Europe)

“The workshop brings together 20 researchers from the social sciences and humanities (history, theology, literature, culture) and from the computer and complexity sciences (data analysis, machine learning, network analysis) to discuss the challenges facing the digital humanities. Three different perspectives will be addressed.”

https://www.sg.ethz.ch/events/digital-humanities/

CAIDAS-Workshop-Final (PDF)

L’université de Dole et les fondations princières en Europe au XVe siècle.

International conference (Besançon, 22-23 June 2023)

Provisional programme: Dole et les universités princières-Programme provisoire (PDF)

Thursday, 22 June 2023, 2:00-3:30 pm, Session 2: Dole in the networks of the peregrinatio academica

Kaspar Gubler (University of Bern): La création d’un espace d’innovation juridique: démarche et effets à l’exemple de l’Université de Dôle 1423-1525 (the creation of a space of legal innovation: approach and effects, using the example of the University of Dole, 1423-1525).

Illustration: Places of origin of the students of the University of Dole from 1498 onwards (provisional results, source: KG)

https://mediacenter.univ-fcomte.fr/videos/kaspar-gubler/

Thank you for the kind words in 2022…

Anonymous goat. Spends the summer at Lago Mognola, on an alp above Fusio (Ticino).


“I would like to thank you for the great nodegoat day and the helpful information on your blog.”

“I would like to take this opportunity to thank you once again for the nodegoat day 2022 as a whole. I found the projects presented very impressive; they showed very different ways in which nodegoat can be applied.”

“Many thanks for the helpful tips and links. nodegoat really seems to be the tool I was looking for. The information on your blog also helped me a great deal in structuring my new project.”

Book presentation on the occasion of the 800th anniversary of the University of Padova (1222-2022)

It is an honour for me to be able to contribute to this presentation. Among other things, I will show the numerous possibilities for data networking offered by the research database created for the anniversary, containing students and scholars at the University of Padova: https://www.mobilityandhumanities.it/2020/06/18/bo-2022-project (database project coordinated by Pierluigi Terenzi, Dennj Solera, Giulia Zornetta, Andrea Martini)

Conference | Stranieri. Itinerari di vita studentesca tra XIII e XVIII secolo (Foreigners: itineraries of student life between the 13th and 18th centuries)

Archivio Antico, Palazzo del Bo. Via VIII Febbraio, 2 – Padova

06.12.2022

“As part of the events celebrating the 800th anniversary of the University of Padova, the volume edited by Maria Cristina La Rocca and Giulia Zornetta, Stranieri. Itinerari di vita studentesca tra XIII e XVIII secolo (Donzelli – Padova University Press, 2022), will be presented to the public on 6 December at Palazzo del Bo. It is part of the series Patavina libertas. Una storia europea dell’Università di Padova; the series consists of accessible scholarly volumes, the fruit of solid archival research by young historians of the university, which reread the Paduan experience in a European key, between the spaces and forms of libertas and its role in the development of humanistic and scientific knowledge.

The volume looks at the University of Padova as a meeting point for men of different geographical origins, in relation to the city of Padova in which it is located. Its protagonists are the graduates of the University of Padova from its origins up to the 18th century. The construction of the Bo2022 database, which so far comprises around 46,000 names, makes it possible to observe the student community as a large group with a substantial component from the Germanic Empire, France, Poland and Greece, but also from southern Italy. The three stages of the volume are those that structure mobility: departing, staying, returning and remembering. For each of these, the migration itineraries have been identified, and the students’ stay in Padova has been brought to life, showing the moments, including those of tension and conflict, through which their identity was transformed: from ‘foreigners’ they became a special part of the inhabitants of Padova. Then, having finished their studies and returned to their places of origin, they sometimes went on to hold important public offices. These men returned home very different from how they had left, and they kept the memory of their years in Padova as a fundamental stage in their lives. The volume aims to show the role of university mobility in advancing scientific knowledge and the dialogue between ‘foreigners’.

The presentation event, moderated by the director and writer Giacomo Battiato, features Kaspar Gubler (University of Bern, CH), Brigitte Marin (director, École Française de Rome), Pavlina Rychterovà (University of Vienna, Austrian Academy of Sciences) and, via video link, Roberto Delle Donne (University of Naples Federico II). The contributions will be given in the speakers’ own languages. Institutional greetings will be brought by the Vice-Rector for International Relations, Cristina Basso.

Participation upon registration.

The event will also be streamed live on YouTube.”

https://ilbolive.unipd.it/sites/default/files/2022-11/2022.12.06_800.pdf

https://www.dissgea.unipd.it/conferenza-stranieri-itinerari-di-vita-studentesca-tra-xiii-e-xviii-secolo

Forschungsdaten vernetzen, harmonisieren und auswerten

Kaspar Gubler: Forschungsdaten vernetzen, harmonisieren und auswerten: Methodik und Umsetzung am Beispiel einer prosopographischen Datenbank mit rund 200.000 Studenten europäischer Universitäten (1200–1800), in: Oberdorf, Andreas (Hrsg.): Digital Turn und Historische Bildungsforschung. Bestandesaufnahme und Forschungsperspektiven, Bad Heilbrunn, 2022, S. 127-147.

https://library.oapen.org/handle/20.500.12657/57392

Conference: Student migration, scholarly networks and book culture, Prague, 14-15 June 2022

Kaspar Gubler (Bern): Gelehrtennetzwerke der Universität Basel im Repertorium Academicum Germanicum (RAG) 1460–1550 / Scholarly Networks of the University of Basel in the Repertorium Academicum Germanicum (RAG) 1460–1550.

Programme_Student migration scholarly networks and book culture_Prague, 14-16 June 2022 (PDF)

Nodegoat Show & Tell by DH@UniBern

University of Bern | Walter Benjamin Kolleg | Digital Humanities

Nodegoat Show & Tell | 20 May 2022 | 11:00–16:30 | Uni Mittelstrasse, lecture room 124

Nodegoat Show & Tell_Programm (PDF)

11:00–11:05
Welcome
Prof. Dr. Tobias Hodel, Digital Humanities

11:05–11:25
Network analysis as an opportunity in archaeology: a practical example from a ceramics find corpus
Sophia Marxer, MA, Institute of Archaeological Sciences

The excavations (2012–2021) at the settlement mound of Sirkeli Höyük in Cilicia yielded numerous finds, among which ceramics are the most common category. From layers of the cultural phase LCI 1 (330–50 BC), 7,842 ceramic fragments have come to light so far, 1,444 of which have been examined in more detail. The aim is now to use digital networks in the programme Nodegoat to visualise how the various forms, wares and manufacturing techniques of these LCI 1-period ceramic fragments relate to one another.
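As a minimal illustration of the idea behind such a network, the following Python sketch links fragments that share a ware or a form; the fragments and attributes are invented, and it uses the networkx library rather than Nodegoat’s built-in visualisations.

```python
import networkx as nx

# Invented fragments with two attributes each; real records would carry
# many more properties (fabric, decoration, stratigraphy, ...).
fragments = {
    "frag_001": {"form": "bowl", "ware": "red slip"},
    "frag_002": {"form": "bowl", "ware": "plain"},
    "frag_003": {"form": "jug", "ware": "red slip"},
    "frag_004": {"form": "jug", "ware": "plain"},
}

# Link two fragments whenever they share an attribute value.
G = nx.Graph()
G.add_nodes_from(fragments)
ids = list(fragments)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        shared = [k for k in ("form", "ware") if fragments[a][k] == fragments[b][k]]
        if shared:
            G.add_edge(a, b, shared=shared)

# Degree centrality hints at which fragments sit in the densest clusters.
for node, centrality in nx.degree_centrality(G).items():
    print(node, round(centrality, 2))
```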

11:25–11:45
Sex work in ancient Mesopotamia: the history of research
Oliver Rindlisbacher, BA, Institute of Archaeological Sciences

In Assyriology, topics such as female sexuality and sex work in ancient Mesopotamia are often still burdened with orientalist prejudices, which makes, for example, the precise translation of certain Sumerian/Akkadian terms in this field almost impossible. In my master’s thesis I therefore examine not only the original cuneiform sources but also devote a large part of the thesis to the associated history of research, in order to shed more light on the orientalist background of many ideas about “prostitution in the Ancient Near East” that persist today. Nodegoat helps me with this task by enabling me to visualise scholarly citations, and thus the relationships of influence between researchers, their works and the original sources discussed, as a network.

11:45–12:00
Coffee break

12:00–12:20 Bronze and Iron Age settlement geography and transport networks in Cilicia Pedias, Sebastian Borkowski, MA, Digital Humanities | Institute of Archaeological Sciences

With the transition from the Bronze to the Iron Age in the 12th century BC, the settlement geography of Cilicia Pedias, today’s Çukurova region (Turkey), changed decisively. On the basis of the open-access data collection of archaeological sites in Cilicia by Dr. Susanne Rutishauser (Institute of Archaeological Sciences) and a transport network model, the changing importance of Bronze and Iron Age settlements is examined by means of network analysis in Nodegoat.

12:20–12:40
Dynamic data import in Nodegoat
Dr. Kaspar Gubler, Institute of History

The talk presents the case study (methodology and procedure) of a meta-database that was compiled in Nodegoat by means of dynamic data import from various data sources. The database contains around 200,000 students of European universities for the period 1250–1800.

12:40–13:00 TAVO maps visualised: 1991 vs. 2022
Silvana Hunger, MA, Institute of Archaeological Sciences

In 1991, two TAVO (Tübinger Atlas des Vorderen Orients) maps were published on “Asia Minor from the 12th to the 6th century BC”, showing a distribution map of the kind that was common in the 1990s. With the help of Nodegoat, new forms of presentation and new insights can be generated from the data collected at that time.

13:00–14:00

Meet & Eat: pizza buffet at Uni Mittelstrasse

14:00–16:30

Workshop: Data visualisation in Nodegoat, Sebastian Borkowski, MA, Digital Humanities

In this hands-on workshop*, participants will get to know the options for designing visualisations in Nodegoat and try them out on a sample dataset. The best visualisation will win a prize. Those interested are asked to register with sebastian.g.borkowski@unibe.ch by 17 May. A Nodegoat account is required for active participation. Students and staff of the University of Bern can apply for an account via the application form on the DH website: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html. External participants are asked to request an account on the Nodegoat website of LAB1100: https://nodegoat.net/requestaccount (activation can take up to 48 hours).

Prague Talks on Digital Humanities

Kaspar Gubler (University of Bern, Switzerland):
Digital Humanities as Data Science: Potentials and Limits

Digital humanities is an umbrella term for disciplines in the humanities that focus on research with digital resources and tools. As diverse as the disciplines are, so are their methods and approaches to digital research: for example, machine learning, network analysis and data visualisation, with which new patterns, connections and developments can be recognised in research data. In these diverse research processes, as the presentation will show, data science can support the digital humanities by focusing on the quality, consistency and informational value of research data, thereby also reflecting on the potentials and limits of digital research in the humanities.

To join this lecture, please send an email to: cassi@flu.cas.cz

https://www.flu.cas.cz/cz/akce-filosofickeho-ustavu-av-cr/27-prednasky/3711-digitizing-the-past-prague-talks-on-digital-humanities


New publication on the SNSF Spark Project on ‘dynamic data ingestion’

Kaspar Gubler, Pim van Bree, Geert Kessels: Server-side Data Harmonization through Dynamic Data Ingestion. A Centralized Approach to Link Data in Historical Research, in: Fonti per la storia delle popolazioni accademiche in Europa. Sources for the History of European Academic Communities. X Atelier Héloïse, a cura di Gian Paolo Brizzi, Carla Frova, Ferdinando Treggiari, Bologna 2022, pp. 9-14.

Dynamic data ingestion: Gubler / van Bree / Kessels (Only the table of contents is available)

Analysing historical sources digitally – course, spring semester 2022, University of Bern (History, Digital Humanities)

Under the title “(K)ein Buch mit sieben Siegeln: Verwaltung im Spätmittelalter” (“(not) a book with seven seals: administration in the late Middle Ages”), a course for students (and other interested parties) will take place at the Institute of History of the University of Bern in the spring semester of 2022.

Course dates: the course will be held as a one-week block course from 20 to 24 June 2022 (Monday to Friday).

Course content: an overview of methods and tools for the digital recording, processing and analysis of historical sources. Among other things, the tools nodegoat, Transkribus and Voyant will be introduced, with which the participants will produce their first analyses.

Course objective: participants can independently digitise and analyse historical sources (including with methods of network analysis and by means of data visualisation).

Course instructors: Kaspar Gubler (historian) and Christa Schneider (linguist)


Registration (students of the University of Bern):
https://ilias.unibe.ch/goto_ilias3_unibe_crs_2329795.html

Registration (external participants): e-mail to the course instructors.


For questions about the course, please contact Christa Schneider and Kaspar Gubler:
christa.schneider@unibe.ch
kaspar.gubler@unibe.ch
Course materials, compiled by Kaspar Gubler:

Von Daten zu Informationen und Wissen. Zum Stand der Datenbank des Repertorium Academicum Germanicum

Kaspar Gubler: Von Daten zu Informationen und Wissen. Zum Stand der Datenbank des Repertorium Academicum Germanicum, in: Kaspar Gubler, Christian Hesse, Rainer C. Schwinges (Hrsg.): Person und Wissen. Bilanz und Perspektiven, Zürich 2022 (RAG Forschungen 4), S. 19-47.

https://vdf.ch/index.php?route=product/product/download&eoa_id=9155&product_id=2297

https://vdf.ch/person-und-wissen-e-book.html

The coffee break as a driver of science: Nodegoat @ Uni Bern (2017-2021)

The coffee break was sometimes referred to as the ‘driver of science’ during the COVID-19 pandemic, as the isolation of the home office made people aware of how important break-related social interactions in the corridors, on the stairs and in the canteen are for scientific exchange. In fact, the success story of the virtual research environment Nodegoat at the University of Bern can also be traced back to a coffee break. It took place in 2017 in the research pool of the Institute of History.

The starting point was a database migration that I had to manage that year for the digital research project Repertorium Academicum Germanicum (RAG), a major project being worked on by research groups at the universities of Bern and Giessen (D). The project needed new software for data entry and data visualisation, as the previous system was getting on in years and could no longer be brought up to date at reasonable cost. After some research into potential solutions, it turned out that no software met the complex requirements, and developing our own software was out of the question due to time constraints. Instead of doing more research, I preferred to let things rest for a moment and reconsider the overall situation over a cup of coffee. Nothing was more obvious than to take the opportunity to call on my colleague, who was working on her dissertation for another project in the neighbouring office, to share some thoughts about God and the world, about software and database migrations.

After I had briefly explained the complicated situation, my colleague (a historian) immediately asked whether we had also considered Nodegoat. She had attended a Nodegoat workshop in Düsseldorf given by the developers of Nodegoat (LAB1100) to gain insights into network analysis, which she was considering for her dissertation. To my surprise, I had never heard of this software. Nodegoat, she continued, specialises in managing and visualising research data, with both functions integrated into the same software. Amazed and slightly electrified, I immediately set about testing Nodegoat in detail. LAB1100 provided a test environment. Surprisingly, Nodegoat had exactly the range of functions we needed for the RAG.

Now it only remained to convince the top level of the project management of Nodegoat, which was sceptical at first. A workshop with LAB1100 in Bern finally brought the breakthrough: a new data model was created and the data migration analysed in detail. The starting signal was given, and after three months the previously used database was history. On 1 January 2018 the time had come: Nodegoat was put into operation by the teams in Bern and Giessen. This was followed by training and a series of data cleansing sessions, which progressed quickly thanks to the new interface, which displays the data clearly and makes it correspondingly easy to detect irregularities.

The collaboration with LAB1100 proved to be a stroke of luck. In close coordination with the developers, the RAG was able to initiate various software modules. It started with an extension of the Nodegoat interface (API) to speed up the data migration; after the migration came the complex module for recording and visualising approximate temporal information, as well as various modules for data visualisation and data export. The first project year of the RAG with Nodegoat was satisfactory for the teams, and the training effort for the new working environment remained within narrow limits.

Data could now be entered much faster and more fluently than in the old system (MS Access as frontend, MS SQL Server as backend). For the first time, visualisations allow a complete overview of the immense data stock (60,000 persons, about one million pieces of information on curricula vitae, careers and institutions) that had been collected by hand over the years. Here you can see, for example, the scholars’ areas of origin by university in the Old Empire 1250-1550, whereby the University of Krakow was also taken into account due to its outstanding importance in education (source: rag-online.org).

When, in 2019, the University of Bern promised the faculties a budget for corresponding projects as part of its digitisation strategy, it was obvious to me to bring Nodegoat into play. The reason was that Nodegoat offers a digital infrastructure for the humanities that is not project-specific. In contrast, research projects in the humanities today still develop their own software, specifically adapted to the project, which therefore only functions as an isolated solution and cannot be used by other projects. Nodegoat, on the other hand, can be used by all disciplines (not only) in the humanities, because the data model can be defined individually and thus adapted to different sources and questions.

Another plus point is the global aspect. The development of Nodegoat, which is available as open source software, is shared by various educational institutions and projects worldwide. This international collaboration for a flexible research environment is, in my opinion, the key to a sustainable digital infrastructure (not only) in the humanities. The worldwide use of the same research environment, which is moreover specialised in the visualisation of research data, also automatically promotes inter- and transdisciplinary exchange as well as research collaboration in general.

These arguments were also convincing at the Institute of History, and we were able to submit the “Application for Strategic Faculty Funds, Funding Line III, Digitisation” to the university, with the project title “Establishing a shared Virtual Research Environment (VRE)”. The funds were granted, and Nodegoat GO went live as a multi-user platform in April 2020. To facilitate the launch, I took over the Nodegoat support position and was able to advise numerous projects from a wide range of fields in the humanities (history, archaeology, German studies, English studies, music and theatre studies): from the sources to the data model and the visualisations.

In mid-2021, after Nodegoat GO had got off to a good start, I handed over the support position to a student as planned, with the intention of promoting the transfer of knowledge about Nodegoat (and digital skills in general) at all levels: support from students for students and projects. Incidentally, the University of Bern is the first university to have created a position for Nodegoat support. It is a pioneering act that has not failed to have an impact: as of November 2021, around 50 projects with around 100 users are working with Nodegoat GO at the university, alongside other, larger projects with their own Nodegoat installations. And all this because of a coffee break.

Cite this article as: Kaspar Gubler: The coffee break as a driver of science: Nodegoat @ Uni Bern (2017-2021), in HistData, 07/12/2021, https://histdata.hypotheses.org/2559


Nodegoat as a tool for digital editions: lecture by Kaspar Gubler


Lecture series at the University of Bern: Insights into the Digital Humanities → Focus Editions

6 December 2021


Kaspar Gubler: Nodegoat as a tool for digital editions

“The presentation introduces the functions of the virtual research environment Nodegoat for processing, analysing and editing texts. Until now, Nodegoat has been known in the digital humanities mainly for data management, network analysis and the visualisation of research data. Its text-processing tools, by contrast, are familiar only to the initiated: with Nodegoat, for example, information can be automatically extracted from texts and stored (pattern matching). Texts can also be marked up manually or automatically, and it is even possible to digitally reconstruct lost texts (libraries) on the basis of cross-references. Sophisticated functions for the chronological placement of texts with vague or missing dates round off the range of functions.”


Followed by a crash course (‘Creating a digital edition with Nodegoat’). No previous knowledge is required. Participants should have access to a Nodegoat research environment, which can be requested here: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html, or, for people outside the University of Bern, here: nodegoat.net.

Poster_Ringvorlesung_HS2021_A3


Links and information on the lecture series:


An exemplary project that shows the possibilities and functions of Nodegoat for a digital edition is the Encyclopedia of Romantic Nationalism in Europe.

https://ernie.uva.nl/viewer.p/21/56/object/131-158438


For this project, the data (tagged texts) were exported from Nodegoat via interface (API) and converted into the data format XML in order to prepare the data for publication in book form. The books, which thus largely correspond to the online version of the project, are available in two volumes.

https://spinnet.eu/ernie/erniethebook


Each item in this project corresponds to an object in Nodegoat. Each object in Nodegoat is automatically assigned a unique identifier when it is created. This makes it easy to cite the articles, even with a Digital Object Identifier (DOI), as seen here.


How can we ‘tag’ texts in Nodegoat? Background: in Nodegoat we can define different types of content elements in the data model for each object (and for the categories that describe or classify the objects). If we want to tag a text, we select the element ‘Text (Tags & Layout)’ in the data model.


In the data area, this text field then looks like this, here filled with a sample text about a pilgrimage.


In this example, we are dealing with the object type ‘document’, in which we capture and tag the texts as individual objects. In the conditions of Nodegoat, we can highlight in colour certain terms that are of particular interest to us. Clicking on a coloured term takes us to the other object, for example to a person. In the tab ‘Cross-Referencing’ we see all objects listed to which such links have been created from our text. The links also make it possible to create a network of this text and to display the links in this form (using the methods of network analysis).

Die Texte in Nodegoat werden nicht in XML getaggt, sondern in HTML. Das Besondere: Im HTML-Code sehen wir hellblau die Objekte, die getaggt wurden, mit ihren Identifikationsnummern. Dank dieser Identifikationsnummern, die in der Datenbank gespeichert werden, verfügen wir durch das Tagging über eine klar definierte Struktur des Textes, die wir sogleich für Auswertungen nutzen können, etwa für die erwähnte Netzwerkanalyse, für Kartenvisualisierungen und weitere Analysen.

Kann Nodegoat auch Daten im XML-Format herstellen, zum Beispiel für eine Publikation? Ja, das wurde auch schon gemacht, siehe dazu das Beispiel der eingangs erwähnten Enzyklopädie. Die Daten, also hier die Tags mit ihren Identifikationsnummern, werden dazu via Schnittstelle aus Nodegoat im Format JSON heruntergeladen und mit einem XML-Parser in das XML-Datenformat konvertiert. Dieser Vorgang ist insofern nicht schwierig, als die Nodegoat-Daten klar strukturiert und definiert im Format JSON vorliegen.

Könnte ein XML-Editor für Nodegoat entwickelt werden, sodass man die Tags nicht in HTML, sondern gleich in XML abspeichern könnte? Ja, das ist möglich. Das Tagging mit HTML + eindeutigen Identifikationsnummern sollte allerdings nicht unterschätzt werden. Es bietet gerade im Hinblick auf die Her- und Bereitstellung von Forschungsdaten gewisse Vorteile gegenüber dem XML-Format, dessen Stärken mehr im Bereich des Publishing zu sehen sind. Doch sollen hier die Formate nicht gegeneinander ausgespielt werden; der Fokus soll auf der Forschung liegen und damit auf der Frage, welche neuen Erkenntnisse wir mit digitalen Tools gewinnen können. Dabei sollten wir nicht nur Daten sammeln, sondern diese auswerten, möglichst über verschiedene Ebenen der Kontextualisierung hinweg. Für solche Auswertungen bietet Nodegoat einen einfachen Zugang, insbesondere auch für die Lehre.

The texts in Nodegoat are not tagged in XML, but in HTML. The special feature: in the HTML code we see the tagged objects highlighted in light blue, together with their identification numbers. Thanks to these identification numbers, which are stored in the database, the tagging gives us a clearly defined structure of the text that we can immediately use for evaluations, for example for the network analysis mentioned above, for map visualisation and for other analyses.

Can Nodegoat also produce data in XML format, for example for a publication? Yes, this has already been done, see the example of the encyclopaedia mentioned at the beginning. The data, in this case the tags with their identification numbers, are downloaded from Nodegoat in JSON format via an interface (API) and converted into the XML data format with an XML parser. This process is not difficult insofar as the Nodegoat data is clearly structured and defined in JSON format.

Could an XML editor be developed for Nodegoat so that the tags could be saved directly in XML instead of HTML? Yes, that is possible. However, tagging with HTML + unique identification numbers should not be underestimated. Especially with regard to the production and provision of research data, it offers certain advantages over the XML format, whose strengths lie more in the area of publishing. However, the formats should not be played off against each other here; the focus should be on research and thus on the question of what new insights we can gain with digital tools. In doing so, we should not only collect data, but also evaluate it, if possible across different levels of contextualisation. Nodegoat offers easy access to such evaluations, especially for teaching.
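To make the JSON-to-XML step concrete, here is a minimal JavaScript sketch. The export structure used below (‘objects’, ‘text’, ‘tags’) is a simplified assumption for illustration only; the actual JSON returned by the Nodegoat API is structured differently and is documented at nodegoat.net.

Script (sketch):

// Minimal sketch: convert tagged-text objects exported as JSON into simple XML.
// The field names ('objects', 'text', 'tags') are illustrative assumptions,
// not the actual Nodegoat API schema.
const exportData = {
  objects: [
    { id: 158438, text: 'An article on Romantic Nationalism ...', tags: [{ object_id: 42, label: 'Person' }] }
  ]
};

function escapeXml(s) {
  return s.replace(/[<>&'"]/g, c => ({ '<': '&lt;', '>': '&gt;', '&': '&amp;', "'": '&apos;', '"': '&quot;' }[c]));
}

const xml = ['<?xml version="1.0" encoding="UTF-8"?>', '<articles>'];
for (const obj of exportData.objects) {
  xml.push(`  <article id="${obj.id}">`);
  xml.push(`    <text>${escapeXml(obj.text)}</text>`);
  for (const tag of obj.tags) {
    xml.push(`    <tag ref="${tag.object_id}">${escapeXml(tag.label)}</tag>`);
  }
  xml.push('  </article>');
}
xml.push('</articles>');
console.log(xml.join('\n'));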

Nodegoat verfügt über verschiedene andere Funktionen zur Bearbeitung und Auswertung von Texten und Bildern. Neben dem ‘taggen’ von Texten können etwa mit Regex (regular expression) Begriffe in den Conditions von Nodegoat definiert und sogleich im Text farblich hervorgehoben werden (braun, gelb, blau).

Nodegoat has various functions for editing and evaluating texts and images. In addition to tagging texts, terms can be defined in the Nodegoat conditions using regex (regular expression) and immediately highlighted in colour in the text (brown, yellow, blue).

Um den Begriff ‘Schiff’ in den Conditions blau einzufärben, tragen wir bei den Descriptions des Objekts Folgendes ein: (Schiff) und dann diese Formatierung: <span style="background-color: #81BEF7;">$1</span>. Wo die Angaben einzutragen sind, sehen wir in der folgenden Abbildung.

In order to colour the term ‘Schiff’ blue in the conditions, we enter the following in the object’s descriptions: (Schiff) and then this formatting: <span style="background-color: #81BEF7;">$1</span>. We can see where to enter the information in the following illustration.
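The $1 in the formatting refers to the regex capture group (Schiff); the replacement semantics are the same as in JavaScript’s String.replace(), as this small sketch shows:

Script (sketch):

// The capture group (Schiff) is re-inserted into the formatting string via $1,
// the same replacement semantics as JavaScript's String.replace().
const text = 'Das Schiff verliess den Hafen.';
const highlighted = text.replace(/(Schiff)/g, '<span style="background-color: #81BEF7;">$1</span>');
console.log(highlighted);
// -> Das <span style="background-color: #81BEF7;">Schiff</span> verliess den Hafen.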

Eine weitere nützliche Funktion für die Arbeit mit Texten ist das Modul der ‘Data reconciliation’ in Nodegoat. Damit können wir Beschreibungen zu Objekten automatisiert nach bestimmten Begriffen durchsuchen lassen, die wir zuvor in einem Vokabular definiert haben. Die Treffer werden sogleich in der Datenbank abgespeichert. So können wir auch Texte durchsuchen und die gefundenen Begriffe automatisch ‘taggen’ und wiederum abspeichern lassen, wobei der Algorithmus zur Zeit so eingestellt ist, dass er nach einem ganz spezifischen Begriff in einem Text sucht und nicht nach der Gesamtzahl dieser Begriffe.

Another useful function for working with texts is the ‘Data reconciliation’ module in Nodegoat. This allows us to automatically search descriptions of objects for certain terms that we have previously defined in a vocabulary. The hits are immediately stored in the database. In this way, we can also search texts and have the terms found automatically ‘tagged’ and saved again, whereby the algorithm is currently set to search for a very specific term in a text and not for the total number of these terms.
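The core idea of such a reconciliation run can be sketched in a few lines of JavaScript. This is a naive re-implementation for illustration only, with made-up field names, not Nodegoat’s actual module:

Script (sketch):

// Minimal sketch of the idea behind data reconciliation: search object
// descriptions for terms from a predefined vocabulary and record the hits.
// Field names ('term', 'description') are illustrative assumptions.
const vocabulary = [
  { id: 1, term: 'Schiff' },
  { id: 2, term: 'Pilger' }
];

const objects = [
  { id: 101, description: 'Der Pilger bestieg das Schiff in Venedig.' },
  { id: 102, description: 'Die Reise fuehrte ueber die Alpen.' }
];

const hits = [];
for (const obj of objects) {
  for (const entry of vocabulary) {
    // Match the specific term once per text, as described above.
    if (new RegExp(`\\b${entry.term}\\b`, 'i').test(obj.description)) {
      hits.push({ object_id: obj.id, vocabulary_id: entry.id, term: entry.term });
    }
  }
}
console.log(hits); // these matches would then be stored in the database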

Kann man Text in Nodegoat importieren? Ja. Die einfachste Art ist, die Texte per Copy/Paste in das Textfeld einzufügen. Bei vielen Texten kann man diese entweder via Interface hochladen (CSV-Format) oder man importiert Texte via Schnittstelle (API). Letzteres kann man mit Transkribus kombinieren. Dies bedeutet: Wir können unsere Texte in Transkribus, zum Beispiel im Webinterface von Transkribus lite, zuerst automatisch (mit OCR) transkribieren lassen und dann jede Seite unseres Dokuments mit der ‘Data Ingestion’-Funktion als ein Objekt in Nodegoat importieren. Anschliessend können wir die Texte in Nodegoat taggen und etwa mit Abfragen oder Visualisierungen auswerten. Wir gehen an dieser Stelle nicht weiter auf die Einzelheiten ein, da wir zu einem späteren Zeitpunkt ein Tutorial dazu erstellen werden. In der Abbildung unten sehen wir als Beispiel einige importierte Pages aus Transkribus mit ihren IDs und den Seitenzahlen. Wir können also ganze Werke aus Transkribus in Nodegoat importieren.

Can I import text into Nodegoat? Yes. The easiest way is to copy/paste the texts into the text field. For many texts, you can either upload them via the interface (CSV format) or import texts via the API. The latter can be combined with Transkribus. This means: we can first have our texts transcribed automatically (with OCR) in Transkribus, for example in the web interface of Transkribus lite, and then import each page of our document as an object into Nodegoat using the ‘Data Ingestion’ function. Afterwards, we can tag the texts in Nodegoat and evaluate them with queries and visualisations. We will not go into further detail here, as we plan to create a tutorial on this at a later date. In the figure below, we see as an example some imported pages from Transkribus with their IDs and page numbers. So we can import entire works from Transkribus into Nodegoat.
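A heavily simplified sketch of this page-wise import via API, with placeholder URLs and payload fields (the real Transkribus and Nodegoat endpoints differ; consult their API documentation):

Script (sketch):

// Hedged sketch of the page-wise import idea: fetch transcribed pages from a
// transcription service and create one object per page via an HTTP API.
// Both URLs and the payload shape are placeholders, not the real Transkribus
// or Nodegoat endpoints.
const PAGES_URL = 'https://example.org/transcription/document/123/pages'; // placeholder
const NODEGOAT_API = 'https://example.org/nodegoat/api/objects';          // placeholder

async function importPages() {
  const pages = await (await fetch(PAGES_URL)).json();
  for (const page of pages) {
    // One object per page, carrying the page number and the transcribed text.
    await fetch(NODEGOAT_API, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ page_number: page.number, text: page.text })
    });
  }
}

importPages().catch(console.error);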

Dies war nur ein erster Überblick über die Funktionen, die Nodegoat für die Arbeit mit Texten bietet. Abschliessend kann darauf verwiesen werden, dass in jeder Nodegoat-Umgebung (Domain) ein Webinterface integriert ist, mit dem die eigene digitale Edition via Web der Öffentlichkeit zugänglich gemacht werden kann. Wie ein solches Webinterface konfiguriert wird, werden wir zu einem späteren Zeitpunkt auf diesem Blog erläutern.

This was only a first overview of the functions that Nodegoat offers for working with texts. Finally, it can be pointed out that a web interface is integrated in every Nodegoat environment (domain), with which one’s own digital edition can be made accessible to the public via the web. How to configure such a web interface will be explained later on this blog.

Nodegoat Workshop – Get Linked Open Data into Nodegoat

These workshops follow a workshop series earlier this year, organised in collaboration with the University of Bern in the framework of the SNSF SPARK project ‘Dynamic Data Ingestion’ as well as two of the NEP4DISSENT Summer Schools …”

https://nodegoat.net/blog.s/56/linking-your-historical-sources-to-open-data-workshop-series-organised-by-cost-action-nep4dissent

I can highly recommend the workshop taking place on 13 and 21 September 2021. In particular, it will show how to import Linked Open Data into Nodegoat via an interface, which does not require any special programming skills, allowing you to devote your energy and brain power to the structure, content and consistency of the imported research data.

 

SPARK Workshop on Dynamic Data Ingestion

Programme Session 4 (26-05-2021)

“Introduction by Kaspar Gubler

Welcome to the fourth and final session of the SPARK Nodegoat Workshop. We are very happy about the participation and the numerous responses to the workshop. I can only recommend organising such a workshop yourself. With a video conference this is no longer a problem. For example, several projects could get together and organise a workshop, perhaps on a specific topic related to Nodegoat. A comment on Nodegoat as a research infrastructure in the humanities: Nodegoat is jointly funded by various projects and universities, a model that I believe is the solution for the long-term development of a digital infrastructure for the humanities. It is crucial that such an infrastructure can be used by different disciplines and not just by a single, specific project. If only one project can use a piece of software, it is not a true infrastructure. In contrast, as we have seen, Nodegoat can be used by different humanities disciplines. Another advantage of Nodegoat as a research infrastructure is that it does not require difficult software installations or programming skills. Thus, an infrastructure like Nodegoat allows users to focus on research. They don’t have to deal with technical things first. I think this is a big problem in the digital humanities: there is too much focus on the technical stuff and not enough on our core competence, which is to answer research questions with digital methods. In my opinion, too often we only talk about the possibilities of digital methods instead of delivering research results. It’s like constantly cleaning your glasses instead of just putting them on.”

14:00 Welcome and recap of last week’s session

14:15 Ingestion of publications from the Dutch Royal Library SPARQL endpoint

14:50 Break

15:00 Ingestion of SameAs references from lobid.org

15:15 Ingestion of Wikimedia Commons URLs from Wikidata

15:50 Break

16:00 Ingestion (TBD)

16:35 Q&A

Slides:

 

Linked Data Resources Suggestions

Linked Data Resources

Name: Query the KB SPARQL endpoint based on VIAF ID
Protocol: SPARQL
URL: http://data.bibliotheken.nl/sparql?default-graph-uri=&query=
URL Options: &format=json&timeout=0&debug=on
Query:

SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)
WHERE {
  ?pub schema:author ?person.
  [query=viaf]?person schema:sameAs <http://viaf.org/viaf/[variable=id]71399367[/variable]>.[/query]
  ?pub schema:name ?name.
  ?pub schema:author ?author_node.
  ?author_node schema:sameAs ?author_ids.
  ?pub schema:publication ?publication_node.
  ?publication_node schema:startDate ?date.
}
GROUP BY ?pub ?name ?date

Conversion INPUT:

http://www.wikidata.org/entity/Q123034, http://viaf.org/viaf/71399367

Script:

// Extract the VIAF identifier from the comma-separated list of URIs.
const uris = INPUT;
const arr_viaf = uris.match(/viaf\/(\w+)/i); // capture the ID after 'viaf/'
const viaf_identifier = arr_viaf[1];
OUTPUT = {'viaf_identifier': viaf_identifier};
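Applied to the Conversion INPUT above, the regular expression /viaf\/(\w+)/i matches the second URI and captures ‘71399367’, so the script returns OUTPUT = {'viaf_identifier': '71399367'}.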

Original Query:

SELECT DISTINCT ?pub ?name ?date (group_concat(?author_ids; separator=", ") AS ?author_id)
WHERE {
  ?pub schema:author ?person.
  ?person schema:sameAs <http://viaf.org/viaf/71399367>.
  ?pub schema:name ?name.
  ?pub schema:author ?author_node.
  ?author_node schema:sameAs ?author_ids.
  ?pub schema:publication ?publication_node.
  ?publication_node schema:startDate ?date.
}
GROUP BY ?pub ?name ?date

 

Name: Query the lobid.org API for ‘SameAs’
Protocol: API
URL: https://lobid.org/gnd/
URL Options: .json
Query: [query=id][variable]118637533[/variable][/query]

 

Name: Query Wikidata for Wiki Commons URLs based on Wikidata ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT (CONCAT("https://commons.wikimedia.org/wiki/Category:", STR(?commons)) AS ?commons_link)
WHERE {
  <[query=id]http://www.wikidata.org/entity/[variable=id:uri-identifier]Q60866[/variable][/query]> wdt:P373 ?commons.
}

URI Template: http://www.wikidata.org/entity/[[identifier]]
Link: Click to open Query

 

Name: DiJeSt
Protocol: SPARQL
URL: http://tdk-jbs.cs.technion.ac.il:8890/sparql?default-graph-uri=&query=
URL Options: &format=application%2Fsparql-results%2Bjson&timeout=0&debug=on&run=+Run+Query+
Query:

SELECT DISTINCT ?book ?title ?author_name
WHERE {
  ?book <http://purl.org/dc/terms/title> ?title .
  [query=name]FILTER regex(?title, "[variable]קודש[/variable]", "i")[/query]
  ?book <http://purl.org/dc/terms/creator> ?author_node .
  ?author_node <https://schema.org/name> ?author_name .
  FILTER (lang(?author_name) = 'und-hebr')
}
OFFSET [[offset]] LIMIT [[limit]]

 

Name: The Getty Thesaurus of Geographic Names
Protocol: SPARQL
URL: http://vocab.getty.edu/sparql.json?query=
URL Options: (none)
Query:

SELECT DISTINCT ?place ?label ?parents (GROUP_CONCAT(?altlabel; SEPARATOR=",") AS ?altlabels) {
  ?place skos:inScheme tgn: .
  ?place luc:term "[query=name][variable]lemberg[/variable][/query]".
  ?place gvp:prefLabelGVP [xl:literalForm ?label].
  OPTIONAL { ?place xl:altLabel [ gvp:term ?altlabel ] }
  OPTIONAL { ?place gvp:parentStringAbbrev ?parents }
}
GROUP BY ?place ?label ?parents
OFFSET [[offset]] LIMIT [[limit]]

 

Nodegoat Day 2021 @ Unibe

projects / sources / data / networks / people

From source to visualization: Data modeling and analysis with Nodegoat

Friday, 04 June, 9-17 h via Zoom

Programme

9.00 Introduction by Kaspar Gubler (Universität Bern, Historisches Institut): SNSF SPARK project ‘Dynamic Data Ingestion’ for server-side data harmonisation: Creating a database with 200k students and scholars 1200-1800: Method, concept and practical implementation

9:30 Simon Bürcky (Universität Giessen, Historisches Institut): Dynastic Networks of the Counts of Solms during the 15th Century, PhD project

10:00 Katharina Vukadin (Universität München, Institut für Kunstgeschichte): Relic Networks in the Early Modern Period: the Wittelsbach collection since 1577, PhD project as part of the ERC project SACRIMA

10:30 Giulia Iannuzzi (Università di Firenze / Università di Trieste): Plotting European sea routes in the Modern age (1500-1900): modelling, visualising, and linking data in Nodegoat, Global Sea Routes Project

11:00 Discussion / Questions / Partisan round: Opportunity to present your own Nodegoat project or project idea

12:00-13:00 Lunch break

13:00 Daniel Jaquet (Universität Bern, Historisches Institut): Mapping Swiss wars in the Middle Ages (1350-1550) as part of the Project Martial Culture in Medieval Town

13:30 Nina Janz / Sarah Maya Vercruysse / Michel R. Pauly (Université de Luxembourg, Project WARLUX): Using data analysis on recruited Luxembourgers in WWII, https://digiwarhist.hypotheses.org

14:00 Stefanie Mahrer (Universität Bern, Historisches Institut): Transnational Science. Switzerland and Forced Academic Migrants 1933 to 1950, https://forced-academic-migration.net

14:30 Nuno Camarinhas (Universidade Nova de Lisboa, Faculdade de Direito): Mapping justice administration in Portugal and the Portuguese empire (1600-1926), Project: Modern Portuguese judiciary

15:00 Milan Matthiesen (Europainstitut der Universität Basel): The Exterior of Philosophy: On the Practice of New Confucianism, https://europa.unibas.ch/de/forschung/european-global-knowledge-production/the-exterior-of-philosophy/

15:30 Pim van Bree / Geert Kessels (The Hague, LAB1100): Linked Open Data in the humanities: availability, linking and analysis with Nodegoat, https://lab1100.com

16:00 Discussion / Questions / Partisan round: Opportunity to present your own Nodegoat project or project idea

16:30 Apéro virtuel

 

 

 

CfP Nodegoat Day 2021

The Nodegoat Day 2021 will be run as an entirely virtual event via Zoom, hosted by the University of Bern (Switzerland). At Nodegoat Day 2020, only projects from the University of Bern were presented. As this had already attracted an international audience, projects from all over the world will be invited to Nodegoat Day 2021. Reports and impressions from Nodegoat Day 2020 can be found here:

https://www.infoclio.ch/de/tagungsbericht-nodegoat-day-2020

https://histdata.hypotheses.org/1937

Proposals for Project showcases (max. 300 word proposal)

Data visualisations in the Digital Humanities are booming. Through the visual representation of research data, previously unknown patterns and developments can be uncovered, leading to new insights. At the same time, data visualisation helps research gain more visibility and facilitates interdisciplinary exchange, especially when projects work with the same visualisation software. This is the case with Nodegoat, a multifunctional, virtual research environment for managing, analysing and visualising research data. Nodegoat Day will therefore bring together research projects from very different disciplines.

The aim of the conference is to show and reflect on how a digital tool like Nodegoat can be used in humanities research and/or teaching, what influence digital tools can have on formulating and answering research questions, and how they can lead us to new insights and research horizons. Projects at Nodegoat Day should present experimental, substantial or completed research, provide concrete insights into their conceptual data models and visualisations, and situate their approach within their discipline and the Digital Humanities. Contributions from young scientists are explicitly welcome, as are trans- and interdisciplinary impulses.

Special consideration should be given to the data models: the principle of data modelling in Nodegoat is object-oriented and follows actor-network theory. Persons, events, artefacts, places or historical sources are first considered as objects of a horizontal order, which form a network and create a hierarchy only through their relationships to each other. Nodegoat users can define data models, objects and relationships individually and thus realise their own project-specific data structures. Furthermore, a data model can also be adapted to existing reference models, which also improves the interoperability of research data. The design of the data model is of course crucial for the visualisations: every object in Nodegoat can be given geographical and temporal attributes in the model, which can be analysed and visualised for the research data. The potential of such data visualisation functions (maps, networks, time series) as well as the algorithmic calculations in Nodegoat will be reflected on and discussed at the conference.

Overall, there will be numerous opportunities for comparison between the individual projects, from which impulses and exchange across disciplinary boundaries can be expected, as well as networking in the Nodegoat community. The contributors will have 20 minutes for their presentation, followed by 10 minutes for questions and inputs.

In addition, two longer “partisan rounds” are scheduled for spontaneous ultra-short presentations on projects in the experimental stage, questions about how to run a Nodegoat project, functions of Nodegoat, long-term archiving of Nodegoat data, and questions about life in general, as well as critical reflections on methods and results of digital projects.

The abstracts are requested along with a brief biographical note by email no later than May 15, 2021, to Kaspar Gubler (kaspar.gubler@hist.unibe.ch). Feedback will be provided no later than the end of May 2021. The conference language is English.

Guests can register by sending an email to: larissa.achermann@hist.unibe.ch

Projects around the globe are welcome to participate in Nodegoat Day 2021. Map of Nodegoat projects worldwide (selection, December 2020):

https://nodegoat.net/usecases

 

Institutes where Nodegoat projects are running (November 2020):

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 3

12 May 2021

“Introduction by Kaspar Gubler

I would like to welcome to this session all participants who are already familiar with Nodegoat and have therefore skipped sessions 1 and 2 and will now attend sessions 3 and 4. In sessions 1 and 2 we created a data model for people and books and imported data, including geo-coordinates, into Nodegoat by uploading CSV files. Importing data into Nodegoat will also be the central topic of today’s session. We have three ways to import data into Nodegoat.

1) We can upload data into Nodegoat as a CSV file, as we did in session 2.

2) We can import data directly into Nodegoat using a graphical interface without having to upload it, we will look at this process of dynamic data ingestion today.

3) We can import data into Nodegoat via an application programming interface (API), which, unlike 1) and 2), requires programming knowledge.

Of course, we can also start a project in Nodegoat without importing data first, not even geodata. In every Nodegoat installation, the object ‘City’ is already present. In ‘City’ about 130k places are available, which have geo-coordinates as well as a GeoNames-ID. ‘City’ is a collaborative object: all projects of a Nodegoat installation can add and use ‘City’-locations and thus benefit from each other.”

Programme Session 3 (12-05-2021)

14:00 Welcome and recap of last week’s session

14:15 Create Linked Data Resource to query GND

14:30 Create Linked Data Resource to query VIAF

14:50 Break

15:00 Ingestion of VIAF IDs from Wikidata

15:50 Break

16:00 Ingestion of biographical data from Wikidata

16:35 Looking forward to next session

16:45 Q&A

  • Can the mapping then only be done per 1 object? Or can you run it on a set (like reconciliation in open refine) and get unambiguous results automatically? → Yes, both.
  • Can I add/concatenate more of the json fields to the “label”? Because just the preferred name may not be sufficient to identify which one is the correct entry …→ Yes, add/include more of the relevant Values in your response, then open a “filter” dialog if necessary (there the additional fields/values will be shown).
  • Does nodegoat also support APIs that return XML instead of JSON? -> No
  • Wikidata SPARQL Query to only get Gregorian Dates Example: https://w.wiki/3KFq ->thanks!
  • Not really related to this session, but I don’t see this referred to in any of the sessions: is there a nodegoat API from which one can draw the visualizations? Or even simpler ‘embeds’? -> Public User Interfaces support embedding.
    • Does it expose JSON representations of our objects → next session
    • And is there a SPARQL endpoint (I know I would have to specify a kind of mapping)
  • I am trying to create a LD resource from my API. The query https://data.geo-kima.org/api/Variants/PlaceVariants/8964/100/1 works outside nodegoat. How can I split this to fill the URI and query in nodegoat? (I get an error message when I enter https://data.geo-kima.org/api/ in the URL and Variants/PlaceVariants/8964/100/1 in the query.) → next session
  • Is it possible to interact with database directly using SQL? → Not by design/purpose, API should be used

Links

https://nodegoat.net/usecases

Slides:

 

Preparation

If you are unfamiliar with the benefits of adding external identifiers to your dataset, please read this guide: https://nodegoat.net/guides/externalidentifiers. This example shows how to update one object at a time. We can update multiple objects at once with the data ingestion (see below, data ingestion with Nodegoat).

Human Readable vs Machine Readable

Browse GND Data

Via Graphical User Interface (GUI):

https://d-nb.info/gnd/118637533 / https://lobid.org/gnd/118637533

Via Application Programming Interface (API): https://lobid.org/gnd/118637533.json

Query GND Data:

Via GUI: https://lobid.org/gnd/search?q=Zwingli

Via API: https://lobid.org/gnd/search?q=Zwingli&filter=type:Person&format=json
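For readers who want to try the machine-readable route programmatically, here is a small JavaScript sketch that queries the same lobid.org URL and prints the matches. The response field names used below (member, preferredName, gndIdentifier) reflect lobid’s JSON as I understand it; double-check them against the lobid API documentation:

Script (sketch):

// Query the lobid.org GND search API (same URL as above) and print
// the GND ID and preferred name of each match.
const url = 'https://lobid.org/gnd/search?q=Zwingli&filter=type:Person&format=json';

fetch(url)
  .then(res => res.json())
  .then(data => {
    for (const person of data.member ?? []) {
      console.log(person.gndIdentifier, person.preferredName);
    }
  })
  .catch(console.error);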

 

Data ingestion with Nodegoat

With the data ingestion in Nodegoat, you can enrich or update multiple objects with Linked Open Data (LOD) data from external data sources. This requires two steps. First, we configure a Linked Data Resource in Nodegoat, i.e. a query to the interface where the LOD data is available (data source). Secondly, we configure a data ingestion process, i.e. the mapping and storage of the LOD data in Nodegoat. Below are some examples of how to configure interfaces to Linked Data Resources.

Linked Data Resources

Name: Search the GND API via lobid.org
Protocol: API
URL: https://lobid.org/gnd/search?q=
URL Options: &filter=type:Person&format=json
Query: [query=name][variable]zwingli[/variable][/query]&from=[[offset]]&size=[[limit]]

 

Name: Search the VIAF API
Protocol: API
URL: http://www.viaf.org/viaf/AutoSuggest?query=
URL Options: (none)
Query: [query=name][variable]zwingli[/variable][/query]

 

Name: Query Wikidata for VIAF ID based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?viaf
WHERE {
  [query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]
  ?person wdt:P214 ?viaf .
}

Link: Click to open Query

 

Name: Query Wikidata for Religion based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?religion ?religion_label
WHERE {
  [query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]
  ?person wdt:P140 ?religion .
  ?religion rdfs:label ?religion_label .
  FILTER(LANG(?religion_label) = "en")
}

Link: Click to open Query

 

Name: Query Wikidata for Date of Birth based on GND ID
Protocol: SPARQL
URL: https://query.wikidata.org/sparql?query=
URL Options: &format=json
Query:

SELECT ?person ?date_of_birth
WHERE {
  [query=gnd]?person wdt:P227 "[variable]118637533[/variable]" .[/query]
  ?person wdt:P569 ?date_of_birth .
}

Conversion INPUT:

1484-01-10T00:00:00Z

Script:

// Convert the ISO 8601 timestamp into a day-month-year date string.
const date = new Date(INPUT);
const day = date.getDate();
const month = date.getMonth() + 1; // getMonth() is zero-based
const year = date.getFullYear();
var OUTPUT = {'date': day + '-' + month + '-' + year};
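Applied to the Conversion INPUT above, this turns ‘1484-01-10T00:00:00Z’ into the date string ‘10-1-1484’ (day-month-year). Note that getDate(), getMonth() and getFullYear() use local time, so a UTC runtime is assumed here; in a timezone west of UTC the day would shift to 9.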

Link Click to open Query

Click to open Query with reference statement

 

To start the ingestion process, we activate it for our project in ‘Management’. Then we define the process, i.e. the mapping and the storage of the LOD data, in the ‘Data’ section, as in the picture. We can add new objects (or values of objects = ‘object descriptions’ in Nodegoat), add them if they do not already exist, or update existing objects, for example the object ‘Person’.

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 2

05 May 2021

“Introduction by Kaspar Gubler

We are very pleased to have so many interesting projects and engaged participants in the workshop. And more participants have joined for session 2. For example, someone from Hamburg who used to do network analysis with the software Gephi and now wants to try out Nodegoat. A new participant is an archaeologist from the University of Bern who has documented sites in Excel and wants to import and visualise them in Nodegoat. Such a data import is not difficult, especially if the data has been entered consistently in Excel. Another new participant plans to visualise cultural heritage with Nodegoat. A good example of how Nodegoat can be used for the presentation of digital cultural heritage (and thus also for art history) is the encyclopaedia on Romantic Nationalism: https://ernie.uva.nl/viewer.p/21/52/types/all/grid

Terminology

Before we start, I would like to remind you of the terminology of Nodegoat, in which we speak of Objects and Sub-Objects as well as Categories. We describe these Objects (like rows in Excel) in Nodegoat with Object descriptions (like columns in Excel). Object descriptions can be a text, a link, a picture or a link to another Object or a Category (= reference = relation). We can define in our data model the kind of description for each Object description. This gives us the possibility to describe an Object very precisely:

Find a common language

Important: if you want to communicate with another Nodegoat project it is very helpful if you use the terminology mentioned. So the first questions to another project would be: what Objects do you have? And how do you describe your Objects with what kind of Object descriptions? In which Sub-Object do you store your geo references? If you want to get in touch with other projects, you can organise your own zoom meetings on specific questions about Nodegoat. I see many projects that have a lot in common and could certainly benefit from an exchange. I would also like to draw your attention to the Nodegoat Day on 4 June, where you can present your project or your project idea.”

14:00 Welcome and recap of last week’s session

14:15 Object Type ‘Place’ Data Model + Data Entry

14:30 Object Type ‘Place’ Data Import

14:50 Break

15:00 Object Type ‘Person’ Data Import

15:30 Filter + Visualisation

15:50 Break

16:00 Scope & Visual Settings

16:15 Conditions & Export

16:35 Looking forward to next session

16:45 Q&A

  • Difference between Gephi and Nodegoat? → Nodegoat departs from data management + visualization functionality
  • Will there be a possibility to store Nodegoat Data in a data repository like Zenodo? → There are rumors about a Zenodo-Module in Nodegoat coming, currently it’s technically no problem to do it manually
  • How to download a “dump”?  → Via API, export dump of the data + of the model in JSON
  • Is it possible to export a complete project (instead of individual csv sheets)?  → Yes, via API you can export all of the data and the data model in JSON
  • How can I “undo” an import from CSV when I notice that some things did not work as intended? Can I mass delete objects? → Yes, you can mass delete objects via the graphical interface, by selecting all objects and deleting them with the grey multi button, or by emptying a whole Object Type in ‘Model’ (click ‘empty’), or by mass deleting objects via the API
  • Can you import by just giving the URL of the Google Doc? → Yes, via API of Nodegoat, check what Google allows you to do via API
  • Can visualisations be downloaded in any way to the desktop? → Yes, Screenshot, or for high resolution use the ‘Capture’ functionality in the visualisation settings
  • Follow-up question to session 1: can you create an itinerary of a person (object) with just knowing the sequence of the locations but not the dates? → Yes, by storing vague dates in Nodegoat: you make a statement in vague dates (‘Chronology’) like ‘Studies came after Birth’. Or use as date: 1, 2, 3 etc., or use the sequence identifier in a nodegoat date, so if you know a year use ‘1880 1’, ‘1818 2’, ‘1818 3’
  • We can include both a geometry (polygon) AND a precise coordinate in a sub-object? Or as separate sub-objects of the same object? → Yes, both options are possible! One geometry can be polygon + point(s) + line(s). Or each in a separate sub-object to be able to add attributes.
  • Are there any example projects that depict more complex routes? → http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo
  • Can you add your own icons to be displayed on the map? → Yes, in SVG format.
  • Nodegoat as Tool to visualise routes or itineraries? →  Yes
  • Is there also a method to show place-specific meta-information on the map instead of the person’s? →  Yes
  • In case of data model refactoring, how should we deal with the already inserted data? For example, if one wants to normalize repetitive data creating a new object type, how can he migrate the actual data to the new data model? Export + Transform + Import is the only way?  →  Yes, but because you now have nodegoat IDs, it’s a matter of a straightforward mapping. Or use an Ingestion process (session 4).
  • is it possible to mark a node with multiple conditions (e.g. one condition for people born in the low countries (orange) + people died in Italy (blue), so objects that fall in both categories marked in two colours)? →  Yes

Links

http://mnn.nodegoat.net/viewer.p/1/47/scenario/30/geo

Slides:

Download Google Sheets as CSV files:

RAG Places small selection: https://docs.google.com/spreadsheets/d/1zvcVj66nr1tm7PAmNJSSf2BI_o5e2rrPCSE2l4PAsHQ/
RAG People small selection: https://docs.google.com/spreadsheets/d/1K2SGF0TkQTVnZ5WQqgMc0MbJdGps1kA_oWVL3Qir6rs/

Guides: https://nodegoat.net/guides/csvfile and https://nodegoat.net/guides/gazetteer

Another sample data import:
https://histdata.hypotheses.org/nodegoat-tutorials
Tutorial No 10 shows how to create this map (positions of ships):

SNSF SPARK Workshop on Dynamic Data Ingestion

Session 1

28 April 2021

Introduction by Kaspar Gubler

“Welcome to the Nodegoat SPARK workshop ‘Dynamic Data Ingestion’. We are very happy about the 140 participants from all over the world. On the map, which was of course created with Nodegoat, we can see the places of origin of the participants. They come from very different disciplines: history, literary history, German studies, English studies, legal history, historical geography, art history, musicology, theatre studies, film studies, African studies, Islamic studies, sociology, digital humanities and also from archives and libraries. This impressively shows how Nodegoat has established itself in the last ten years as an international research infrastructure for the humanities, an interdisciplinary research infrastructure that helps digital research to gain new insights and more visibility – and facilitates the collaboration of projects, especially beyond one’s own subject boundaries. Pim van Bree and Geert Kessels began developing Nodegoat as part of a project at the University of Amsterdam in 2011. Pim van Bree has a Master’s degree in Media Studies, Geert Kessels a Master’s degree in History. Both are also accomplished software developers. Their particular strength is that they know both worlds, the world of humanities and the world of programming. They combine these two worlds in their workshops, which they conduct at educational institutions worldwide. With their deep knowledge of methods, sources and questions in the humanities, they can create fitting and working data models for the different disciplines in Nodegoat to extract new scientific information and knowledge from the data.”

Programme

14:00 Welcome by Kaspar Gubler

14:15 General introduction to nodegoat

14:40 Login and set up your nodegoat project

14:50 Break

15:00 Object Type ‘Person’ Data Model

15:15 Object Type ‘Person’ Data Entry

15:35 Classification ‘Capacity’ Data Model + Data Entry

15:50 Break

16:00 Object Type ‘Book’ Data Model

16:20 Object Type ‘Book’ Data Entry

16:35 Looking forward to next session

16:45 Q&A

  • I don’t have access to the ‘Model’ section in nodegoat → Check the Page Clearance. Each nodegoat project has one administrator at the beginning, who can set up additional users: Management > Users > add User > add the user and activate ‘Model’ in the settings for the page clearance (tab)
  • How does the Scope work? For visualizations? → In the Scope you define which of your database fields you want to use for the visualization, so you activate the field that contains the georeference with the coordinates
  • Are there facilities (planned?) helping to prepare an RDF rendition of the database? → Yes: on one’s own nodegoat installation, you can configure a translation module to translate the data model to some RDF vocabulary
  • Is it possible to export, as a static or dynamic representation, a computed spatio-temporal / network analysis? → Yes
  • Would you say that nodegoat is in principle also suitable as an image database (with additional descriptions and cross-references)? → Yes, absolutely, see the links below.
  • Regarding custom gazetteers and prosopographies: Are there size limitations? → Not in general; the size limit of the CSV import is set to 60’000 rows at a time
  • Is it better to store a region like Germany as GeoJSON or via Reference > City > autofill option Germany? → It depends on what you want to show on your map: if you are more interested in areas, it’s GeoJSON; if you are generally working with dots on your maps, it may be better to store it as a point like your other data
  • Can we import polygon data from an existing map with territorial circumscription so that we don’t have to draw them by hand? → Yes
  • Can we specify a schema or other constraints to ensure consistency of the data (e.g. Birthdate < Deathdate, no overlapping residence periods etc.)? → You can use visualisation to do some error checking, but there can be no hard enforcement of such constraints at the moment. Or you can filter specifically on Birthdate < Deathdate, more advanced.
  • Does database harmonise the different sources of location, e.g. if I put “Roma (IT)” from “City” and also point from “Geometry” which is actually Rome, will database understand it is the same? On the map it looks the same, but how is it in database? Is it linked as the same entity? → More on this next session!
  • Is Arabic script supported or more generally, are scripts running from right to left supported? → Yes: Everything Unicode
  • Can we geo-visualize more than one object type, say author’s places and book’s publishing houses’ places? → Yes: using the Scope (https://nodegoat.net/guides/visualisationsettings)
  • Is there anything in the guides as regards Nodegoat and RDF? → Yes, it’s work in progress, see this sample nodegoat project working with a Subject-Predicate-Object (RDF Format) data model: https://www.manto-myth.org/blog/a-half-dozen-ways-to-die-mythically
  • If I uncheck ‘Fixed field’ in Object ‘book’ it throws: ‘The data Model does not have a configuration that can be used to generate Object names, please check your settings.’ Why? → Tick/check one or more of the Object Description ‘name’ (‘use object description for name’) checkboxes
  • Can nodegoat handle localised object descriptions? E.g. book reviews in different languages? → Yes
  • Can nodegoat handle uncertain dates / data? → Yes, see the following blogposts:

https://nodegoat.net/blog.s/45/how-to-store-uncertain-data-in-nodegoat-ambiguous-identities

https://nodegoat.net/blog.s/44/how-to-store-uncertain-data-in-nodegoat-conflicting-information

https://nodegoat.net/blog.s/43/how-to-store-uncertain-data-in-nodegoat-incomplete-source-material

https://nodegoat.net/blog.s/42/how-to-store-uncertain-data-in-nodegoat

Slides:

Data Ingestion Episode III – May the linked open data be with you

The linking of research data has been a dominant topic for years, especially in digital history. Linked Open Data (LOD) is the buzzword at conferences and in research projects. However, the greatest challenge here is not the collection of such data available on the internet, but its harmonisation, because research databases are usually structured differently. It is therefore not surprising that, despite many initiatives, no research project in digital history has yet managed to harmonise data across several structural levels of the databases. This means, for example, not only linking persons across databases by their names, but going deeper into the data structure to harmonise, for example, the geographical origin or attributes of a person’s education. But that would be the aim: to answer scientific questions through structural data harmonisation. This is where our SPARK project comes in. The third and final phase of the project (Episode 3) was completed in January 2021.

What are the core results of this project? In essence, a software module (the DDI module for ‘dynamic data ingestion’) and a method: data (research data) is collected from different source databases and ingested on a central server using the module according to the spider principle, creating a new metadatabase. The harmonisation of the collected data in this newly built database is done as far as possible already during data ingestion, by mapping the database fields of the source databases onto corresponding database fields of the new metadatabase. If such a mapping is not possible, or only partially possible, because the database fields of the source database and the metadatabase are too dissimilar, an algorithm can be used in a second step, as soon as the data is stored on the central server, to bring uniformity to this data by data reconciliation. In addition, the data can also be automatically reclassified in order to standardise it. These measures prepare the data for analysis and ultimately for publication, both of which can be done in the virtual research environment Nodegoat.

We will explain the procedure with the help of a case study. In this study, we collected data from related projects that research the history of universities and have joined together in a network, the Atelier Heloïse. The common interest of the projects is a prosopographically based history of universities, scholars and academic knowledge in pre-modern Europe. The four projects were chosen more or less at random; there are numerous other important database projects in the Atelier, but we had to limit ourselves to these four. Furthermore, it will be the task of the Atelier to bring all of the databases together in a joint, international project. The projects we have been working with cover the history of the universities of Bologna (http://asfe.unibo.it/it), Padova (https://www.ottocentenariouniversitadipadova.it), Paris (http://studium.univ-paris1.fr) and the universities of the Old Empire in the project Repertorium Academicum Germanicum (https://rag-online.org). The metadatabase we created from the four projects contains about 200,000 students and scholars from all over Europe in the period 1200-1800, with the projects covering different time spans.
The tools for collecting, ingesting (1), reconciling (2) and reclassifying (3) the data in Nodegoat also represent the methodological approach to data harmonisation as a prerequisite for data analysis (visualisations, network analyses) and finally for the publication of the results on the internet.

As a result of this approach, the places of origin of the students and scholars in the four projects were united on a map for the first time in research, impressively demonstrating the potential of international data networking. The map is of course only the starting point for deeper analyses. Only through a joint analysis by the four research projects, which describe the areas of origin (of the students) of their universities and their sources, can a synthesis, worked out by the projects together, lead to new insights and research questions.

But how can such a map be created? We will now take a look at this, starting with data ingestion. In order to be able to collect data, we must first make two settings in Nodegoat: the definition of the Linked Data Resource and the definition of the ingestion process. To create the definition, we use a graphical interface, which not only simplifies our work but also makes the data ingestion process transparent for all team members (project members, programmers). The graphical interface is thus a very important tool for visual communication, enabling a common understanding of the structures of the data sources. In the graphical interface, all database fields can be made visible to the project team and then assigned together to the new metadatabase. A clear mapping process, in combination with very good knowledge of the database fields (and their meaning, especially in the humanities), is the success factor not only for the ingestion, but also for database migrations in general. In principle, the graphical interface helps historians and programmers find a common language and understanding.

In the Linked Data Resource module, it must first be defined whether the resource is an API interface or a SPARQL endpoint. Then a test query is constructed, for example for a person. This requires the identifier of this person, which functions as a variable for all persons of the source database from which data is to be ingested. The mapping process follows, and the database fields of the source database are assigned to the corresponding fields in the metadatabase. By mapping as closely as possible, harmonisation of the collected data can already be achieved. However, if the structures of the source database and the metadatabase are too different, the data will still be imported and subsequently harmonised with the data matching process in the reconciliation module. Things can get complicated if, in addition, the data formats of the source database are not compatible with the metadatabase. In such a case, the data can be converted before the data import, or only certain information, rather than the entire content, can be extracted from the source database fields.

With the reconciliation module, we can then search the imported, heterogeneous data for specific terms that we have previously defined in a vocabulary. The terms found are automatically saved in the metadatabase. At the Atelier Heloïse conference, we demonstrated such a procedure using the places of origin of Parisian students as an example. The places of origin are not georeferenced in the Paris database, and only the names of the places are available in the application programming interface (API). With reconciliation we can assign geopoints to the places. To do this, we first import the places from the source database and then use the reconciliation module.
We configure this module so that the names of the places in the Paris database are compared with a reference list of places with geo-coordinates that exists in our metadatabase. The places in this reference list also have GeoNames identifiers, an internationally used geographic reference system. In the reconciliation module, we now set the algorithm to not only search for the Paris names in the reference list, but also to store the georeferenced location (which contains the coordinates) when a hit is made. In this way we can easily visualise the places of origin of the Paris database on a map and thus also check the data qualitatively in a simple way, as it is an interactive map: clicking on a point takes us directly to the locations and georeferences. Of course, the reconciliation module works for any data type, not only for geodata. For example, texts can also be searched for specific terms with the algorithm, and the hits are automatically saved.

If at this point of the ingestion process the data is still not uniform enough for an evaluation, the data can additionally be classified with the reclassification module. The principle of reclassification is that we query the data according to certain criteria and use this query to automatically classify the results. Such reclassifications are useful when a project wants to organise a lot of complex data and prepare it for data analysis. However, automatic reclassification is also very important for maintaining data consistency, as inconsistent or incorrect entries can be automatically filtered out.

Let us take an example. In our case study, the four projects use different categories and names for academic degrees, although the degrees were already classified in a remarkably uniform way in pre-modern Europe. Of course, one must take into account here that the same degree can have different qualities depending on the university. But we can also overcome this challenge with reclassification by going from the general to the specific. In the following, we look at the academic degrees of jurists. For the reclassification of jurists, we first create a query in Nodegoat and check whether the expected results appear. Then we use this query in the reclassification module and give it a term. This term is then used to classify the data found. The reclassification is not bound to certain types of data: we can classify people, places, observations, texts, time periods or anything else. In our case, we can classify all jurists of the four projects accordingly and thus quickly obtain a general overview of the areas of origin and study of these persons. Of course, we have to take into account that the projects cover different periods, and look at their definitions of jurists in detail. This is done subsequently to the overview and is part of the qualitative data evaluation, where we can further differentiate the data and, for example, reclassify scholars who had studied Roman law. For this group of people, we can then highlight the places of origin in colour and thus see the spaces of origin and communication of these jurists. If further data is available, for example information on the activities of the jurists as in the Repertorium Academicum Germanicum (RAG), we can also see where the legal knowledge was transferred with the persons, whom we regard as ‘knowledge carriers’. In this way, we can show the spread of law, and Roman law in particular, in pre-modern Europe.
Of course, this also works for other disciplines such as medicine or theology, as well as for the large number of scholars holding a degree as ‘Magister Artium’. It is then up to the researchers to classify, interpret, describe and, if necessary, correct and refine the results of the reclassification. With the procedure described, however, we are able to combine quantitative and qualitative research and thus reconstruct European knowledge spaces. With data ingestion, we can not only import and evaluate data from research projects, but of course also, in parallel, from other Linked Open Data sources to supplement or enrich the data set of our metadatabase, for example by querying the Wikidata data that is linked to a person in our metadatabase. It would go beyond the scope of this post to look at all the features of data analysis in Nodegoat, for example the modules for network analysis, which can of course also be applied to our metadatabase. In any case, the data can easily be searched in full text and/or filtered specifically with complex, combined queries. It is further possible to query the data spatially by drawing a polygon in GeoJSON and simply copying its code into the database field that contains the geoinformation. This feature enables us to reconstruct, search and analyse specific knowledge spaces. Such spaces, like other results (data sets), can be published in so-called data scenarios using an internet module that is configured in the backend of Nodegoat. A scenario is understood to be a data set with the corresponding visualisation settings.
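To make the reclassification principle tangible, here is a minimal JavaScript sketch: a query (here a simple filter) runs over the merged data and attaches a classification term to the hits. The field names are illustrative, not the actual metadatabase schema:

Script (sketch):

// Minimal sketch of the reclassification idea: run a query (here a simple
// filter) over the merged data and attach a classification term to the hits.
const persons = [
  { id: 1, degree: 'doctor legum' },
  { id: 2, degree: 'magister artium' },
  { id: 3, degree: 'doctor utriusque iuris' }
];

// "Query": degrees that indicate a jurist.
const isJurist = p => /legum|iuris/i.test(p.degree);

// Reclassify: store the new classification term with each matching object.
const reclassified = persons
  .filter(isJurist)
  .map(p => ({ ...p, classification: 'jurist' }));

console.log(reclassified); // persons 1 and 3 are now classified as jurists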

Conclusion: The SPARK project makes it possible to link databases (research data) in a simple and transparent way and to harmonise and analyse the linked data with sophisticated tools – may the data be with you!

The Swiss Confederation as a knowledge space in pre-modern Europe, lecture on 25.03.2021

Kaspar Gubler / Christian Hesse (Bern): The Swiss Confederation as a knowledge space in pre-modern Europe: new possibilities for insight through data visualisations

Lecture as part of the lecture series of the Berner Mittelalter Zentrum (Bern Center for Medieval Studies).

Thursday, 25.03.2021, 17:15-18:45.

Please register with Laura Hutter (laura.hutter@ikg.unibe.ch) if you would like to attend the lecture. The lecture will take place virtually via GoToMeeting. After registering, you will receive the GoToMeeting link.

https://www.bmz.unibe.ch/unibe/portal/microsites/micro_bmz/content/e760315/e760316/e761493/e761495/FS21_BMZFlyer_ger.pdf

Looking back to nodegoat Day 2020

Climbing on the shoulders of digital giants: from data to knowledge

On November 27, 2020, the first nodegoat Day in history took place at the Historical Institute of the University of Bern via Zoom. The University’s nodegoat projects provided insights into the implementation of their data models and their methods of data analysis, with a focus on data visualization (maps, networks, time series). Originally, the ‘nodegoat Day 2020’ was planned as a local conference of the University of Bern, but more and more an international audience showed interest: via Zoom and live stream on YouTube people from Switzerland, Italy, France, Germany, the Netherlands, Belgium and Luxembourg participated. The introduction to the conference, organized by Kaspar Gubler, University of Bern:

“In April this year (2020), the virtual research environment nodegoat was put into operation as a pilot project of the Historical Institute at the University of Bern within the framework of the university’s digitisation strategy. This was preceded by two workshops on nodegoat at the Walter Benjamin Kolleg here at the university. The great interest in these workshops made clear the need for digital tools. Supported by the Historical Institute, the pilot project nodegoat GO, a nodegoat installation for the entire faculty of humanities, was launched. This means that all members of the faculty can now apply for a personal nodegoat research environment. Details can be found on the website of the Digital Humanities department here in Bern. Since this month, nodegoat GO has been officially supported by the faculty and the Digital Humanities department within the framework of the university’s digitisation strategy, including a nodegoat support office starting next year. I would like to take this opportunity to thank all those who have supported the nodegoat GO project. Some short remarks on the virtual research environment. What does ‘environment’ actually mean here? Environment means in principle: one software for many things. What once had to be programmed laboriously with individual digital tools is now available to us in a virtual research environment, ready to use: databases, front-ends for data input as well as analysis and visualisation functions, interfaces for data exchange and a website to present your results to the world. With this and with its sophisticated visualisation possibilities, nodegoat is also an important tool for visual communication.

The origins of nodegoat are in the Netherlands. Nodegoat was developed about 10 years ago at the University of Amsterdam by Pim van Bree (Master in Media Studies) and Geert Kessels (Master in Modern History) for specific research requirements and was transferred to a university spin-off called LAB1100. This spin-off now leads the development of the software, which is available in open source. In the course of time different functional modules were added to nodegoat, so that nodegoat today represents a sophisticated system for data analysis. These modules are, depending on the research needs, financed by different institutes worldwide and integrated into the open source version of the software, making it available to all users. So you finance, but you also profit when others do so.

The methodology

In simple terms, nodegoat works similarly to an Excel table. In contrast to Excel, however, nodegoat has extensive analysis functions with which the entered data can be immediately analysed, visualised and contextualised spatially and chronologically – all this without any programming knowledge. Data analysis and visualisation therefore take place within nodegoat. The data do not have to be exported to another visualisation software first. Nodegoat is not a data prison. All data can be exported at any time from nodegoat into another software, either as a CSV file or via the JSON interface.

Data modelling

In nodegoat, users define their own data models without restrictions in terms of structure or depth. Each object can be classified with geographical and temporal attributes and evaluated accordingly. Users are therefore free to implement a completely individual data model or to create one that is adapted to existing vocabularies (e.g. Dublin Core or the CIDOC Conceptual Reference Model). Nodegoat can therefore also generate and provide standard data and is at the same time a digital tool for networking data sets, thus improving the interoperability of research data in the humanities. In terms of data modelling, nodegoat follows an object-oriented approach. Following actor-network theory, this means that persons, events, artefacts and sources are regarded as equivalent objects. Only the linking of objects through relationships forms and hierarchises a network.
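This modelling idea can be sketched in a few lines of Python: every entity is an equivalent object with optional geographical and temporal attributes, and only typed relationships form the network. All names are illustrative and do not reflect nodegoat’s internal schema.

```python
# Illustrative sketch of the object-oriented modelling idea: persons, events,
# artefacts and sources are equivalent objects; relationships form the network.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Obj:
    name: str
    kind: str                    # "person", "event", "artefact" or "source"
    place: Optional[str] = None  # geographical attribute
    date: Optional[str] = None   # temporal attribute, e.g. an ISO date

@dataclass
class Relation:
    subject: Obj
    predicate: str               # e.g. "participated_in", "documented_in"
    target: Obj

# Invented example objects and relations:
person = Obj("Erasmus of Rotterdam", "person")
event = Obj("Matriculation in Basel", "event", place="Basel", date="1514")
source = Obj("Matriculation register of Basel", "source")

network = [
    Relation(person, "participated_in", event),
    Relation(event, "documented_in", source),
]
```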

But why should one work with data at all in the humanities? Why with a database?

The answer is simple: data can make visible factors, patterns and developments that would otherwise remain hidden in the sources. By changing the aggregate state of the data collected from the sources, we can, thanks to visualisations, identify patterns that can lead to new insights. At the same time, data visualisations help make research more visible. The data show us the path that can lead us onto the shoulders of the digital giants; once we reach the top, new horizons open up as data becomes information and knowledge.

The nodegoat projects that give us insights today come from very different fields. Nodegoat is an interdisciplinary tool which, as my personal experience shows, promotes the exchange of information across disciplinary boundaries. The projects we will see today are at different stages of development. The aim is not to deliver a glossy brochure, but to give as concrete an insight as possible into project work with nodegoat and to get to know its possibilities and basic functions, including data management, visualisations, networks and time series.

Temporally and thematically, we will go on a great journey today. It begins in the 20th century, opens up national and transnational perspectives with the academic forced migration to Switzerland and with war-torn societies in Southeastern Europe, leads us to festivals in contemporary theatre, then back to melodies and songs of the early modern period and into the European Middle Ages to church account books and academic knowledge spaces, and finally ends at the cradle of civilisation in Mesopotamia. Towards the end, more technical aspects will be presented, such as the data harmonisation of Linked Open Data, and the developers from LAB1100 will conclude with an overview of nodegoat projects in other countries and insights into software development.”

Kaspar Gubler (Universität Bern, Historisches Institut): Kaspar Gubler used the REPAC project as an example of how nodegoat works as a collaborative research platform for international projects whose members enter and analyze data in nodegoat via the web (and thus independently of location) and publish it online in a live environment. REPAC operates a pool of prosopographical data containing about 70’000 persons with about 400’000 records on biographical stages and networks. From this data pool, persons and biographical information are automatically assigned to the different projects in nodegoat based on certain criteria (Germanicum / Helveticum / Bernense).

Fig. Areas of origin of students at European universities 1250-1550.
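How such a criteria-based assignment could work in principle is sketched below; the record fields and rules are invented for illustration and are not REPAC’s actual criteria.

```python
# Sketch of rule-based assignment of pooled person records to sub-projects.
# Fields and rules are invented for illustration only.
pool = [
    {"name": "Person A", "university": "Bern", "region": "CH"},
    {"name": "Person B", "university": "Heidelberg", "region": "DE"},
]

criteria = {
    "Germanicum": lambda r: r["region"] == "DE",
    "Helveticum": lambda r: r["region"] == "CH",
    "Bernense":   lambda r: r["university"] == "Bern",
}

projects = {
    project: [r for r in pool if matches(r)]
    for project, matches in criteria.items()
}
print(projects["Bernense"])  # [{'name': 'Person A', ...}]
```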


Stefanie Mahrer (Universität Bern, Historisches Institut): Forced Academic Migration (FAM) is a research project (funded by SNSF PRIMA) at the Department of History of the University of Bern on the history of forced academic migration in Switzerland during the Nazi regime and the post-war period. FAM-online provides insight into research results and will, in the near future, enable visitors to access, filter and graphically display the research data.
The aim is to collect, as completely as possible, biographical data of the academics who fled to Switzerland, data of the academic refugee assistance organizations and their helpers, data of the universities concerning forced migrants, as well as relevant decrees and laws. The data is published continuously, taking legal regulations into account.
FAM-online links projects and refers to publications on similar topics, and thus also sees itself as a platform for scholarly research into the history of academic forced migration in the context of National Socialism. The project uses nodegoat to visualize escape routes on maps and to analyze networks of academics, escape helpers and the organizations involved.

Fig. Example visualization of escape routes of academics


Franziska Zaugg / Mevlane Sejdiji (Universität Bern, Historisches Institut): “A longue durée of violence? War-disabled societies in Southeastern Europe” is a postdoctoral project (SNSF Ambizione) based on Fernand Braudel’s concept of the “long duration”, which shifts the historian’s focus away from the history of events towards longer-term social, cultural and economic structures. The project examines war-disabled societies in Southeastern Europe from the Balkan wars of 1912/1913 to the Balkan conflicts of the late 20th century. It asks about possible connections between the violence experienced, the nature of memory and its relevance for future conflicts. The project uses nodegoat to identify and visualize clusters of violence on maps and within actor networks.

Fig. Example visualization of clusters of violence


Alexandra Portmann, Anna Barmettler, Dominik Kilchmann (Universität Bern, Institut für Theaterwissenschaften): International theater festivals shape the contemporary theater landscape, although the variety of festival formats is difficult to categorize. The spectrum ranges from festivals that focus on a specific theme or author (e.g. Shakespeare), to festivals of the independent scene (e.g. the Impulse Festival), to festivals such as the Manchester International Festival, which explicitly shows only premieres of international co-productions. These transnational co-productions of festivals with globally operating artists and independent production houses appear to shape the festival repertoire to an increasing degree. The research project asks how transnational working methods from the festival sector have a lasting effect on local theater systems. This SNSF Ambizione project uses nodegoat to visualize the processes of festival productions on maps and within networks.

Fig. Example visualization of a network analysis on festival productions


Elie Jolliet (Universität Bern, Institut für Musikwissenschaft): Studied music (organ, historical keyboard instruments, choral conducting and church music) in Bern (B.A.) and Lausanne (M.A.). Church musician in Köniz, with concert activity as a soloist, ensemble musician and choir director. Winner of the Migros Culture Percentage Instrumental Competition 2016. Member of the board of the International Association for Hymnology. Dissertation project: The Bernese Songbooks 1606 to 1853. Corpus analysis of the songs outside the Geneva Psalter. Elie Jolliet uses nodegoat for the demanding analysis of the songs, which he examines and visualizes separately for melodies and texts. More about Elie Jolliet as a professional musician on his website: https://www.eliejolliet.ch/

Fig. Collection of church songs in the backend of nodegoat


Corina Liebi (Universität Bern, Historisches Institut): Corina Liebi studies history with a focus on the Middle Ages and is an assistant at the Historical Institute in Bern. In her master’s thesis she examines the finances of the Hochstift Bamberg, evaluating a chamber office account from 1478. She visualizes the entries of this account book on maps, which gives her insights into the quantitative and spatial distribution of financial transactions. With a network analysis she also investigates connections between officials.

Fig. Spaces of the diocese and the Hochstift Bamberg, reconstructed within nodegoat


Sebastian Borkowski (Universität Bern, Institut für Archäologische Wissenschaften): Sebastian Borkowski, who holds a Master in Near Eastern Archaeology from the University of Bern and is currently a PhD student at the Unité d’Études Mésopotamiennes of the University of Geneva and an assistant in the Department of Ancient Oriental Philology within the RIMES project (The Rivers of Mesopotamia), presented the project that Dr. Susanne Rutishauser is leading at the Department of Near Eastern Archaeology at the University of Bern. For the area in the south of present-day Iraq, the project evaluates satellite image data combined with archaeological, written and geomorphological sources in order to reconstruct the position of the rivers and channels of the Mesopotamian alluvial plain during different epochs. The project uses a great many of nodegoat’s functions; among other things, Sebastian Borkowski evaluates about 10’000 written sources in nodegoat.

Fig. Network analysis for reconstructing the rivers in Mesopotamia


Kaspar Gubler (Universität Bern, Historisches Institut): SNSF SPARK project ‘Dynamic Data Ingestion’ for server-side data harmonisation. The principle of data ingestion in the so-called DDI module of nodegoat is that nodegoat pulls data together centrally on the server from any data sources accessible via an interface. The DDI module has two important strengths. Firstly, the software module is integrated into a fixed structure; it is therefore not a script that is stored and executed somewhere on a server and, as so often happens, at some point is no longer updated. Secondly, the DDI module has a graphical interface (the Linked Data module) in which the database fields of the data source can be assigned to the database fields of the nodegoat database, i.e. the mapping of the data. A great benefit of the DDI module is therefore the linking of data sets, for example Linked Open Data.

Fig. Test query and response in the DDI module. The returned data is used for mapping the database fields (from data source to nodegoat)
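The following sketch illustrates the mapping idea in Python under assumed field names and an assumed source URL; in nodegoat itself this mapping is configured graphically in the Linked Data module rather than in code.

```python
# Sketch of the DDI idea: pull records from an external JSON source and map
# its fields onto the fields of the local data model. Field names and the
# source URL are assumptions for illustration.
import requests

FIELD_MAPPING = {          # source field -> local nodegoat field
    "prefLabel": "name",
    "birthDate": "date_of_birth",
    "deathDate": "date_of_death",
}

def ingest(source_url: str) -> list[dict]:
    records = requests.get(source_url, timeout=30).json()
    return [
        {local: record.get(source) for source, local in FIELD_MAPPING.items()}
        for record in records
    ]
```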


Pim van Bree / Geert Kessels (The Hague, LAB1100): nodegoat on the globe. Overview of nodegoat projects running at other institutes and insights into new and planned features of nodegoat. Pim van Bree received his Master in New Media Studies at the University of Amsterdam; Geert Kessels received his research Master in History at the same university. Together they bring together skills in new media, history, the humanities and software development. They work with universities, research institutes and museums to conceptualise and develop dynamic applications, the most important of which is certainly nodegoat. Both have extensive project experience in the field of Digital Humanities and are engaged worldwide as consultants for digital projects and workshops. On nodegoat Day, they presented an overview of nodegoat projects in other countries, gave insights into the principles of nodegoat as well as into the latest software developments, and answered users’ questions.

Fig. Overview of running nodegoat projects and a sample visualisation from the project ‘Encyclopedia of Romantic Nationalism in Europe’ (https://ernie.uva.nl/)

SNSF SPARK workshop: data ingestion and harmonization

Workshop on the results of the SPARK project of the Swiss National Science Foundation (SNSF) “Dynamic Data Ingestion (DDI): Server-side data harmonization in historical research. A centralized approach to networking and providing interoperable research data to answer specific scientific questions” (http://p3.snf.ch/project-190161). The workshop will take place in four sessions via Zoom. In sessions 1 and 2, participants will be introduced to the functions of the virtual research environment Nodegoat (VRE), create a data model and import a data sample, which they will then use in sessions 3 and 4 for the exercises on data ingestion. At the end, each participant will have a working VRE that can be used for further research or in teaching. It is highly recommended to attend all four sessions. The workshop is primarily aimed at members of the Phil.-Hist. faculty of the University of Bern, but is generally open to other interested parties on planet Earth. The Zoom link will be sent to participants after registration. The workshop will be led by the Nodegoat developers Pim van Bree and Geert Kessels (LAB1100), together with Kaspar Gubler, Institute of History, University of Bern.

Members of the Phil.-Hist. faculty of the University of Bern can apply for a VRE free of charge at the following link: https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/index_ger.html Other participants can obtain a VRE at nodegoat.net or get the Nodegoat open-source version on GitHub: https://github.com/nodegoat/nodegoat

The workshops always take place on Wednesdays from 2 to 5 pm. The sessions are recorded and can therefore be re-watched if one cannot be attended.

Dates: 28.04.2021 / 05.05.2021 / 12.05.2021 / 26.05.2021

Registration for the workshop until 25.04.2021 to: kaspar.gubler@hist.unibe.ch

Session 1: Data Modelling (people and books)

In session 1 we get to know the central functions of Nodegoat (NG). Since NG is managed via the web browser, no additional software needs to be installed and working is location-independent. With NG, research projects can be created and research data managed, analyzed, visualized, published on the Internet and shared with other researchers, all without any special programming skills. We will create our first data model, which we will fill with data in the following sessions. As we will see, NG is not a rigid “boutique solution” that fits only a specific question or data model: students and researchers can use NG to create custom data models based on their specific questions.

Session 2: Importing Data (including a VIAF id for each person)

In Session 2, we will import our first data sample together with an identifier (VIAF) for each person. We will thus work with a prosopographically oriented data model, but one that can easily be extended for other research questions.
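By way of illustration, a VIAF identifier can be resolved against VIAF’s public JSON endpoint, for example to spot-check an imported record. The viaf.json URL pattern below is VIAF’s cluster-record endpoint; the surrounding code is only a minimal sketch.

```python
# Minimal sketch: resolving a VIAF id via VIAF's public JSON endpoint.
import requests

def viaf_record(viaf_id: str) -> dict:
    url = f"https://viaf.org/viaf/{viaf_id}/viaf.json"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

# Example usage: record = viaf_record("123456789")  # placeholder id
```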

Session 3: Ingesting Biographical Data (like other IDs, or birth/death)

In session 3, we will learn about the principles of “Dynamic Data Ingestion”. There are numerous data sources on the Internet, whether for research or for the interested public. What types of data sources are there? And what about data quality? We will first explore these questions before connecting our Nodegoat environment to a typical data source via an interface (API) and importing the first test data. This data can include further identifiers for the persons or biographical information such as dates of birth and death.
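As one concrete possibility, the sketch below queries Wikidata’s SPARQL endpoint for the birth and death dates of a person identified by a VIAF id. P214 (VIAF ID), P569 (date of birth) and P570 (date of death) are real Wikidata properties; the code itself is a minimal illustration, not the DDI module.

```python
# Sketch: fetching life dates from Wikidata for a person with a given VIAF id.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def life_dates(viaf_id: str) -> list[dict]:
    query = f"""
    SELECT ?person ?birth ?death WHERE {{
      ?person wdt:P214 "{viaf_id}" .            # P214 = VIAF ID
      OPTIONAL {{ ?person wdt:P569 ?birth . }}  # P569 = date of birth
      OPTIONAL {{ ?person wdt:P570 ?death . }}  # P570 = date of death
    }}
    """
    response = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "ddi-workshop-sketch/0.1 (example)"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]
```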

Session 4: Ingesting Related Data (like published books of people)

In Session 4, we will enrich the data on the persons and check what other data is available on the Internet, for example publications, which we can add and, where full texts are available, also analyze in nodegoat. Finally, we will look at the data harmonization capabilities of nodegoat.
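In the same spirit, related data such as publications could be fetched, sketched here via Wikidata’s P50 (“author”) property. The function and its parameter are illustrative assumptions, not part of nodegoat.

```python
# Sketch: listing works that name a given person (a Wikidata Q-id) as author.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def works_of(person_qid: str) -> list[str]:
    query = f"""
    SELECT ?workLabel WHERE {{
      ?work wdt:P50 wd:{person_qid} .  # P50 = author
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT 50
    """
    response = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "ddi-workshop-sketch/0.1 (example)"},
        timeout=60,
    )
    response.raise_for_status()
    return [b["workLabel"]["value"] for b in response.json()["results"]["bindings"]]
```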