nodegoat Tutorials

If you are in a hurry and looking for quick success with nodegoat, start right away with tutorial no. 10 (or no. 13). Tutorial no. 10 shows how to create a basic data model, import research data (ship positions) from a climate project and visualise it on a map. A short video (no sound, no comments) shows from scratch how to do it. For the tutorial you need a nodegoat account and the test data set I provide on this website. If you don’t have an account yet, ask your friends where to get one, or your institution (university), if it provides nodegoat as a digital tool (for example nodegoat GO, a multi-user platform). If you are studying or working at the University of Bern, you can get a nodegoat account at DH Uni Bern:

https://www.dh.unibe.ch/dienstleistungen/nodegoat_go/domain_account_beantragen/index_ger.html

Or get a student account directly at nodegoat: https://nodegoat.net

1. Getting started: create your first Project
2. Your first visualization
3. Entering dates
4. Entering vague dates
5. Importing Locations with CSV data
6. Create your first relation
7. Create your first Classification
8. Expand your Data Model
9. Change the background map, add multiple maps as background layers
10. Import your Excel data (CSV data)
11. Advanced Tutorial – Import of Open Data
12. Show observations (with date and places) for an object in list view
13. Import Linked Open Data from Wikidata (import module)
14. Import Linked Open Data from Wikidata (dynamic data ingestion)
15. Colouring dots on a map and creating a legend for them
16. Draw an area yourself (GeoJSON) and embed it as a map
17. Network analysis of correspondences with nodegoat
18. Import Texts from Transkribus to nodegoat (Ingestion module)
19. Reconcile texts in nodegoat with a vocabulary (Reconciliation module) 
20. Classify a nodegoat data model with the CIDOC CRM
21. Import a data model for correspondence networks using the nodegoat interface (API)
22. Export of a data model using the nodegoat interface (API)

 

1. Getting started: create your first Project in ‘Management’ / add Object Types in ‘Model’ / add Object descriptions in ‘Model’ / activate the Object Types in ‘Management’ / work with the Object Types in ‘Data’

If you have a new nodegoat account, just follow the instructions in the video after logging in to create your first project with one Object and two Object descriptions (the videos have no sound and no comments; just watch exactly what is done). Additionally, after logging in, there are info texts in nodegoat that show you what you need to do to create a new project.

 

2. Your first visualization: Locations must be stored in the Sub-Object of an Object. Create a Sub-Object ‘Location’ for your Object in ‘Model’

In this video we will add a field to store locations in our “First Object” that we created in video 1. Locations are not stored in the object description (like the name of the Object), but in the Sub-Object, as you will see in the video. You’ll also see how to change a Location:

 

3. Entering dates: Dates must be stored in the Sub-Object of an Object

In this video we will store a date for our “first Object”. Then we go to “Model” and select “Period” in the Sub-Object so that we can store two dates (start date and end date) in our “first Object”. This means that for each Object in nodegoat you can choose a point in time or a period of time. In addition, you can also save vague dates, as we will show later in video 4. Attention: the date format is day-month-year with hyphens, i.e. 1-8-2020 (not 1.8.2020).

 

4. Entering vague dates: Vague dates must be stored in the Sub-Object in Chronology

In this video, a simple example shows how to work with vague dates in nodegoat. There are many more ways to capture vague dates in nodegoat, see: https://nodegoat.net/guides (working with temporal data). In the example we do not know the exact start date, but we estimate that it was 5 days after the start date we entered earlier (1-8-2020). So we make a statement: ‘5 days after the start date’. Such vague dates are entered in the Sub-Object, not as ‘Point’ but as ‘Chronology’, as shown in the video.

 

If you want to learn more about vague dates and chronological statements in nodegoat, check out this presentation by Pim van Bree and Geert Kessels (LAB1100).

 

5. Importing Locations with CSV data: Geo coordinates must be stored in Sub-Objects

In this video you see how to create an Object Type ‘Locations’ with Object Descriptions that match the column names in the CSV file. After creating the Object Type, import the sample CSV data into your Object ‘Locations’. The sample data provides Locations (40k) with geonames.org IDs and geo coordinates: Location1.csv
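For orientation, a file with this structure might look like the following (illustrative column names and values, not rows taken from Location1.csv):

Name,GeoNames-ID,Latitude,Longitude
Bern,2661552,46.94809,7.44744
Basel,2661604,47.55839,7.57327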

Hint: about 130k locations from geonames.org come preinstalled in nodegoat (Type: City). These locations can be used and extended collaboratively by all users.

6. Create your first relation: Relations are created in the Model to be used in Data

In this video we establish a relationship between the ‘first Object’ and locations, because we want to use the locations as a georeference for the ‘first Object’. In ‘Model’ we select ‘Locations’ as the georeference in the Sub-Object of the ‘first Object’. In ‘Data’ we enter a location and see that it is not visualised immediately, because we first have to set the Visual Settings correctly (selecting Location as the reference for the visualisation). We then change the Location in the ‘first Object’ and see that we have to activate the ‘Quicksearch’ field in ‘Model’ (in the Object descriptions of Location), so that we can search for a new Location in the Quicksearch field.

7. Create your first Classification (Category): Classifications are created in the Model to be used in Data.

In this video we create a classification ‘Attribute’ in the Model. Then we go to the ‘first object’ in the Model to add an Object description ‘Attribute’ that is linked to the created classification.

 

8. Expand your Data Model: Person with ‘Event Birth’.

In this video we will change the ‘first Object’ to ‘Person’, add a Classification ‘Event (kind of)’ in ‘Model’ and link it to the Sub-Object description of Person. Then we add the Event ‘Birth’ to the Sub-Object. This gives us a simple model for biographies that we can extend with other events (such as death, activities, etc.). In the Sub-Object we can add dates and a location to each of these events.

 

9. Change the background map

In nodegoat you can integrate any background map you like, provided the map is available on a tile server via a link; you only need this link to the map. But where can you find such links? Google is your friend. Or this short tutorial. In nodegoat, go to the Visualisation Settings.

Then go to the Visual Settings tab. You will automatically get to the Geographical Settings.

The standard map is the Google map with its copyright notice. Remove the Google map link from the Map field and insert the new link for your map. Change the copyright text as desired.

Save your Map settings:

I have compiled some links here. Don’t be afraid of the length of the links, they are just like this, sometimes longer, sometimes shorter:

Google Map

//mt{s}.googleapis.com/vt?pb=!1m5!1m4!1i{z}!2i{x}!3i{y}!4i256!2m3!1e0!2sm!3i336008092!3m14!2sen-US!3sUS!5e18!12m1!1e47!12m3!1e37!2m1!1ssmartmaps!12m4!1e26!2m2!1sstyles!2zcy5lOmd8cC5jOiNmZmY1ZjVmNSxzLmU6bHxwLnY6b2ZmLHMuZTpsLml8cC52Om9mZixzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLmU6bC50LnN8cC5jOiNmZmY1ZjVmNSxzLnQ6MXxzLmU6Z3xwLnY6b2ZmLHMudDoyMXxzLmU6bC50LmZ8cC5jOiNmZmJkYmRiZCxzLnQ6MjB8cC52Om9mZixzLnQ6MnxwLnY6b2ZmLHMudDoyfHMuZTpnfHAuYzojZmZlZWVlZWUscy50OjJ8cy5lOmwudC5mfHAuYzojZmY3NTc1NzUscy50OjQwfHMuZTpnfHAuYzojZmZlNWU1ZTUscy50OjQwfHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTllLHMudDozfHAudjpvZmYscy50OjN8cy5lOmd8cC5jOiNmZmZmZmZmZixzLnQ6M3xzLmU6bC5pfHAudjpvZmYscy50OjUwfHMuZTpsLnQuZnxwLmM6I2ZmNzU3NTc1LHMudDo0OXxzLmU6Z3xwLmM6I2ZmZGFkYWRhLHMudDo0OXxzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLnQ6NTF8cy5lOmwudC5mfHAuYzojZmY5ZTllOWUscy50OjR8cC52Om9mZixzLnQ6NjV8cy5lOmd8cC5jOiNmZmU1ZTVlNSxzLnQ6NjZ8cy5lOmd8cC5jOiNmZmVlZWVlZSxzLnQ6NnxzLmU6Z3xwLmM6I2ZmYzljOWM5LHMudDo2fHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTll

Google Satellite Map (without places, topography)

https://mt{s}.google.com/vt/lyrs=s&x={x}&y={y}&z={z}

Background info on Google Maps: the standard link for Google map tiles looks like this:

https://mt.google.com/vt/lyrs=m&x={x}&y={y}&z={z}

If you want a different Google map, just replace the letter in the URL after lyrs=.

For example, lyrs=m displays the standard street map, so the whole link looks like this:

https://mt.google.com/vt/lyrs=m&x={x}&y={y}&z={z}

With these letters you can change the map:

h: shows only roads

m: is the standard roadmap

p: shows the terrain

r: is another roadmap

s: shows the satellite view

t: shows only the terrain

y: is a hybrid map with terrain and roads

 

Grey map without places

//mt{s}.googleapis.com/vt?pb=!1m5!1m4!1i{z}!2i{x}!3i{y}!4i256!2m3!1e0!2sm!3i336008092!3m14!2sen-US!3sUS!5e18!12m1!1e47!12m3!1e37!2m1!1ssmartmaps!12m4!1e26!2m2!1sstyles!2zcy5lOmd8cC5jOiNmZmY1ZjVmNSxzLmU6bHxwLnY6b2ZmLHMuZTpsLml8cC52Om9mZixzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLmU6bC50LnN8cC5jOiNmZmY1ZjVmNSxzLnQ6MXxzLmU6Z3xwLnY6b2ZmLHMudDoyMXxzLmU6bC50LmZ8cC5jOiNmZmJkYmRiZCxzLnQ6MjB8cC52Om9mZixzLnQ6MnxwLnY6b2ZmLHMudDoyfHMuZTpnfHAuYzojZmZlZWVlZWUscy50OjJ8cy5lOmwudC5mfHAuYzojZmY3NTc1NzUscy50OjQwfHMuZTpnfHAuYzojZmZlNWU1ZTUscy50OjQwfHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTllLHMudDozfHAudjpvZmYscy50OjN8cy5lOmd8cC5jOiNmZmZmZmZmZixzLnQ6M3xzLmU6bC5pfHAudjpvZmYscy50OjUwfHMuZTpsLnQuZnxwLmM6I2ZmNzU3NTc1LHMudDo0OXxzLmU6Z3xwLmM6I2ZmZGFkYWRhLHMudDo0OXxzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLnQ6NTF8cy5lOmwudC5mfHAuYzojZmY5ZTllOWUscy50OjR8cC52Om9mZixzLnQ6NjV8cy5lOmd8cC5jOiNmZmU1ZTVlNSxzLnQ6NjZ8cy5lOmd8cC5jOiNmZmVlZWVlZSxzLnQ6NnxzLmU6Z3xwLmM6I2ZmYzljOWM5LHMudDo2fHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTll

Dark map for cool visualisations to impress your friends….

http://mt{s}.googleapis.com/vt?pb=!1m5!1m4!1i{z}!2i{x}!3i{y}!4i256!2m3!1e0!2sm!3i323349059!3m14!2sen-US!3sUS!5e18!12m1!1e47!12m3!1e37!2m1!1ssmartmaps!12m4!1e26!2m2!1sstyles!2zcy50OjF8cC52Om9mZixzLnQ6MnxwLnY6b2ZmLHMudDozfHAudjpvZmYscy50OjR8cC52Om9mZixzLnQ6NnxzLmU6bHxwLnY6b2ZmLHMudDo1fHMuZTpsfHAudjpvZmYscy50OjgxfHAudjpvZmYscy50OjZ8cy5lOmd8cC5sOi0xMDAscy50OjgyfHAuczotMTAwfHAubDotODM!4e0

Fig. Visualisation of the immigration of new citizens (blue) to cities in Europe in the Middle Ages.

 

Digital Atlas of the Roman Empire

https://dh.gu.se/tiles/imperium/{z}/{x}/{y}.png

See for this cool project: https://dh.gu.se/dare/

Mercator map from 1607

https://maps.georeferencer.com/georeferences/66a34667-1847-5ea6-b6a8-c81736a3425d/2018-08-26T20:22:32.883884Z/map/{z}/{x}/{y}.png?key=mpcE7jAf5llCJV0hoUfk

The example of the Mercator map refers to Georeferencer, a service for online maps where you can find many links to historical maps. Many institutions have their own account at Georeferencer, such as the David Rumsey Map Collection, which offers a useful overview of its georeferenced maps (on a world map):

https://www.davidrumsey.com/view/georeferenced-maps

Create an account at Georeferencer to get the link for a map provided there (for example by the David Rumsey collection). Log in at Georeferencer and choose a map here:

https://www.davidrumsey.com/view/georeferenced-maps

Then go to ‘This map’ and to ‘Get links’. Copy the link into the Map field of nodegoat.

As another example, the British Library also has an account at Georeferencer. You can find their maps here, on the interactive map:

https://britishlibrary.georeferencer.com/api/v1/density

 

With the service from allmaps.org you can create customised background maps and integrate them into nodegoat:

See also: https://nodegoat.net/blog.s/68/use-any-iiif-published-map-as-a-background-in-your-geographic-visualisations

1. Upload the image of the background map to https://www.iiifhosting.com/. This will give you a link that you can then paste into https://editor.allmaps.org/ and use to georeference your map.

2. First you have to log in at https://www.iiifhosting.com/, then upload the image. Click on the image, which will generate a link like this https://free.iiifhosting.com/iiif/9dd9edd2bcf9bab289d68935bf0bfec866eda7492507f0bbbc2731b9d3cc95e7

3. Attention! Because of the JSON format, you have to add the following at the end of the link:

/manifest.json
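Applied to the example link from step 2, the complete link looks like this:

https://free.iiifhosting.com/iiif/9dd9edd2bcf9bab289d68935bf0bfec866eda7492507f0bbbc2731b9d3cc95e7/manifest.json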

4. Insert this JSON link into https://editor.allmaps.org/ and start georeferencing your map.

5. Change the tab and go to ‘Results’. There you will find the link that you can insert into nodegoat (in the Visual Settings). Here is an example of such a link: https://allmaps.xyz/maps/55936f460a611a52/{z}/{x}/{y}.png

Add multiple maps as background layers

Several background maps can be integrated as different layers. See the documentation for the settings and the video for a practical example. The video also shows self-drawn maps (areas) that can be faded in and out using the legend (Conditions). This is another function that can be created as a scenario and integrated into the visualisation as a ‘context’.

https://nodegoat.net/documentation.s/84/geographical#settings

 

10. Import your Excel data (CSV data) with geo coordinates (longitude and latitude) into nodegoat and visualize the data on a map

In this tutorial I provide a set of test data with geocoordinates that you can easily import and visualize, like the map below. It shows positions of ships calculated from logbooks (18th – 19th century). The data are especially interesting for historical climate research.

Prerequisite is that you already have a nodegoat account. If you don’t have an account yet, ask your friends where to get one, or your institution (university), if it provides nodegoat as a digital tool. If you are studying or working at the Faculty of Philosophy and History at the University of Bern, you can get a nodegoat account here: https://forms.gle/Gjm4682EJLsq5TCR7. Alternatively you can get a student account directly at nodegoat: https://nodegoat.net/

The following video (no sound, no comments) shows a step by step guide from scratch. The video starts with the login into your nodegoat account. The next steps are: Create a project, create an object type, download data sample from this website, import and visualize the data:

If you prefer written instructions, you can continue here. These instructions are identical to the video, but contain some background information.

Log in to your nodegoat account. Import the CSV data into an already existing project or create a new one: ‘climate project’. We will import a data sample from an interesting project about ships’ logbooks, which are important for weather observations. Here is the website of the project where the data is available:

https://www.historicalclimatology.com/cliwoc.html

Climatological Database for the World’s Oceans (CLIWOC)

“The database consists of 287,114 logbooks written aboard Dutch, English, French, and Spanish sailing ships. The vast majority of these logbooks date from between 1750 and 1850, yet four ship logbooks were incorporated that predate 1750. These were centuries of European imperial expansion, and so the logbooks record the activities of sailors – both civilian and military – in oceans that span the entire globe.”

I have downloaded the following data: ‘Download as an Open Office Spreadsheet’

I opened the spreadsheet in Excel and first added a column on the far left in order to give each record (row) a unique identifier, because they don’t have any. For this I entered the following into field A2 in Excel, which contains the first record:
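The formula itself is not reproduced in this text version. One simple way to generate sequential identifiers (an illustration only, not necessarily the formula used here) is:

=ROW()-1

Entered in A2, this returns the row number minus the header row, i.e. 1 for the first record, 2 for the second, and so on.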

Then double-click on the fill handle (the small square at the bottom right of the cell) and it fills the whole column with identifiers. The identifiers are very important: with them you can later update your data records in nodegoat (‘Update Existing Objects’). I always import the identifiers into nodegoat first and then update the records with additional information based on the identifiers. In the nodegoat import web interface you can choose whether you want to create new records or update existing ones.

In nodegoat you can import 50k data records (rows) at once. So if you want to import all of the more than 200k data rows, you have to split them up. I’ve already done this by providing a test data sample here with 20k records that you can use for your import. I have prepared this data and selected just a few columns to get started: identifier, ship name, year, longitude and latitude. You can download the test data sample here:

climate project test data (CSV)

Your Data Model in your project for this data sample should look like this:

Just add one Sub-Object with ‘year + coordinates’

We will import the data into nodegoat via the web interface (you can also import data via the JSON interface, which we will cover in tutorial 14).

Go to Model > Import > CSV Files, there you upload the downloaded CSV file (climate project test data).

Background: to import your data into nodegoat, the data must be available as a text file in UTF-8 format. In Excel, for example, you can save your data as CSV data: go to ‘Save as’ in your Excel sheet and choose CSV UTF-8 as the file format. CSV means comma-separated values. Open your CSV file with a text editor and you will see the many separators in the data.
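Opened in a text editor, such a file looks roughly like this (illustrative values, not actual records from the test data):

Identifier,ShipName,Year,Longitude,Latitude
1,Zeehaan,1775,-15.42,43.87
2,Hoop,1776,2.35,51.05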

Go to the Import Template. Map the fields of your CSV data to the fields in your data model. For the year (YR), select the Date Start field in your data model.

Now you can run your import template. You can first check a selection of the records to see if you have mapped the fields correctly. Click on Next to import the 20k data records.

Have a coffee now, you have already achieved a lot today.

After the import, go to ‘Data’ and click on the Geographical Visualisation.

This is what your result should look like. Zoom in and don’t forget to play with the time slider. You will also discover ships in the desert of Africa; these are errors in records that lack either the longitude or the latitude. So the visualization also helps you to detect such errors.

You can also make the dots on the map smaller. In the tab where the Geographical Visualization is located, go to ‘Visual Settings’ to the right of it and open it. Set your dot size to 3 and visualize the data again on the map; see also the video for this.

You can now update your data records based on the identifiers. Create a CSV file with the data you want to import next. When importing, select ‘Update Existing Objects’ and choose your identifier to map the CSV data to the corresponding records in your nodegoat database.

 

11. Advanced Tutorial – Import of Open Data into nodegoat: Benjamin Franklin’s Post Office Records

Benjamin West: Benjamin Franklin Drawing Electricity from the Sky

Benjamin West (1738-1820): Benjamin Franklin Drawing Electricity from the Sky, Philadelphia Museum of Art, image Source: Public domain, via Wikimedia Commons

In 1743 the American Philosophical Society (APS) – the oldest learned society in the US – was founded in Philadelphia by Benjamin Franklin, John Bartram, Francis Hopkinson and others for the purpose of “promoting useful knowledge”, as you can read on the APS website. Today, APS follows an open data strategy that encourages researchers to use and reuse the open datasets provided under a Creative Commons Attribution 4.0 License. In this tutorial, we will work with such a dataset, as it is a very useful resource for showing what you can do in nodegoat, focusing on importing and mapping the data. Citation of the dataset: Heider, Cynthia, Bayard Miller and Scott Ziegler. Post Office Book, 1748-1752. BF85f6-8. Distributed by Philadelphia: American Philosophical Society Library & Museum, 2017. https://diglib.amphilsoc.org/islandora/object/compound:11.

Franklin was not only a founder of the society; he also became Postmaster of Philadelphia in 1737, appointed by the British Crown Post. For a later period of his service as Postmaster, records of letters are still available; they are archived at the APS Library and made available as Open Data, as we read on the APS website:

“Benjamin Franklin’s Post Office Records: Post Office Book, Philadelphia incoming and outgoing mail, 1748-1752. Created while Benjamin Franklin served as Postmaster of Philadelphia, these datasets reveal a wealth of previously untapped information about colonial correspondence.” (Source: APS website).

In the following tutorial, we will import into nodegoat only the outgoing letters from the post office in Philadelphia. There are three videos which show from scratch how the data for the outgoing letters is imported and visualised in nodegoat. The videos have no sound and no comments; just observe what to do. Requirements: you need a nodegoat account and Google Sheets (or Excel, but the step-by-step tutorial uses Google Sheets).

The challenge in Video 1 is to reformat the dates for import into nodegoat and to convert the American date format into the European one. An additional location (Philadelphia Post Office) is added for visualisation. The tutorial starts with the data download from the APS website:
https://diglib.amphilsoc.org/data
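If you want to try the date conversion yourself before watching, a formula along these lines in Google Sheets turns an American date in cell A2 (e.g. 8/1/1750) into the day-month-year format used by nodegoat (illustrative, not necessarily the formula used in the video):

=TEXT(A2, "d-m-yyyy")

This only works if Google Sheets has recognised A2 as a date; otherwise the string has to be split and reassembled first.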

Video 1:

In video 2, a data model is first created in nodegoat, based on the column names of the data in Google Sheets. Then the data is downloaded as CSV from Google Sheets and uploaded to nodegoat, the data fields of the CSV file are mapped to the data fields in nodegoat, and the import process is started. During the import process nodegoat automatically suggests assignments for the locations in the CSV data, which can be accepted or rejected.

Video 2:

Video 3 shows how to work with the imported data. First a basic function is shown: if you click just on one object in nodegoat (a window with the object opens) only this object is visualised on the map (by clicking on the globe symbol). If you call up all objects, for example by selecting ‘all’ to the right of the filter symbol (screenshot below) or by scrolling between the pages (1,2,3 etc), all objects are visualised:

If you do this with our data, as in the video, by clicking on the globe symbol, the locations that we assigned during the import process will be displayed on the map. Not all of these locations are correctly placed, because during the import process we did not check whether each location was really the right one, but simply chose a location in order to do the data cleansing afterwards in nodegoat. It would have been better to clean the data in Google Sheets or Excel BEFORE the data import, which is highly recommended! But no worries: you can also clean up the data within nodegoat, as an example in the video shows. The video also shows how to add a database field ‘Comment’ to the data model and how to use it, for example to indicate that you are unsure about the location of a place. And the video shows how to create a filter query to find locations that do not have geo coordinates.

Video 3:

12. Show observations (with dates and places) for an object in list view

This data model, which is shown here in its basic features, works for example for:
Person (Object) – biographical event (observation), i.e. for biographical data. But of course it also works for other models with which we want to describe an object precisely through observations. It works for any kind of Object: books, texts, artefacts, institutions, organisations etc.

With the observations (in the example the ‘event’), we contextualise the Object in time and space by recording this information in the Sub-Object. In the following example, we use ‘event’ or ‘Ereignis’ instead of the term ‘observation’.

Example:

In this example you see the biographical events of Ludwig von Adlikon in a list (with green, blue and red events). At the top we see further information about him: first name, surname, origin etc.

What does the data model in nodegoat look like? There are two Objects: Person and Event. In the example, we see the Object ‘Person’. The events related to this person (and thus listed) are stored in the Object ‘Event’.

1) Create the objects ‘Person’ and ‘Event’ in your data model in nodegoat in the section ‘Model’.

2) Create a description ‘Person’ in the Sub-Object of the Object ‘Event’ and link this field to the object ‘Person’, like here:

Create another Sub-Object like you did for ‘Person’ to store dates and places for an Event; call it, for example, ‘Dates / Places’. Once you have done this, you can specify in the Sub-Object ‘Person’ that the date and place of this new Sub-Object should automatically be carried over to the Sub-Object ‘Person’ (important for visualisations and for storing the data properly in general). The example here shows how this works. ‘Datum_Location’ in the example stands for ‘Dates / Places’ and ‘Ereignis’ for ‘Event’. These are the settings for the Date; look at ‘Source’:

These are the settings for the Location, look at the ‘Reference’ and at the ‘Source’:

 

3) In the ‘Management’ section, you have to enable in the ‘Person’ Object that the events will be displayed in list form in the ‘Data’ section: Management > Your Project > Organise > Person > Cross-Referenced > Event > Sub-Object Descriptions > Select the checkbox

Why is this model useful? Because it is very simple, but takes into account a fundamental principle of data collection in the humanities: observations. The events from above are nothing but observations on biographical points in the life of a person. Points (or periods) that you can record with a place and a date in the same Sub-Object where you store the person: everything is included in such an observation: person, event, place (space) and time. The information is thus all centrally stored in one observation. You can now use this information very well for visualisations (maps, networks, time series). Of course, we can extend this simple model with further observations and information, or even expand it to capture correspondences.

13. Import Linked Open Data from Wikidata (Import module)

In this tutorial we will import and visualise data from Wikidata on archaeological sites in Switzerland. I got the idea for this tutorial because of an interesting student project at the University of Bern on the visualisation of archaeological data available at SPARQL endpoints, i.e. via interface. Information about this project can be found here:

https://www.iaw.unibe.ch/forschung/bern_coda_lab/projects/student_projects/sparql_hs_2019/ssdi/index_ger.html

However, this is only one way to import data into nodegoat. In tutorial no. 14 we will show how we can import the same data directly (i.e. without uploading) from Wikidata into nodegoat using the dynamic data import module.

If you want to skip the following data query from Wikidata, you can already download the CSV data here and go straight to creating the data model in nodegoat.

With the data query in Wikidata below, we first search for all archaeological sites in Switzerland, using Wikidata’s query service: https://query.wikidata.org

Go to the query with this link and run it.
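The linked query is not reproduced on this page. A query of roughly this shape (a sketch using the Wikidata items for ‘archaeological site’ and ‘Switzerland’) returns the three fields site, siteLabel and coord that we use below:

SELECT ?site ?siteLabel ?coord WHERE {
  ?site wdt:P31 wd:Q839954 ;   # instance of: archaeological site
        wdt:P17 wd:Q39 ;       # country: Switzerland
        wdt:P625 ?coord .      # coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}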

We can download the result (over 1000 sites) as CSV data, i.e. as comma-separated values. We then upload this CSV data from our computer via web browser into nodegoat. But before we can upload the data for the import, we must of course first create a project in nodegoat with the same data fields that are present in the CSV data. In the ‘Management’ section, we create a project which we call ‘Archaeological sites’.

Now we switch to the ‘Model’ section to create the three data fields of our project and add one Object Type that will contain all the data. We match the data fields of the Object Type to the CSV data, which contains the following categories: site, coord, siteLabel. We create these categories (= data fields) in our Object Type, which we call ‘site’. We use site and siteLabel as descriptions of this Object, while coord, i.e. the geo coordinates, is imported into the Sub-Object of this Object in order to be able to visualise the data.

We save our Object Type and go back to ‘Management’ and to our project (Archaeological sites) to activate the created Object Type ‘site’ in our project under ‘Model’:

Now we upload the CSV data via web browser into nodegoat.

Then we create an import template in nodegoat for the data mapping of CSV data to our three data fields in nodegoat.

We start the import by clicking on ‘run’ and the result will look like this:

We visualise the data with the geographic visualisation function and get this interactive map.

Fig. Archaeological sites in Switzerland (data from Wikidata)

With the conditions in nodegoat we can colour or weight the points on the map differently. Of course, we can now add further fields and information about an archaeological site in our data model and display them on the map.

Finally, we can export the data again from nodegoat and add our additions to Wikidata. A similar service to Wikidata is offered by the LOBID project, which provides a lot of very useful data. From Wikidata, as will be shown in the next tutorial, or from LOBID, data can also be imported dynamically into nodegoat.

14. Import Linked Open Data from Wikidata (dynamic data ingestion)

First, an important note on a very useful data conversion (or cleansing) function that we can apply to each database field before the import. A concrete example: a data interface outputs the Wikidata identifier in the following format: http://www.wikidata.org/entity/Q3324044. However, we want to import only the identifier ‘Q3324044’ into nodegoat and leave out the rest. To do this, we go to Model > Linked Data > Conversions, create a new conversion there and enter the values into the fields as shown in the illustration. We see that we simply cut off the front part of the URI, so that only the identifier is imported. To see the result of the script, we click on ‘test’.
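If the illustration is not at hand, the script could look like this (a minimal sketch following the INPUT/OUTPUT convention of nodegoat conversions, as also used in tutorial no. 18; the output field name ‘id’ is illustrative):

INPUT = http://www.wikidata.org/entity/Q3324044

Script

const uri = INPUT;
// keep only the part after the last slash, e.g. 'Q3324044'
OUTPUT = {id: uri.substring(uri.lastIndexOf('/') + 1)};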

Where do we have to add this conversion to be able to apply it during the import? We go to Model > Linked Data and to the Linked Data Resource that we have defined there. At the bottom, where the database fields to be imported are assigned, we can add the conversion (see illustration; we have previously saved the conversion as ‘Convert URI to ID’). This way, only the identifier is imported during the dynamic data import. Of course, date formats or any other values that can be converted with JavaScript can be handled in the same way. Since JavaScript offers many possibilities here, the conversions have great potential.

Here is an example of converting a date format:
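A sketch of such a script (not necessarily the exact one shown in the illustration) converts an American date such as ‘8/1/2020’ into the day-month-year format from tutorial no. 3:

const [month, day, year] = INPUT.split('/');
// '8/1/2020' (month/day/year) becomes '1-8-2020' (day-month-year)
OUTPUT = {date: day + '-' + month + '-' + year};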

Pro tip: if the data from the sources to be imported is inconsistent (which is actually always a little bit the case), you can, for example, perform a ‘double import’: convert the dates with the conversion and import them into the date field in nodegoat, and at the same time import the dates as text (string) into an object description. After the import, the incorrect dates can easily be filtered out via the object description, so we only have to clean those up rather than all the correct dates as well.

Further information on the conversions can be found here: https://nodegoat.net/documentation.s/142/conversions

 

Now we start with tutorial no. 14. It builds on tutorial no. 13. We import the same data into nodegoat as in tutorial 13, but this time not by uploading a CSV file via web browser, but directly via nodegoat’s interface. We use the same data model as in tutorial no. 13.

The dynamic data ingestion module was developed as part of my SNSF SPARK project. Information and examples for the application of the module can be found here.

First we have to activate the data ingestion module for our project. We go to ‘Management’ and to the project ‘Archaeological sites’ > Project > Model > System > Ingestion

We need to set up two things in our nodegoat environment for the ingestion process:

1) Linked Data Resource (data retrieval via interface)
2) Data mapping (for importing the data)

We go to Model > Linked Data > Add Linked Data Resource

Here we define the interface, in this example a SPARQL endpoint from Wikidata. Fill in the fields as follows:

Now we add the same query that we already used in tutorial no. 13. You can find the query under this link. Click on ‘test’.

In the response field we see the result of the query. Click on ‘use’ to assign the database fields from Wikidata to the database fields in nodegoat:

Save your Linked Data Resource and go to Data > Processes > Ingestion > Add Ingestion for the data mapping. Choose as Source your Linked Data Resource (Dynamic Data Ingestion). Target of the data import (ingestion) is your Object Type ‘site’. Map the database fields:

Save the ingestion process and run it. The result will look like this:

Click on Geographical Visualisation.

You will get this map as the result: all archaeological sites in Switzerland available in Wikidata.

Fig. Archaeological sites in Switzerland (data from Wikidata)

The data ingestion process presented in this tutorial showed how to assemble a data collection. We can now enrich this data collection with another ingestion process using the update function. With this function we can enrich the whole or part of our dataset with further data, for example from Wikidata, from LOBID or from any other data source with a data output in JSON format. You could also use this dataset for network analysis in nodegoat if you adapt your data model accordingly.

You will find more information on the dynamic data ingestion module in the official documentation of nodegoat and in this blog post. You can also find more information and examples here: SPARK Workshop on dynamic data ingestion.

 

15. Colouring dots on a map and creating a legend for them

This tutorial builds on the previous tutorials (13 and 14). It shows how we can colour dots (= geo points) on a map and create a colour legend. The procedure described can be applied to maps in nodegoat in general.
To colour dots and to create a legend, we need the so-called ‘conditions’ in nodegoat, which you can find on the right in the toolbar for the visualisations. With the conditions we can determine exactly which dots of the data model we want to colour on the map (or in a network) and according to which criteria. In the following example, we colour in red the dots that point to a ‘römischer Gutshof’ (Roman manor house). Check out the nodegoat user guide for further information about the conditions: https://nodegoat.net/documentation.s/88/conditions

We go to our data in the list view and select the conditions icon (the second icon from the right in the toolbar). In the tab we go to ‘Nodes’ (= dots). We select ‘Object’ and as title for the legend ‘Römischer Gutshof’ as well as any colour with which the dots should appear on the map. The title will appear in the legend that will be automatically created on the map.

What do we have to do now? We have to tell nodegoat which dots should be coloured red. To do this, we go to ‘Filter’ and enter the terms ‘römischer Gutshof’ in the field ‘site label’.

We save our entries and go to the globe icon in the toolbar to view the result on the map. The legend is interactive. We can click on the coloured red bar to show or hide the corresponding dots on the map. This is helpful for visual data analysis, especially when we want to display and analyse many different coloured dots on a map.

We can now expand the legend in the conditions according to any criteria of our data set. For example, in addition to the Roman manor houses, we can also display graves and other finds that are present in our data set in the legend. Note: the conditions we have created are stored temporarily. If we want to save them permanently, we open the conditions and click on the blue ‘save’ button at the top left.

We can additionally integrate another background map. For example, the ‘Digital Atlas of the Roman Empire’. The procedure for this is already described in Tutorial No. 9. Here is the short version.

In the toolbar we go to: Visualisation Settings > Visual Settings > Geographical > Map

In the field ‘Map’ we copy the following link:

https://dh.gu.se/tiles/imperium/{z}/{x}/{y}.png

Then we go back to our dataset and click on the globe symbol to display the new background map.

Fig. Archaeological sites in Switzerland (data from Wikidata)

16. Draw an area yourself (GeoJSON) and embed it as a map

In nodegoat we can integrate any areas as independent maps or as background maps for visualisations: for example, areas for dioceses, principalities, counties, or historical national and sovereign borders, but also self-defined spaces, such as archaeological excavations, or to specifically highlight a particular area of investigation. Prerequisite: the area must be drawn as a polygon in GeoJSON format. The GeoJSON code is then copied into a predefined field in nodegoat. So: a simple copy / paste exercise. Once the area is saved in nodegoat, we can colour it with the conditions (edges, areas) and label it (this will be explained in a later tutorial).

Background: the JSON data format is generally used in nodegoat for the organisation (structuring) of data. GeoJSON is the geographical variant of this data format.

After we have drawn a polygon online in GeoJSON, the JSON code for this area appears in the right-hand column. We copy this code completely and then paste it into the sub-object of an object in nodegoat.
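Such a polygon looks roughly like this (the coordinates here are made up for illustration; GeoJSON lists them as longitude/latitude pairs, and the first and last pair must be identical to close the ring):

{
  "type": "Polygon",
  "coordinates": [
    [[8.8, 52.2], [9.2, 52.2], [9.2, 52.4], [8.8, 52.4], [8.8, 52.2]]
  ]
}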

So we first create an object in nodegoat, which we call ‘Geo Space’, for example. For this object we create a sub-object, which we call ‘Area’. If you don’t want to draw an area yourself, you can download the GeoJSON code for the historical boundaries of the diocese of Minden around 1500 here (PDF). This code is provided by the research project Germania Sacra.

In the ‘Management’ section we activate the newly created object ‘Geo Space’ in our project in order to be able to use it in the ‘Data’ section.

Then we create a new object ‘Geo Space’ in the ‘Data’ area and insert the JSON code into the field. To do this, we select ‘Geometry’ from the menu (at Location):

We save the input and visualise the area by clicking on the globe symbol in the toolbar (far left).

With the zoom bar (on the left of the display) we can zoom out and see the result, the diocese of Minden:

To include this map as a background map for a visualisation in nodegoat, we need to create a scenario in our Geo Space object.
A scenario consists of a filter query and visualisation settings. Once we have created such a scenario, we can select it under Visualisation Settings > Context, and the map of the scenario then appears as a background map.

Create a filter ‘Scenario Geo Space’. Select the Object ‘Geo Space’ and save the filter:

Create a scenario, choose the icon on the far right:

Choose a name for the Scenario and choose the Filter you have created:

Go to Visualisation Settings in the toolbar and choose your scenario as a context (background map).

17. Network analysis of correspondences in nodegoat

This tutorial builds on the data model from tutorial no. 12. This data model shows only one of many possibilities for a network analysis with nodegoat. Another model (for correspondences) is explained step by step in the guides on nodegoat.net:

https://nodegoat.net/guide.s/4/create-your-first-object-type

A special feature of this model is the use of the function of automatic data transfer from an object description to the corresponding data field in the sub-object: https://nodegoat.net/guide.s/9/add-a-related-object-type. In practice, this means that we only have to enter a person’s name once (in the object description) and it is automatically saved in the sub-object. In another tutorial we will take a closer look at this function of automatic data transfer (translation in progress…)

Let us now turn to the data model of this tutorial. In tutorial no. 12 we created a data model consisting of persons, events and locations. With these three basic objects you can tackle very different questions. You can also extend this model, for example with an additional object that you simply call ‘object’. Such an object can then be, for example, any object, such as a book, a manuscript, or it can also stand for buildings, archaeological finds, etc. With the data model Person – Event (Ereignis) – Object (Objekt) – Location you have a very comprehensive and versatile model.

For this tutorial, however, we will use the model Person – Event (Ereignis) – Location, where we capture correspondences as ‘Event’. ‘Correspondence’ is therefore an event type. The data model works in such a way that for the ‘Event’ we capture the persons associated with it, in this case the correspondents, in the sub-object. In the data model we create two sub-objects in the object ‘Event’, which we call ‘Sender’ (Absender) and ‘Recipient’ (Empfänger) (or also: ‘Addressee’). We also create a third sub-object for persons mentioned in the correspondence, for example the sub-object ‘Person mentioned’ (Person erwähnt). If we also want to record the subjects of the correspondences in keyword form, we can create another sub-object, which we call, for example, ‘Subject’ (Gegenstand).

The data model looks like this:

Object: Ereignis (event)

Sub-Objects of this Object: Absender (Sender) / Empfänger (Recipient) / Person erwähnt (Person mentioned) / Gegenstand (Subject)

We activate ‘Single’ and ‘Required’ for sender and recipient in the sub-object, as these may only occur once per event (‘Single’) and we always want to have a specification for both (‘Required’). If the sender and/or recipient are unknown, we create a corresponding ‘Person unknown’.

For the other sub-objects (‘Person mentioned’ and ‘Subject’), which we create in the same way as sender and recipient, we do not activate the fields ‘Single’ and ‘Required’, as these sub-objects can occur several times, but do not have to.

In the tab ‘Descriptions’ we now have to specify in the data model where the object with the persons is located in order to be able to record (select) them during data entry.

To do this, we create a ‘Description’ ‘Sender’ and connect it to the Reference: Object Type: Person, as shown here. We proceed in the same way with the sub-objects ‘Recipient’ and ‘Person mentioned’.

As in every sub-object, we can now create data fields for temporal (Date) and spatial (Location) information in our model. We switch to the tab ‘Date’ and select ‘Period’. This gives us the possibility to enter 2 different dates in the date entry and thus map a time span in which the correspondence was written / sent. If we only find 1 date in a letter, then we only enter this date. It is automatically transferred to the other date field when saving. If we do not find any dates, we can still put a letter in chronological order with an approximate date. See Tutorial No. 4.

Now we select in the tab ‘Location’ the object type that contains the geo-references (with coordinates). In our example, this is the type ‘Location’, in which we store the names and coordinates of towns / cities:

Before we can select this object type here, we must of course first create it. To create the type ‘Location’ used here, you only have to give the object a name (‘Location’) and a sub-object ‘Geo coordinates’. The names can also be different and, as always in nodegoat, can be defined arbitrarily.

You now have a simple data model for the collection of correspondences. A model that you can extend according to your requirements and questions:

Person – Event (correspondence) – Location.

Make sure that you have activated these 3 object types in ‘Management’ for your project (check boxes selected). Now go to the ‘Data’ section and enter a new ‘Event’. The data entry form should look like this:

‘Sender’ and ‘Recipient’ appear automatically because we have defined them as ‘Required’ in the data model. You have to open ‘Person mentioned’ and ‘Subject’ by clicking on the plus sign in order to enter data there. A prerequisite for being able to enter persons is, of course, that we have created at least one field with the name or designation in the data model for ‘Person’. The same applies to ‘Location’, where we have also created a sub-object ‘Geo coordinates’ in the data model, as mentioned above. If these details are available in the data model for Person, Event and Location, we can start entering data in the form for ‘Event’ shown above. We can add new persons and locations right in this form (click on ‘new’ after clicking in the magnifying-glass field).

As a test, enter a ‘Person 1’ as sender and a ‘Person 2’ as recipient as well as 2 different locations with their geo-coordinates, which you can find at GeoNames.org (or you can also mark the location directly on the interactive map in nodegoat, which will automatically insert the geo-coordinates). See also Tutorial No. 5.

For this example, we have recorded two people, Person 1 and Person 2, as well as two locations, Basel and Prague. We have called the event ‘Correspondence 1’.

When entering the geo-coordinates, make sure that you have selected ‘Point’ for Location. By clicking on ‘Map’ you can select the coordinates interactively on the map:

 

This is what the preliminary result of the data collection should look like (still without dates):

This correspondence is automatically visualised on the map by clicking on the globe symbol on the left in the toolbar.

With the legend (red / blue) you can show and hide the data for explorative data analysis.

Finally, you can add dates to ‘Correspondence 1’ in the sub-object (Date). As soon as this has been done, the timeline takes over this information and you can use it to dynamically display the data on the map.

Click on the network analysis icon in the toolbar (to the right of the globe icon) to see how the ‘Correspondence 1’ event connects the two people. This is a simple but widely applicable basic model for network analysis, as we can extend the events as we like and, as we have seen above, enrich them with further persons (‘Person mentioned’) or with contents (‘Subject’):

 

18. Import texts from Transkribus into nodegoat (ingestion)

Requirements:

You already have an account for nodegoat and one for Transkribus. In the tutorial we work with the web version of Transkribus: https://app.transkribus.eu

You can find an introduction to Transkribus here:

 

1. In Transkribus you have uploaded a document as PDF or in an image format, and you have run the layout analysis as well as the text recognition for the document. It is best to start with a printed document that Transkribus recognises well. Handwriting is a little more difficult; however, the more regular a script is (even handwriting), the better it is recognised by the computer.

2. In nodegoat we have to configure two interfaces (called Linked Data Resource). These interfaces are used to import the texts and images from Transkribus into nodegoat. We only have to configure the interfaces once. Afterwards, they can be reused for the import of other documents.

  1. Interface: Transkribus Pages & Pictures
  2. Interface: Transkribus Texts

The import works according to the following principles:

Using interface 1 and ingestion 1, we first import all pages of a document and the images for those pages (in Transkribus, a document consists of texts and the corresponding images). Then we import the texts for all pages with interface 2 and ingestion 2.

3. The interfaces only establish the connection to Transkribus. We import the contents (texts and images) in a further step with the ingestion module of nodegoat. In the ingestion module we configure in which fields in our nodegoat data model the texts and images should be imported (keywords: field assignments, data mapping).

For interface 1 we have to set up ingestion 1.

For interface 2 we have to set up ingestion 2.

In total we need to set up 2 interfaces and 2 ingestion processes for these interfaces in nodegoat.


First we create the data model for the pages, images and texts, which we import from Transkribus into nodegoat.

1. Create the data model

The data model consists of 2 objects:

Transkribus Document

Transkribus Pages, Pictures & Texts

We only need Transkribus Document to store the document ID. This is the ID of the document in Transkribus for which we want to import the texts and pictures. More on this below.

The data model for Transkribus Document:

The data model for Transkribus Pages, Pictures & Texts

Now we set up interfaces 1 and 2.

1. Interface: Transkribus Pages & Pictures

URL

https://transkribus.eu/TrpServer/rest/collections/

URL Options

?JSESSIONID=your session ID (see the following explanations):

Important: in order to establish a connection with Transkribus (via the interfaces), we need a current session ID from Transkribus. The session ID has a limited validity period (this is defined by Transkribus for the interface). If it has expired, the import into nodegoat no longer works, and you have to generate a new session ID with a query via the terminal (OSX, Linux). On Windows, this can be done with the command prompt.

You will find the session ID when you log in to Transkribus via the Terminal or command prompt:

Login for OSX Terminal (copy the following lines into the terminal):

curl https://transkribus.eu/TrpServer/rest/auth/login -d "user=max.mustermann@university-xy.ch&pw=my_password"

In the result of the query (in the terminal), find the section with the session ID, then copy only the long string of letters and digits, paste it into the nodegoat interfaces (1 and 2) (?JSESSIONID=….) and save it.

Example of a session ID as it appears in the terminal window after the query:

</loginTime><sessionId>5C83373403B08808B496E348F8AA1E40</sessionId><userAgent>

For Windows, the input for the login is identical: curl https://transkribus.eu/TrpServer/rest/auth/login -d "user=max.mustermann@university-xy.ch&pw=my_password"

To execute the input (login), switch to the black command prompt window in Windows, paste the input there via copy / paste (as with OSX) and execute it with Enter:

Tip: you do not have to retype the input in the OSX and Windows terminal (prompt) each time. Pressing the up arrow key brings up your last entry again; your entries are saved in the terminal history.

As soon as you have saved the current session ID, you can continue here (still in the same interface):

123456

This is your Collection ID at Transkribus (not the Document ID). You can find the Collection ID under Collections. In this example, two Collection IDs are visible:

Query

123456/[query=document][variable=id]1616757[/variable][/query]/fulldoc

The Collection ID in this example is ‘123456’. Insert your Collection ID into the query.

You also insert the document ID into the query. You can find the Document ID in Transkribus by clicking on the small ‘i’ or on the three dots and then on ‘Edit’. In the following example, the ID is 1616757.

Important: As long as you always work with the same collection in Transkribus and import texts from there into nodegoat, you do not have to change the collection ID in the Linked Data Resource. Only if you want to import texts from another collection do you have to change the collection ID in the query beforehand.

Copy the following values into the corresponding fields of your Linked Data Resource.

URI

{"pageList":{"pages":{"[]":{"pageId":""}}}}

Label

{"pageList":{"pages":{"[]":{"pageNr":""}}}}

Image

{"pageList":{"pages":{"[]":{"key":""}}}}

Transkribus page key to URL: this field can only be selected when the conversion for the images has been set up (so that they can be imported as URL). The following picture shows how this works. This conversion also only has to be configured once and can be reused afterwards.

INPUT=UDAYJZMDFGILBTRSGXEGETHV

Script

const key = INPUT;
// build the IIIF image URL for the page image (width 1500px)
const url = 'https://files.transkribus.eu/iiif/2/' + key + '/full/1500,/0/default.jpg';
OUTPUT = {url: url};

OUTPUT=

{
"url": ""
}

Document ID

{"pageList":{"pages":{"[]":{"docId":""}}}}

In the next step, we configure the ingestion (field assignments) for the import of pages and pictures into the object Transkribus Pages, Pictures & Texts. To do this, we first go to the Management section of nodegoat and activate the ingestion and reconciliation for our Transkribus project (we can use the reconciliation after the import to compare a vocabulary with the texts).

 

2. Interface Transkribus Texts

The upper part with the URL is identical to the 1st interface. You have to adjust the following fields:

Query

123456/[query=document][variable=id]1612084[/variable][/query]/[query=page][variable=nr]2[/variable][/query]/text

URI

{"PcGts":{"Page":{"@attributes":{"imageFilename":""}}}}

Label

{"PcGts":{"Page":{"@attributes":{"imageFilename":""}}}}

Values Text

{"PcGts":{"Page":{"TextRegion":{"[*]TextLine":{"[]":{"TextEquiv":{"Unicode":""}}}, "<":"join:\n"}}}}

 

1. Ingestion Pages & Pictures

Important: For the ingestion to work, we have to copy the document ID of the document we want to import from Transkribus into the field of the Transkribus Document object in nodegoat. This tells the ingestion for which document in Transkribus we want to import the pages, images and texts.

(We could also import the document ID directly into nodegoat with an ingestion. But then we would have to set up an interface 3 and an ingestion 3. If we copy the ID by hand, we are also more aware of what we are doing.)

In the Data section we now configure the 1st ingestion (Transkribus Pages & Pictures), which will import pages and pictures from Transkribus into nodegoat. So this 1st ingestion works together with the 1st interface that we have already configured before.

Save the ingestion and run the import. If necessary, empty the cache first (status: reset). After the import you will see the pages and pictures of your document in the object Transkribus Pages & Pictures. Now only the texts are missing.

2. Ingestion of Texts

We import these with the 2nd interface and the 2nd ingestion. The principle is that we update the already imported pages with the texts. The pages (together with the document ID) thus provide the framework that we now fill with the texts (through the import). We configure the 2nd ingestion as follows.

It is therefore important to select the Update Existing Objects mode, not Add New Objects as before for the pages and images.

Done. If everything has worked, you will now see your texts in the object Transkribus Pages, Pictures & Texts. (If nothing appears, check whether you actually have texts in Transkribus.)

See also the tutorial on the ingestion of Transkribus on nodegoat.net:

https://nodegoat.net/guide.s/136/ingest-transcription-data-from-transkribus

Apart from the ingestion, you can also import texts via CSV into nodegoat or insert them via copy / paste.

19. Reconcile texts in nodegoat with a vocabulary (Reconciliation module)

This tutorial complements the previous tutorial (no. 18) by explaining the basic functioning of the Reconciliation module.

You can use the reconciliation module to automatically compare the texts imported from Transkribus (see tutorial no. 18) with a vocabulary that you create as an object.

The comparison with a vocabulary that you can define yourself helps you to gain an overview of the contents, especially with large amounts of text (a corpus). Since you define the vocabulary yourself, i.e. with your terms, places or persons, etc., this is pattern matching, not pattern recognition. In the first case, you already have an idea of which terms of interest might appear in the texts – as in historical research, for example, where one of the core competences is to be able to evaluate source texts according to form and content. In the second case, on the other hand, an algorithm would search on its own for specific terms (depending on the type and search mode of the algorithm).

You can also define and compare several vocabularies in nodegoat. For example, one with terms, one with places or institutions, or one with people, works, objects, etc. Basically, such a matching corresponds to a named entity matching.

To the procedure

In the data model, add an object ‘Vocabulary’ as well as the object ‘City’, which is already present in nodegoat. We need City in the sub-object in order to georeference places that we match with the vocabulary. This allows us to create a map of the places found immediately after the reconciliation process. We can proceed in the same way with terms without a georeference (coordinates), from which we can create a network.

The Sub-Object in the data model:

In the object Transkribus Pages, Pictures & Texts we also add an object description where the found terms (patterns) are to be stored. We will select this description in the Reconciliation module below.

In the management of our nodegoat environment we have to add the Reconciliation module (if you did tutorial no. 18, it is already active). In the Vocabulary we now add the terms that are to be reconciled. You can see here how we add a location with coordinates (which are stored in the object City that we use as a reference value):
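If your vocabulary is long, you could also import the terms via CSV instead of entering them by hand (CSV import is described in the import tutorials). A minimal sketch with hypothetical columns – the term and the city it references:

Term;City
Bern;Bern
Basel;Basel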

Now we set up the reconciliation. Here we define which fields are to be reconciled with which vocabulary. We have various setting options. The following example shows only one of them.

Now we go to ‘run’ to start the reconciliation. We reset the status (red button) if objects are still displayed (Objects Processed), which is not the case in the following example.

We now have many more options for how the algorithm should search. Here we select only the option Auto-Save: Any Result. Depending on the result, you can make further settings later and start the reconciliation again.

After reconciliation, click on the network icon to display the network of the found terms. Click on the globe icon to create the map (in the Visual Settings you have to set where the coordinates are stored). Visual Settings:

This example of a reconciliation shows only a simple use case. You have numerous other configuration options to pursue your specific questions about texts with nodegoat.

See also the tutorial on the reconciliation of texts on nodegoat.net:

https://nodegoat.net/guide.s/145/reconcile-textual-data

 

20. Classify a nodegoat data model with the CIDOC CRM

The CIDOC reference model standardises information by assigning it to the terms of the CIDOC CRM (Conceptual Reference Model). The aim of standardisation is to simplify the exchange of information, for example between research projects. The CIDOC CRM was originally developed for the cultural heritage sector.

The CIDOC model consists of classes (entities) and properties that can be assigned to data or data models. The CIDOC CRM data model is thus structured in basically the same way as the data model in nodegoat, as both concepts have an object-orientated structure. The classes of CIDOC correspond to the objects and the properties to the categories (classifications) in nodegoat. From the perspective of data collection, classes therefore correspond more to categorising data and properties more to descriptive data.

Thanks to its flexible data modelling, nodegoat can also be used to create an RDF data model with the familiar RDF structure: subject – predicate – object. The Resource Description Framework (RDF) is a W3C standard for mapping and expressing information in the Semantic Web. Subject and object are resources that are linked to other resources or literals (number, character, date, etc.) by predicates (properties) to form statements.

In the object-orientated database model in nodegoat, the RDF subject and RDF object correspond to the objects and the properties to the object descriptions. A simple data model in nodegoat to record data in the format subject – predicate – object, for example, consists of 3 objects:

Subject
Predicate (Literals)
Object

In the model, two object descriptions are created in the nodegoat object 'Subject': one referring to the object and one to the predicate. Additional literals (number, character, date, etc.) can be entered in the object 'Predicate', through which the objects can also be linked. This is just one possibility of an RDF-orientated data model; the literals, for example, could also be recorded in a separate object in nodegoat.
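A worked example of a statement recorded in this model might look like this (the values are purely illustrative):

Subject: Bern
Predicate: lies on (optional literal, e.g. a date)
Object: Aare

Together, the three objects express the RDF statement 'Bern lies on the Aare'.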

There are various ways to use the classes and properties of the CIDOC CRM for mappings in a nodegoat project. If you want to orientate your project on CIDOC, it is of course best to adapt the data model accordingly at the start. The procedure for a dynamic assignment to a reference data set is described below; it is particularly suitable for classifying a data model with CIDOC retrospectively.

First, the classes and properties of the CIDOC CRM are imported for a nodegoat project via CSV upload. You can download the CSV file here:

CIDOC__classes_properties.csv

The classes and properties are not differentiated in the CSV file. You can separate them after the import in nodegoat (or beforehand in Excel) if required. However, this is not necessary for a simple use case. The import of CSV data into nodegoat is already described in other tutorials.
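For orientation, the first lines of the file might look like the following sketch (a single column of names; the class and property labels shown are genuine CIDOC CRM terms, but the actual column layout of the file may differ):

Name
E1 CRM Entity
E21 Person
E53 Place
P1 is identified by
P2 has type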

To be on the safe side, always compare your classes and properties with the latest developments of the CIDOC CRM, as new versions may introduce changes:

https://www.cidoc-crm.org/versions-of-the-cidoc-crm

In your data model, before the import, create an object (or a category) with a name such as 'CIDOC' and activate it for your project. Then import the CIDOC CSV data into this object (or category). It is advisable to use an object rather than a category, because CIDOC is a central component of the project. After the import, you can search for CIDOC classes or properties in the magnifying-glass field (activate 'Quick search' for the data field in the data model to make it searchable).

In your data model, you then create object descriptions or sub-object descriptions that refer to the object (or category) ‘CIDOC’. If necessary, you can create several descriptions for an object or a sub-object.

Now create a reference data set in which all data fields are filled in with examples. In this data set, which you might name 'CIDOC', you select the corresponding class or property from the CIDOC object in each object description.

The following example shows a basic application. A description for CIDOC was created in the 'Person' object in the data model. Optionally, you can create a description 'CIDOC reference data set' and mark your reference data set in the data area with 'yes'. All other data sets are set to 'no' by default. This allows you to easily filter your reference data set out from the other data sets.

In the reference data set, it is best to start by assigning the classes and then refine the assignment with the properties. You can download the reference data set from nodegoat as a CSV file, describe it in a README file and hand it over to digital long-term archiving together with the data that you have collected with nodegoat and also downloaded as CSV data. Optionally, you can also publish the reference data set and the data dynamically via the JSON interface (API) integrated in nodegoat.

21. Import a data model for correspondence networks using the nodegoat interface (API)

This data model, which you can use to record persons and correspondence, corresponds to the model described in the guides on nodegoat.net. There you can see how to build the model in nodegoat and record data. By importing the data model, we can start directly with data collection, which is described here: https://nodegoat.net/guide.s/6/enter-your-first-data

The data model contains the basic field functions of nodegoat, which you can customise before or during data collection. The model is configured in such a way that you can easily visualise people and correspondence on maps, in networks and in time series.

Prerequisites: You must have the API function available in the management of your nodegoat domain, which is generally the case with an institutional nodegoat installation.

First, we create an access (API client) for our nodegoat environment (domain).

Then we add a user to the API client.

By adding a user, a passkey (token) is created for this user. The passkey is our password for logging in to our nodegoat domain via the command line (terminal, prompt).

This will take you to the prompt input field (command line) in Windows:

In OSX (Mac) you have to open the ‘Terminal’ programme to access the command line.

Enter the following information in the command line to log in and import the model. Copy the passkey (highlighted in red) to the position shown here in the command line (terminal, prompt). Be careful with the quotation marks (""): make sure they are straight quotation marks, not curly ones – it is best to type them directly in the terminal / prompt.

curl -H "Authorization: Bearer N4zU7eX8TnC00N9D20boEAUkhSqecd8TLmbbaz5kK3123456" https://api.nodegoat.yourdomain.com/model/type -X PUT -d @

In place of yourdomain.com (blue) we insert our own nodegoat domain. If you do not know it, ask the administration of your nodegoat installation.

Tip: You do not have to retype the command in the OSX (terminal) and Windows command line (prompt) each time. Pressing the up arrow key brings back your last entry; your entries are saved in the terminal's history.

You can download the ‘Correspondence network’ model, which we use as an example for the import, here:

Correspondence_network_Template_Personal.json (zipped)

Unzip (decompress) the model (JSON file) after downloading. The file then looks like this: ‘Correspondence_network_Template_Personal.json’

Then drag the file with the mouse to the position after the @ in the command line; the terminal inserts the file's path there. Press 'Enter' to start the import of the data model. If it was successful, a corresponding message appears in the command line.
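If the file is located, for example, in your Downloads folder, the complete command might look like this (passkey, domain and file path are placeholders for your own values):

curl -H "Authorization: Bearer N4zU7eX8TnC00N9D20boEAUkhSqecd8TLmbbaz5kK3123456" https://api.nodegoat.yourdomain.com/model/type -X PUT -d @/Users/yourname/Downloads/Correspondence_network_Template_Personal.json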

You can now activate the imported objects (3) and categories (1) of the model in the management of your nodegoat domain for any projects. For the imported city object, you can import additional locations with coordinates if required (via CSV upload), or add individual locations manually. Optionally, you can also use the pre-installed city object in nodegoat instead and adjust it accordingly in the imported data model.

 

22. Export of a data model using the nodegoat interface (API)

Data models (and data) can be fully exported from nodegoat in JSON format. An exported data model can be imported into another nodegoat installation and thus data models can be exchanged in the community.

Prerequisites: You must have the API function available in the management of your nodegoat domain, which is generally the case with an institutional nodegoat installation.

We first activate the project, in this case ‘Archaeological sites’, for which we want to export the model. Then create an access (API client) for our nodegoat environment (domain).

Then we add a user to the API client.

By adding a user, a passkey (token) is created for this user. The passkey is our password for logging in to our nodegoat domain via the command line (terminal, prompt).

Now we change to the command line. This will take you to the prompt input field (command line) in Windows:

In OSX (Mac) you have to open the ‘Terminal’ programme to access the command line.

Enter the following information in the command line to log in and export the model. Copy the passkey (highlighted in red) to the position shown here in the command line (terminal, prompt). Make sure that the quotation marks are straight, not curly.

curl -H "Authorization: Bearer N4zU7eX8TnC00N9D20boEAUkhSqecd8TLmbbaz5kK3123456" https://api.nodegoat.your.domain.com/project/1234/model/template/type/123456,123457,123458,123459\?output=template

In place of your.domain.com (blue) we insert our own nodegoat domain; if you do not know it, ask the administration of your nodegoat installation. After /project/ we enter the ID of our project (pink) and, at the end, the IDs of the types and classifications (green) that we want to export. You can find the ID of your project in Management and the IDs of types and classifications in the model:

Tip: You do not have to re-enter the command in the OSX (terminal) and Windows command line (prompt) every time. Pressing the up arrow key brings back your last entry; your entries are saved in the terminal's history.

Press 'Enter' to start the export. The exported data model appears in your command line. From there you can copy / paste it manually into a text file (make sure that the file is plain, unformatted text).
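Alternatively, you can redirect the output of the export command straight into a file using standard shell redirection (the file name exported_model.txt is arbitrary):

curl -H "Authorization: Bearer N4zU7eX8TnC00N9D20boEAUkhSqecd8TLmbbaz5kK3123456" https://api.nodegoat.your.domain.com/project/1234/model/template/type/123456,123457,123458,123459\?output=template > exported_model.txt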

Now comes the most important step: In order to import the data model, we need to make small manual adjustments in the file. This is what the header of the exported data model looks like in the command line:

1. Remove the purple part (make sure the file still begins with a curly bracket at the top left).
2. Rename "types" to "add" (keep the straight quotation marks).
3. Delete 1 bracket at the very end of the file, to match the 1 we deleted in the purple part (a schematic before/after sketch follows after this list).
4. Make sure that at the very end of the file the text is terminated only by the bracket and not by other characters.
5. Pay attention to the formatting: save the file as plain text (UTF-8).
6. Add the extension .json to the text file that now contains the data model so that it can be imported.
7. Open the file in a browser. If everything is correct, no error message is displayed but the tree structure of the model appears, as shown here (at the top left you can see the "add" that we renamed):
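Schematically, steps 1–3 amount to the following (the "wrapper" key is only a placeholder – the exact keys in the export depend on your nodegoat version – and the numeric IDs stand for your exported types and classifications):

Before the adjustments:
{ "wrapper": { "types": { "123456": { ... }, "123457": { ... } } } }

After the adjustments:
{ "add": { "123456": { ... }, "123457": { ... } } }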

You can now import this data model into another nodegoat installation as described in tutorial no. 21.

The tutorial only shows one possible application of the nodegoat interface. Further information can be found in the nodegoat documentation:

https://nodegoat.net/documentation.s/59/api

https://nodegoat.net/documentation.s/96/basic-principles

https://nodegoat.net/documentation.s/98/query

https://nodegoat.net/documentation.s/103/store