Nodegoat Tutorials

nodegoat is a virtual research environment that is widely used by an international research community, especially in the humanities. nodegoat is web-based and allows the creation of data models specifically adapted to research questions.

Let’s take a look at the basics when starting a project.

The most important sections in nodegoat are: Data – Management – Model. When you log into your new nodegoat research environment, you must first create a project in ‘Management’. Then you create the data model, i.e. the database fields, in ‘Model’. Once the database fields have been created in ‘Model’ and activated for your project, they are available in the ‘Data’ section, and you can immediately start filling in your research data there. Note: you activate the database fields of ‘Model’ in your project in ‘Management’ by selecting them. This way you can control which database fields from ‘Model’ are displayed in your ‘Data’ section.

In the ‘Data’ section we mainly work with Objects and Categories. We use Objects for the main types of our data model, for example persons. We use Categories as attributes of these main types, for example the occupation of a person. Attention: Objects and Categories are named differently in the ‘Data’ section than in the ‘Model’ section: Data: Objects – Categories / Model: Object Types – Classifications. Here is the ‘Model’ section:

And what are the ‘Reversals’? With the ‘Reversals’ you can tag your data automatically. But look at them later, once you have filled in your data (by hand, by data import (CSV) or by data ingestion).

Data modeling in Nodegoat

This is the ultra-short version of data modeling in nodegoat. Basically, you build your model in nodegoat with:

1) Objects

E.g. persons, events, institutions, texts, manuscripts, pictures, maps, etc.

2) Sub-objects in which you can enter dates (point, period, vague dates) and places.

The main purpose of the sub-objects is to contextualise the objects in time and space.

3) Categories

Difference between Objects and Categories? Objects are classified with Categories. Example: an Object can be a person, and a Category the person’s profession. So easy? Yes. Objects and Sub-Objects can further have descriptions (Text, Images, Links, Relations). Sounds a bit like Excel to you? Yes: an Object Type is similar to a sheet in Excel, its descriptions are similar to columns, and each Object is a row. And Sub-Objects? This is special in nodegoat: it is as if you could store locations and dates for each row in Excel. So if you have data in Excel or in a similar format, it must be quite easy to import it into nodegoat? Yes, if your data has a clean structure and is consistent.
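To make the Excel analogy concrete, here is a purely conceptual sketch of one Object written as a JavaScript object. This is not nodegoat’s actual storage format, and the names (‘Jane Doe’, ‘Residence’, ‘Basel’) are invented for illustration; it only shows how Object, descriptions, Category and Sub-Object relate to each other:

```javascript
// Conceptual sketch only (not nodegoat's internal data format).
const person = {
  objectType: 'Person',                // the Object (one of the main types of the model)
  descriptions: {
    name: 'Jane Doe',                  // a plain text description
    profession: 'Artist'               // a description linked to a Category (Classification)
  },
  subObjects: [
    {
      label: 'Residence',              // a Sub-Object contextualises the Object ...
      date: { start: '1-8-2020', end: '31-8-2020' },  // ... in time
      location: 'Basel'                // ... and in space
    }
  ]
};

console.log(person.descriptions.profession); // "Artist"
```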

Are there any mandatory requirements or restrictions for data modelling in Nodegoat?

No, you can define your model individually, adapted to the research questions.

Should you adapt your model to existing standards, such as the CIDOC Conceptual Reference Model (CIDOC CRM)?

That is possible, yes. nodegoat can generate and provide standardised data and thus improve the interoperability of research data. However, it is advisable to first adapt the data model to the sources and to the research questions, and to consider what new insights can be gained with this data model (and the data collected and visualised in it). So you should first ask yourself whether your data model is consistent and uniformly structured and, above all, whether outsiders understand what you want to represent and say with your data model and the data. A secret tip: think from the end. What new insights do you want to present with your data and how do you want to visualise them? What should a map, a network or a time series with your data look like so that outsiders can understand it?

But where should you start? Which Objects and Categories should be defined, and how should they be named?

That is the key question. Objects and Categories are on the same level at the beginning. By creating relationships between the two (or between Objects), we form a structure (hierarchy) that represents our data model. Secret tip: first find about 5 Objects that are important for your data model. These are the archetypes of your model, the central types. If you are interested in people, you will create an Object ‘Person’. If you are researching books and their contents, then the Object ‘Book’ is central for you. Pro tip: don’t make the Objects too small. For example, if you are interested in artists, it is better to create a ‘Person’ Object and a ‘Profession’ Category where you can enter ‘Artist’. This way you can enter not only artists in the ‘Person’ Object, but also patrons or people related to the artists. This gives you an overview of all the people in your project in the ‘Person’ Object. And it prevents redundancies, for example if an artist is also a patron. Another pro tip: you can of course also use ‘Object’ as the name for an Object Type and use it to record a wide variety of things (archaeological finds, manuscripts, books, pictures, maps, etc.).

A question that is often asked: Can I export all the data I have entered in Nodegoat completely at any time?

Yes, either as CSV data, as ODT (Open Document Text), or via the API in JSON format (whether JSON export is available depends on the type of nodegoat installation).

 

TUTORIALS

If you are in a hurry and looking for quick success with nodegoat, you should start right away with tutorial no. 10 (or no. 13). Tutorial no. 10 shows how to create a basic data model, import research data (ship positions) of a climate project and visualise it on a map. A short video (no sound, no comments) shows from scratch how to do it. For the tutorial you need a nodegoat account and the test data set I provide on this website. If you don’t have an account yet, ask your friend where to get one, or ask your institution (university) whether it provides nodegoat as a digital tool (for example nodegoat GO, a multi-user platform). If you are studying or working at the University of Bern, you can get a nodegoat account here: https://forms.gle/Gjm4682EJLsq5TCR7. Or get a student account directly at nodegoat: https://nodegoat.net

Overview of the tutorials:

1. Getting started: create your first Project
2. Your first visualization
3. Entering dates
4. Entering vague dates
5. Importing Locations with CSV data
6. Create your first relation
7. Create your first Classification
8. Expand your Data Model
9. Change the background map
10. Import your Excel data (CSV data)
11. Advanced Tutorial – Import of Open Data
12. Show observations (with date and places) for an object in list view
13. Import data from Wikidata (Import module)
14. Import data from Wikidata (dynamic data ingestion)
15. Colouring dots on a map and creating a legend for them
16. Draw an area yourself (GeoJSON) and embed it as a map
17. Network analysis of correspondences with nodegoat

18. The ‘Ideal Standard’ (IS) data model for nodegoat, which is suitable for many humanities research questions (in progress)

 

1. Getting started: create your first Project in ‘Management’ / add Object Types in ‘Model’ / add Object descriptions in ‘Model’ / activate the Object Types in ‘Management’ / work with the Object Types in ‘Data’

If you have a new nodegoat account, just follow the instructions in the video after logging in to create your first project with one Object and two Object descriptions (the videos have no sound and no comments; just watch exactly what is done). Additionally, after logging in, there are info texts in nodegoat that show you what you need to do to create a new project.

 

2. Your first visualization: Locations must be stored in the Sub-Object of an Object. Create a Sub-Object ‘Location’ for your Object in ‘Model’

In this video we will add a field to store locations in our “First Object” that we created in video 1. Locations are not stored in the object description (like the name of the Object), but in the Sub-Object, as you will see in the video. You’ll also see how to change a Location:

 

3. Entering dates: Dates must be stored in the Sub-Object of an Object

In this video we will store a date for our ‘first Object’. Then we go to ‘Model’ and select ‘Period’ in the Sub-Object so that we can store two dates (start date and end date) in our ‘first Object’. This means that for each Object in nodegoat you can choose a point in time or a period of time. In addition, you can also save vague dates, as we will show later in video 4. Attention: the date format uses hyphens and day-month-year order: 1-8-2020 (not 1.8.2020).

 

4. Entering vague dates: Vague dates must be stored in the Sub-Object in Chronology

In this video, a simple example shows how to work with vague dates in Nodegoat. There are many more ways to capture vague dates in Nodegoat, see: https://nodegoat.net/guides (working with temporal data). In the example we do not know the exact start date, but we estimate that it was 5 days after the start we entered earlier (1-8-2020). So we make a statement: ‘5 days after begin start date’. Such vague dates are entered in the Sub-Object. Not as ‘Point’, but as ‘Chronology’, as shown in the video.

 

If you want to learn more about vague dates and chronological statements in nodegoat, check out this presentation by Pim van Bree and Geert Kessels (LAB1100):

 

5. Importing Locations with CSV data: Geo coordinates must be stored in Sub-Objects

In this video you see how to create an Object Type ‘Locations’ with Object descriptions that match the column names in the CSV file. After creating the Object Type, import the sample CSV data into your Object ‘Locations’. The sample data provides locations (40k) with geonames.org IDs and geo coordinates: Location1.csv

Hint: about 130k locations from geonames.org (Type: City) come preinstalled in nodegoat. These locations can be used and extended by all users as a collaborative resource.

6. Create your first relation: Relations are created in the Model to be used in Data

In this video we establish a relationship between the ‘first Object’ and Locations, because we want to use the Locations as a georeference for the ‘first Object’. In Model we select ‘Locations’ as the georeference in the Sub-Object of the ‘first Object’. In Data we enter a Location and see that it is not visualised immediately, because we first have to set the Visual Settings correctly (selecting Location as the reference for the visualisation). We then change the Location in the ‘first Object’ and see that we have to activate the ‘Quicksearch’ field in Model (in the Object descriptions of Location), so that we can search for a new Location in the Quicksearch field.

7. Create your first Classification (Category): Classifications are created in the Model to be used in Data.

In this video we create a classification ‘Attribute’ in the Model. Then we go to the ‘first object’ in the Model to add an Object description ‘Attribute’ that is linked to the created classification.

 

8. Expand your Data Model: Person with ‘Event Birth’.

In this video we change the ‘first Object’ to ‘Person’ and add a Classification ‘Event (kind of)’ in Model, linking it to the Sub-Object description of Person. Then we add the event ‘Birth’ to the Sub-Object. This gives us a simple model for biographies that we can extend with other events (such as death, activities, etc.). In the Sub-Object we can add dates and a location to each of these events.

 

9. Change the background map

In nodegoat you can integrate background maps as you like, as long as the maps are available on a tile server via a link. So you only need this link to the map. But where can you find such links? Google is your friend. Or this short tutorial. In nodegoat, go to the Visualisation Settings.

Then go to the Visual Settings tab. You will automatically get to the Geographical Settings.

The standard map is the Google Map with its copyright notice. Remove the Google Map link from the Map field and insert the new link for your map. Change the copyright notice as desired.

Save your Map settings:

I have compiled some links here. Don’t be afraid of the length of the links, they are just like this, sometimes longer, sometimes shorter. In all of them, the placeholders {x}, {y} and {z} stand for the tile coordinates and the zoom level, and {s} for the server (subdomain); they are filled in automatically when the map is loaded:

Google Map

https://mt.google.com/vt/lyrs=m&x={x}&y={y}&z={z}

Google Satellite Map (without places, topography)

https://mt{s}.google.com/vt/lyrs=s&x={x}&y={y}&z={z}

Background info on Google Maps:  the standard link for Google Map tiles looks like:

https://mt.google.com/vt/lyrs=m&x={x}&y={y}&z={z}

If you want a different Google Map, just replace the letter in the URL at lyrs=

So for example lyrs=m

This will then display the standard street map.

The whole link will then look like this:

https://mt.google.com/vt/lyrs=m&x={x}&y={y}&z={z}

With these letters you can change the map:

h: shows only roads

m: is the standard roadmap

p: shows the terrain

r: is another roadmap

s: shows the satellite view

t: shows only the terrain

y: is a hybrid map with terrain and roads

 

Grey map without places

//mt{s}.googleapis.com/vt?pb=!1m5!1m4!1i{z}!2i{x}!3i{y}!4i256!2m3!1e0!2sm!3i336008092!3m14!2sen-US!3sUS!5e18!12m1!1e47!12m3!1e37!2m1!1ssmartmaps!12m4!1e26!2m2!1sstyles!2zcy5lOmd8cC5jOiNmZmY1ZjVmNSxzLmU6bHxwLnY6b2ZmLHMuZTpsLml8cC52Om9mZixzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLmU6bC50LnN8cC5jOiNmZmY1ZjVmNSxzLnQ6MXxzLmU6Z3xwLnY6b2ZmLHMudDoyMXxzLmU6bC50LmZ8cC5jOiNmZmJkYmRiZCxzLnQ6MjB8cC52Om9mZixzLnQ6MnxwLnY6b2ZmLHMudDoyfHMuZTpnfHAuYzojZmZlZWVlZWUscy50OjJ8cy5lOmwudC5mfHAuYzojZmY3NTc1NzUscy50OjQwfHMuZTpnfHAuYzojZmZlNWU1ZTUscy50OjQwfHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTllLHMudDozfHAudjpvZmYscy50OjN8cy5lOmd8cC5jOiNmZmZmZmZmZixzLnQ6M3xzLmU6bC5pfHAudjpvZmYscy50OjUwfHMuZTpsLnQuZnxwLmM6I2ZmNzU3NTc1LHMudDo0OXxzLmU6Z3xwLmM6I2ZmZGFkYWRhLHMudDo0OXxzLmU6bC50LmZ8cC5jOiNmZjYxNjE2MSxzLnQ6NTF8cy5lOmwudC5mfHAuYzojZmY5ZTllOWUscy50OjR8cC52Om9mZixzLnQ6NjV8cy5lOmd8cC5jOiNmZmU1ZTVlNSxzLnQ6NjZ8cy5lOmd8cC5jOiNmZmVlZWVlZSxzLnQ6NnxzLmU6Z3xwLmM6I2ZmYzljOWM5LHMudDo2fHMuZTpsLnQuZnxwLmM6I2ZmOWU5ZTll

Dark map for cool visualisations to impress your friends….

http://mt{s}.googleapis.com/vt?pb=!1m5!1m4!1i{z}!2i{x}!3i{y}!4i256!2m3!1e0!2sm!3i323349059!3m14!2sen-US!3sUS!5e18!12m1!1e47!12m3!1e37!2m1!1ssmartmaps!12m4!1e26!2m2!1sstyles!2zcy50OjF8cC52Om9mZixzLnQ6MnxwLnY6b2ZmLHMudDozfHAudjpvZmYscy50OjR8cC52Om9mZixzLnQ6NnxzLmU6bHxwLnY6b2ZmLHMudDo1fHMuZTpsfHAudjpvZmYscy50OjgxfHAudjpvZmYscy50OjZ8cy5lOmd8cC5sOi0xMDAscy50OjgyfHAuczotMTAwfHAubDotODM!4e0

Fig. Visualisation of the immigration of new citizens (blue) to cities in Europe in the Middle Ages.

 

Digital Atlas of the Roman Empire

https://dh.gu.se/tiles/imperium/{z}/{x}/{y}.png

For more on this cool project, see: https://dh.gu.se/dare/

Mercator map from 1607

https://maps.georeferencer.com/georeferences/66a34667-1847-5ea6-b6a8-c81736a3425d/2018-08-26T20:22:32.883884Z/map/{z}/{x}/{y}.png?key=mpcE7jAf5llCJV0hoUfk

The example of the Mercator map refers to Georeferencer, a service for online maps where you can find a lot of links to historical maps. Many institutions have their own account at Georeferencer, like the David Rumsey Map Collection, which offers a useful overview of the georeferenced maps (world map):

https://www.davidrumsey.com/view/georeferenced-maps

Create an account at Georeferencer to get the link for a map provided (for example by the David Rumsey collection). Log in at Georeferencer, choose a map here:

https://www.davidrumsey.com/view/georeferenced-maps

Then go to: ‘This map’ and to ‘get Links’. Copy the link into the Map field of nodegoat.

As another example, the British Library also has an account at Georeferencer. You can find their maps here, on the interactive map:

https://britishlibrary.georeferencer.com/api/v1/density

 

10. Import your Excel data (CSV data), which have geo coordinates (longitude and latitude), into nodegoat and visualize the data on a map

In this tutorial I provide a set of test data with geocoordinates that you can easily import and visualize, like the map below. It shows positions of ships calculated from logbooks (18th – 19th century). The data are especially interesting for historical climate research.

The prerequisite is that you already have a nodegoat account. If you don’t have an account yet, ask your friend where to get one, or ask your institution (university) whether it provides nodegoat as a digital tool. If you are studying or working at the Faculty of Philosophy and History at the University of Bern, you can get a nodegoat account here: https://forms.gle/Gjm4682EJLsq5TCR7. Alternatively you can get a student account directly at nodegoat: https://nodegoat.net/

The following video (no sound, no comments) shows a step by step guide from scratch. The video starts with the login into your nodegoat account. The next steps are: Create a project, create an object type, download data sample from this website, import and visualize the data:

If you prefer written instructions, you can continue here. These instructions are identical to the video, but contain some background information.

Log in to your nodegoat account. Import the CSV data into your already existing project or create a new one: ‘climate project’. We will import a data sample from an interesting project about ships’ logbooks, which are important for weather observations. Here is the website of the project where the data is available:

https://www.historicalclimatology.com/cliwoc.html (Climatological Database for the World’s Oceans, CLIWOC)

“The database consists of 287,114 logbooks written aboard Dutch, English, French, and Spanish sailing ships. The vast majority of these logbooks date from between 1750 and 1850, yet four ship logbooks were incorporated that predate 1750. These were centuries of European imperial expansion, and so the logbooks record the activities of sailors – both civilian and military – in oceans that span the entire globe.”

I have downloaded the following data: ‘Download as an Open Office Spreadsheet’

I opened the spreadsheet in Excel and first added a column on the far left to give the records (rows) a unique identifier, because they don’t have one. To do this, I entered the following into field A2 in Excel, which contains the first record:

Then double-click on the fill handle at the bottom right of the cell and it fills the whole column with identifiers. The identifiers are very important: with them you can later update your data records in nodegoat (‘Update Existing Objects’). I always import the identifiers into nodegoat first and then update the records with additional information based on the identifiers. In the nodegoat import web interface you can choose whether you want to create new records or update existing ones.
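If you prefer a script to the Excel formula, the identifier column can also be added automatically. Here is a minimal sketch in Node.js; the file names and the ‘CLIWOC_’ prefix are my own assumptions and not part of the original dataset:

```javascript
// add_identifiers.js: prepend an identifier column to the exported CSV.
// Assumes a simple CSV without line breaks inside quoted fields.
const fs = require('fs');

const lines = fs.readFileSync('cliwoc_export.csv', 'utf8').trim().split('\n');
const header = 'identifier,' + lines[0];                          // new first column
const rows = lines.slice(1).map((row, i) => `CLIWOC_${i + 1},${row}`);

fs.writeFileSync('cliwoc_with_ids.csv', [header, ...rows].join('\n'), 'utf8');
console.log(`Wrote ${rows.length} records with identifiers.`);
```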

In nodegoat you can import 50k data records (rows) at once. So if you want to import all of the more than 200k data rows, you have to split them up. I have already done this by providing a test data sample here with 20k records that you can use for your import. I have prepared this data and selected just a few columns to get started: identifier, ship name, year, longitude and latitude. You can download the test data sample here:

climate project test data (CSV)

Your Data Model in your project for this data sample should look like this:

Just add one Sub-Object with ‘year + coordinates’

We will import the data into nodegoat via web interface (you can also import data via JSON interface, we will cover this in tutorial 14).

Go to Model > Import > CSV Files, there you upload the downloaded CSV file (climate project test data).

Background: to import your data into nodegoat, the data must be available in a text file in UTF-8 format. In Excel, for example, you can save your data as CSV data: go to ‘Save as’ in your Excel sheet and choose CSV UTF-8 as the file format. CSV means comma-separated values. Open your CSV file with a text editor and you will see the many separators in the data.

Go to the Import Template. Map the fields of your CSV data to the fields in your data model. For the year (YR), select the Date Start field in your data model.

Now you can run your import template. You can first check a selection of the records to see if you have mapped the fields correctly. Click on Next to import the 20k data records.

Have a coffee now, you have already achieved a lot today.

After the import go to ‘Data’ and click on the Geographical Visualisation.

This is what your result should look like. Zoom in and don’t forget to play with the time slider. You will also discover ships in the middle of the African desert; these are errors in the data, records that are missing either the longitude or the latitude. So the visualization also helps you to detect such errors.

You can also make the dots on the map smaller. In the tab where the Geographical Visualization is located, go to ‘Visual Settings’ to the right of it and open it. Set your dot size to 3 and visualize the data again on the map; see also the video for this.

You can now update your data records based on the identifiers. Create a CSV file with the data you want to import next. When importing, select ‘Update Existing Objects’ and choose your identifier to map the CSV data to the corresponding record in your nodegoat database.

 

11. Advanced Tutorial – Import of Open Data into nodegoat: Benjamin Franklin’s Post Office Records

Benjamin West: Benjamin Franklin Drawing Electricity from the Sky

Benjamin West (1738-1820): Benjamin Franklin Drawing Electricity from the Sky, Philadelphia Museum of Art, image Source: Public domain, via Wikimedia Commons

In 1743 the American Philosophical Society (APS) – the oldest learned society in the US – was founded in Philadelphia by Benjamin Franklin, John Bartram, Francis Hopkinson and others for the purpose of “promoting useful knowledge”, as you can read on the APS website. Today, the APS follows an open data strategy that encourages researchers to use and reuse the open datasets provided under a Creative Commons Attribution 4.0 License. In this tutorial, we will work with such a dataset, as it is a very useful resource to show what you can do with it in nodegoat, focusing on importing and mapping the data. Citation of the dataset: Heider, Cynthia, Bayard Miller and Scott Ziegler. Post Office Book, 1748-1752. BF85f6-8. Distributed by Philadelphia: American Philosophical Society Library & Museum, 2017. https://diglib.amphilsoc.org/islandora/object/compound:11.

Franklin was not only one of the founders of the society; he also became Postmaster of Philadelphia in 1737, appointed by the British Crown Post. For a later period when he was serving as Postmaster, records of letters are still available; they are archived at the APS library and made available as Open Data, as we read on the APS website:

“Benjamin Franklin’s Post Office Records: Post Office Book, Philadelphia incoming and outgoing mail, 1748-1752. Created while Benjamin Franklin served as Postmaster of Philadelphia, these datasets reveal a wealth of previously untapped information about colonial correspondence.” (Source: APS website).

In the following tutorial, we will import into nodegoat only the outgoing letters from the post office in Philadelphia. For the tutorial there are three videos which show from scratch how the data for the outgoing letters are imported and visualised in nodegoat. The videos have no sound and no comments; just observe what is done. Requirements: you need a nodegoat account and Google Sheets (or Excel, but the step-by-step tutorial uses Google Sheets).

The challenge in Video 1 is to reformat the dates for import into nodegoat and to convert the American date format into the European one. An additional location (Philadelphia Post Office) is added for visualisation. The tutorial starts with the data download from the APS website:
https://diglib.amphilsoc.org/data
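The reformatting itself boils down to swapping day and month. In the video this is done in Google Sheets; as a hedged sketch of the same logic in JavaScript (the exact format of the date column in the APS spreadsheet is an assumption here):

```javascript
// Turn an American-style date ("10/21/1748", month/day/year) into the
// day-month-year format used in nodegoat ("21-10-1748").
// Sketch only; the actual column format in the APS data may differ.
function toNodegoatDate(usDate) {
  const [month, day, year] = usDate.split('/');
  return `${day}-${month}-${year}`;
}

console.log(toNodegoatDate('10/21/1748')); // "21-10-1748"
```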

Video 1:

In video 2, a data model is first created in nodegoat, based on the column names of the data in Google Sheets. Then the data is downloaded as CSV from Google Sheets and uploaded to nodegoat, the data fields of the CSV file are mapped to the data fields in nodegoat, and the import process is started. During the import process nodegoat automatically proposes assignments for the locations in the CSV data, which can be accepted or rejected.

Video 2:

Video 3 shows how to work with the imported data. First a basic function is shown: if you click just on one object in nodegoat (a window with the object opens) only this object is visualised on the map (by clicking on the globe symbol). If you call up all objects, for example by selecting ‘all’ to the right of the filter symbol (screenshot below) or by scrolling between the pages (1,2,3 etc), all objects are visualised:

If you do this with our data, as in the video, by clicking on the globe symbol, the locations that we have previously assigned during the import process will be displayed on the map. Not all of these locations are correctly located, because during the import process we did not check whether each location was really the right one, but simply chose a location in order to do the data cleansing afterwards in nodegoat. It would have been better if we had already cleaned the data in Google Sheets or Excel BEFORE the data import, which is highly recommended! But no worries: you can also clean up the data within nodegoat, as the video shows with an example. The video also shows how to add a database field ‘Comment’ to the data model and use it, for example if you want to indicate that you are unsure about the location of a place. And the video shows how to create a filter query to find locations that do not have geo-coordinates.

Video 3:

12. Show observations (with dates and places) for an object in list view

This data model, which is shown here in its basic features, works for example for:
Person (Object) – biographical event (observation), i.e. for biographical data. But of course it also works for other models in which we want to describe an object precisely with observations. It works for any kind of Object, such as books, texts, artefacts, institutions, organisations, etc.

With the observations (in the example the ‘event’), we contextualise the Object in time and space by recording this information in the Sub-Object. In the following example, we use ‘event’ or ‘Ereignis’ instead of the term ‘observation’.

Example:

In this example you see the biographical events of Ludwig von Adlikon in a list (with green, blue and red events). At the top we see further information about him: first name, surname, origin etc.

What does the data model in nodegoat look like? There are two Objects: Person and Event. In the example, we see the Object ‘Person’. The events related to this person (and thus listed) are stored in the Object ‘Event’.

1) Create the objects ‘Person’ and ‘Event’ in your data model in nodegoat in the section ‘Model’.

2) Create a description ‘Person’ in the Sub-Object of the Object ‘Event’ and link this field to the object ‘Person’, like here:

Create another Sub-Object like you did for ‘Person’ to store dates and places for an Event. Call it, for example, ‘Dates / Places’. Once you have done this, you can specify in the Sub-Object ‘Person’ that the date and place of this new Sub-Object should be automatically taken over into the Sub-Object ‘Person’ (important for visualisations and for storing the data properly in general). The example here shows how this works. ‘Datum_Location’ in the example stands for ‘Dates / Places’ and ‘Ereignis’ for ‘Event’. These are the settings for the Date; look at ‘Source’:

These are the settings for the Location; look at the ‘Reference’ and the ‘Source’:

 

3) In the ‘Management’ section, you have to enable in the ‘Person’ Object that the events will be displayed in list form in the ‘Data’ section: Management > Your Project > Organise > Person > Cross-Referenced > Event > Sub-Object Descriptions > Select the checkbox

Why is this model useful? Because it is very simple, but takes into account a fundamental principle of data collection in the humanities: observations. The events above are nothing but observations on biographical points in a person’s life: points (or periods) that you can record with a place and a date in the same Sub-Object where you store the person. Everything is included in such an observation: person, event, place (space) and time. So all the information is stored centrally in one observation. You can now use this information very well for visualisations (maps, networks, time series). Of course, we can extend this simple model with further observations and information, or even expand it towards capturing correspondences.

13. Import data from Wikidata (Import module)

In this tutorial we will import and visualise data from Wikidata on archaeological sites in Switzerland. I got the idea for this tutorial from an interesting student project at the University of Bern on the visualisation of archaeological data available via SPARQL endpoints, i.e. via an interface. Information about this project can be found here:

https://www.iaw.unibe.ch/forschung/bern_coda_lab/projects/student_projects/sparql_hs_2019/ssdi/index_ger.html

However, this is only one way to import data into nodegoat. In tutorial no. 14 we will show how we can import the same data directly (i.e. without uploading a file) from Wikidata into nodegoat using the dynamic data ingestion module.

If you want to skip the following data query from Wikidata, you can already download the CSV data here and go straight to creating the data model in nodegoat.

With the data query in Wikidata below, we first search for all archaeological sites in Switzerland, using Wikidata’s query service: https://query.wikidata.org

Go to the query with this link and run it.
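For orientation, the query has roughly the following shape. This is a hedged reconstruction, not necessarily the exact query behind the link: the item and property IDs used here (Q839954 ‘archaeological site’, P17 ‘country’, Q39 ‘Switzerland’, P625 ‘coordinate location’) are my assumptions. The snippet also shows how the same SPARQL endpoint can be queried programmatically, which becomes relevant for the dynamic ingestion in tutorial no. 14:

```javascript
// A reconstruction of the kind of query used here; the Wikidata IDs are assumptions.
const query = `
  SELECT ?site ?siteLabel ?coord WHERE {
    ?site wdt:P31/wdt:P279* wd:Q839954;   # instance of (a subclass of) archaeological site
          wdt:P17 wd:Q39;                 # country: Switzerland
          wdt:P625 ?coord.                # coordinate location
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en,de". }
  }`;

const url = 'https://query.wikidata.org/sparql?format=json&query=' + encodeURIComponent(query);

fetch(url)
  .then(response => response.json())
  .then(data => console.log(data.results.bindings.length, 'sites found'));
```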

We can download the result (over 1000 sites) as CSV data, i.e. as comma-separated values. We then upload this CSV data from our computer via web browser into nodegoat. But before we can upload the data for the import, we must of course first create a project in nodegoat with the same data fields that are present in the CSV data. We create this project in the ‘Management’ section and call it ‘Archaeological sites’.

Now we switch to the ‘Model’ section to create the three data fields of our project and add one Object Type that will contain all the data. We match the data fields of the Object Type to the CSV data, which contains the following categories: site, coord, siteLabel. We now create these categories (= data fields) in our Object Type, which we call ‘site’. We use site and siteLabel in the data model as descriptions of this Object, while coord, i.e. the geo-coordinates, is imported into the Sub-Object of this Object in order to be able to visualise the data.

We save our Object Type and go back to ‘Management’ and to our project (Archaeological sites) to activate the created Object Type ‘site’ in our project under ‘Model’:

Now we upload the CSV data via web browser into nodegoat.

Then we create an import template in nodegoat for the data mapping of CSV data to our three data fields in nodegoat.

We start the import by clicking on ‘run’ and the result will look like this:

We visualise the data with the geographic visualisation function and get this interactive map.

Fig. Archaeological sites in Switzerland (data from Wikidata)

With the conditions in nodegoat we can colour or weight the points on the map differently. Of course, we can now add further fields and information about an archaeological site in our data model and display them on the map.

Finally, we can export the data again from nodegoat and add our additions to Wikidata. A similar service to Wikidata is offered by the LOBID project, which provides a lot of very useful data. From Wikidata, as will be shown in the next tutorial, or from LOBID, data can also be imported dynamically into nodegoat.

14. Import data from Wikidata (dynamic data ingestion)

First, an important note on a very useful data conversion (or cleansing) function that we can perform for each database field before import. Concrete example: a data interface outputs the Wikidata number in the following format: http://www.wikidata.org/entity/Q3324044. However, we now want to import only the number ‘Q3324044’ into nodegoat and leave out the rest. To do this, we go to Model > Linked Data > Conversions, create a new conversion there and enter the following values into the fields as shown in the illustration. We see that we simply cut off the front part of the URI and so only the number (identifier) can be imported. To see the result of the script, we click on ‘test’.
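The conversion itself is a small piece of JavaScript (as mentioned below, conversions can use JavaScript). Here is a minimal sketch of the logic; how exactly nodegoat hands the value to the script in the conversion form is not shown here, so the function wrapper is illustrative only:

```javascript
// Input:  "http://www.wikidata.org/entity/Q3324044"
// Output: "Q3324044"
// Sketch of the conversion logic; the function wrapper is illustrative only.
function convertUriToId(uri) {
  return uri.split('/').pop();   // keep only the part after the last slash
}

console.log(convertUriToId('http://www.wikidata.org/entity/Q3324044')); // "Q3324044"
```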

Where do we have to add this conversion to be able to apply it for the import? We go to Model > Linked Data and to our Linked Data Resource that we have defined there. At the bottom of the assignments of the database fields to be imported, we can add the conversion (see illustration, we have previously saved the conversion as ‘Convert URI to ID’). This way, only the number is imported during the dynamic data import. Of course, date formats or all other values that you can convert with JavaScript can also be imported in this way. Since JavaScript offers many possibilities for this, the conversions have great potential.

Here is an example of converting a date format:
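As a hedged sketch of such a date conversion (assuming the source delivers ISO-style timestamps, as Wikidata does, and that the target is nodegoat’s day-month-year format):

```javascript
// Convert a Wikidata-style timestamp ("1748-08-01T00:00:00Z")
// into nodegoat's day-month-year format ("1-8-1748").
// Sketch only; the format shown in the example illustration may differ.
function convertDate(value) {
  const [year, month, day] = value.slice(0, 10).split('-');
  return `${Number(day)}-${Number(month)}-${year}`;
}

console.log(convertDate('1748-08-01T00:00:00Z')); // "1-8-1748"
```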

Pro-tip: if the data from the data sources to be imported are inconsistent (which is actually always a little bit the case), you can for example perform a ‘double import’: convert the dates with the conversion and import them into the date field in nodegoat, and at the same time import the dates as text (string) into an Object description. After the import, the incorrect dates in the Object description can easily be filtered out, and we only have to clean up these and not the rest of the correct dates.

Further information on the conversions can be found here: https://nodegoat.net/documentation.s/142/conversions

 

Now we start with tutorial no. 14. It builds on tutorial no. 13. We import the same data into nodegoat as in tutorial 13, but this time not by uploading a CSV file via web browser, but directly via nodegoat’s interface. We use the same data model as in tutorial no. 13.

The dynamic data ingestion module was developed as part of my SNSF SPARK project. Information and examples for the application of the module can be found here.

First we have to activate the module for data ingestion for our project. We go to ‘Management’ and to the project ‘Archeological sites’ > Project > Model > System > Ingestion

We need to set up two things in our nodegoat environment for the ingestion process:

1) Linked Data Resource (data retrieval via the interface)
2) Data mapping (for importing the data)

We go to Model > Linked Data > Add Linked Data Resource

Here we define the interface, in this example a SPARQL endpoint from Wikidata. Fill in the fields as follows:

Now we add the same query that we already used in tutorial no. 13. You can find the query under this link. Click on ‘test’.

In the response field we see the result of the query. Click on ‘use’  to assign the database fields from Wikidata to the database fields in nodegoat:

Save your Linked Data Resource and go to Data > Processes > Ingestion > Add Ingestion for the data mapping. Choose as Source your Linked Data Resource (Dynamic Data Ingestion). Target of the data import (ingestion) is your Object Type ‘site’. Map the database fields:

Save the ingestion process and run it. The result will look like this:

Click on Geographical Visualisation.

You will get this map as result. All archaeological sites in Switzerland available in Wikidata.

Fig. Archaeological sites in Switzerland (data from Wikidata)

The data ingestion process presented in this tutorial showed how to assemble a data collection. We can now enrich this data collection with another ingestion process using the update function. With this function we can enrich the whole or part of our dataset with further data, for example from Wikidata, from LOBID or from any other data source with a data output in JSON format. You could also use this dataset for network analysis in nodegoat if you adapt your data model accordingly.

You will find more information on the dynamic data ingestion module in the official documentation of nodegoat and in this blog post. You can also find more information and examples here: SPARK Workshop on dynamic data ingestion.

 

15. Colouring dots on a map and creating a legend for them

This tutorial builds on the previous tutorials (13 and 14). It shows how we can colour dots (= geo points) on a map and create a colour legend. The procedure described can be applied to maps in nodegoat in general.
To colour dots and to create a legend, we need the so-called ‘conditions’ in nodegoat, which you can find on the right in the toolbar for the visualisations. With the conditions we can specifically determine which dots of the data model we want to colour on the map (or in a network) according to which criteria. In the following example, we colour in red the dots that point to a ‘römischer Gutshof’ (a Roman manor estate). Check out the nodegoat user guide for further information about the conditions: https://nodegoat.net/documentation.s/88/conditions

We go to our data in the list view and select the conditions icon (the second icon from the right in the toolbar). In the tab we go to ‘Nodes’ (= dots). We select ‘Object’ and as title for the legend ‘Römischer Gutshof’ as well as any colour with which the dots should appear on the map. The title will appear in the legend that will be automatically created on the map.

What do we have to do now? We have to tell nodegoat which dots should be coloured red. To do this, we go to ‘Filter’ and enter the term ‘römischer Gutshof’ in the field ‘site label’.

We save our entries and go to the globe icon in the toolbar to view the result on the map. The legend is interactive. We can click on the coloured red bar to show or hide the corresponding dots on the map. This is helpful for visual data analysis, especially when we want to display and analyse many different coloured dots on a map.

We can now expand the legend in the conditions according to any criteria of our data set. For example, in addition to the Roman manor houses, we can also display graves and other finds that are present in our data set with the legend. Note: the conditions we have created are stored temporarily. If we want to save them permanently, we open the conditions and click on the blue ‘save’ button at the top left.

We can additionally integrate another background map. For example, the ‘Digital Atlas of the Roman Empire’. The procedure for this is already described in Tutorial No. 9. Here is the short version.

In the toolbar we go to: Visualisation Settings > Visual Settings > Geographical > Map

In the field ‘Map’ we copy the following link:

https://dh.gu.se/tiles/imperium/{z}/{x}/{y}.png

Then we go back to our dataset and click on the globe symbol to display the new background map.

Fig. Archaeological sites in Switzerland (data from Wikidata)

16. Draw an area yourself (GeoJSON) and embed it as a map

In nodegoat we can integrate any areas as independent maps or as background maps for visualisations. For example, areas for dioceses, principalities, counties, historical national and sovereign borders, but also self-defined spaces, such as for archaeological excavations or to particularly highlight a specific space of an investigation. Prerequisite: the areas must be drawn as a polygon (for an area) in GeoJSON format. The GeoJSON code is then copied into a predefined field in nodegoat. So: a simple copy / paste exercise. Once the area is saved in nodegoat, we can colour it with the conditions (edges, areas) and label it (this will be explained in a later tutorial).

Background: the JSON data format is generally used in nodegoat for the organisation (structuring) of data. GeoJSON is the geographical variant of this data format.

After we have drawn a polygon online in GeoJSON, the JSON code for this area appears in the right-hand column. We copy this code completely and then paste it into the sub-object of an object in nodegoat.
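For orientation, the GeoJSON code for a simple rectangular area looks like this. The coordinates below are purely illustrative (roughly around Minden, not the Germania Sacra boundary data), GeoJSON lists coordinates as [longitude, latitude], and the drawing tool may additionally wrap the geometry in a Feature or FeatureCollection:

```json
{
  "type": "Polygon",
  "coordinates": [
    [
      [8.80, 52.40],
      [9.10, 52.40],
      [9.10, 52.20],
      [8.80, 52.20],
      [8.80, 52.40]
    ]
  ]
}
```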

So we first create an Object in nodegoat, which we call ‘Geo Space’, for example. For this Object we create a Sub-Object, which we call ‘Area’. If you don’t want to draw an area yourself, you can download the GeoJSON code for the historical boundaries of the diocese of Minden around 1500 here (PDF). This code is provided by the research project Germania Sacra.

In the ‘Management’ section we activate the newly created object ‘Geo Space’ in our project in order to be able to use it in the ‘Data’ section.

Then we create a new Object ‘Geo Space’ in the ‘Data’ section and insert the JSON code into the field. To do this, we select ‘Geometry’ from the menu (at Location):

We save the input and visualise the area by clicking on the globe symbol in the toolbar (far left).

With the zoom bar (on the left of the display) we can zoom out and see the result, the diocese of Minden:

To include this map as a background map for a visualisation in nodegoat, we need to create a scenario in our Geo Space object.
A scenario consists of a filter query and visualisation settings. After we have created such a scenario, we can select it in the Visualisation Settings > Context and thus the map of the scenario appears as a background map.

Create a filter ‘Scenario Geo Space’. Select the Object ‘Geo Space’ and save the filter:

Create a scenario, choose the icon on the far right:

Choose a name for the Scenario and choose the Filter you have created:

Go to Visualisation Settings in the toolbar and choose your scenario as a context (background map)

17. Network analysis of correspondences in nodegoat

This tutorial builds on the data model from tutorial no. 12. This data model shows only one of many possibilities for a network analysis with nodegoat. Another model (for correspondences) is explained step by step in the guides on nodegoat.net:

https://nodegoat.net/guide.s/4/create-your-first-object-type

A special feature of this model is the use of the function of automatic data transfer from an object description to the corresponding data field in the sub-object: https://nodegoat.net/guide.s/9/add-a-related-object-type. In practice, this means that we only have to enter a person’s name once (in the object description) and it is automatically saved in the sub-object. In another tutorial we will take a closer look at this function of automatic data transfer (translation in progress…)

Let us now turn to the data model of this tutorial. In tutorial no. 12 we created a data model consisting of persons, events and locations. With these three basic objects you can tackle very different questions. You can also extend this model, for example with an additional object that you simply call ‘object’. Such an object can then be, for example, any object, such as a book, a manuscript, or it can also stand for buildings, archaeological finds, etc. With the data model Person – Event (Ereignis) – Object (Objekt) – Location you have a very comprehensive and versatile model.

For this tutorial, however, we will use the model Person – Event (Ereignis) – Location, where we capture correspondences as events. ‘Correspondence’ is therefore an event type. The data model works in such a way that for the ‘Event’ we capture the associated persons in the sub-object, in this case the correspondents. In the data model we create two sub-objects in the Object ‘Event’, which we call ‘Sender’ (Absender) and ‘Recipient’ (Empfänger) (or also: ‘Addressee’). We also create a third sub-object for persons mentioned in the correspondence, for example the sub-object ‘Person mentioned’ (Person erwähnt). If we also want to record the subjects of the correspondences in keyword form, we can create another sub-object, which we call ‘Subject’ (Gegenstand), for example.

The data model looks like this:

Object: Ereignis (event)

Sub-Objects of this Object: Absender (Sender) / Empfänger (Recipient) / Person erwähnt (Person mentioned) / Gegenstand (Subject)

We activate ‘Single’ and ‘Required’ for sender and recipient in the sub-object, as these may only occur once per event (‘Single’) and we always want to have a specification for both (‘Required’). If the sender and/or recipient are unknown, we create a corresponding ‘Person unknown’.

For the other sub-objects (Person mentioned and Subject), which we create in the same way as Sender and Recipient, we do not activate the fields ‘Single’ and ‘Required’, as these sub-objects can occur several times, but do not have to occur at all.

In the tab ‘Descriptions’ we now have to specify in the data model where the object with the persons is located in order to be able to record (select) them during data entry.

To do this, we create a ‘Description’ ‘Sender’ and connect it to the Reference: Object Type: Person, as shown here. We proceed in the same way with the sub-objects ‘Recipient’ and ‘Person mentioned’.

As in every sub-object, we can now create data fields for temporal (Date) and spatial (Location) information in our model. We switch to the tab ‘Date’ and select ‘Period’. This gives us the possibility to enter 2 different dates in the date entry and thus map a time span in which the correspondence was written / sent. If we only find 1 date in a letter, then we only enter this date. It is automatically transferred to the other date field when saving. If we do not find any dates, we can still put a letter in chronological order with an approximate date. See Tutorial No. 4.

Now we select in the tab ‘Location’ the object type that contains the geo-references (with coordinates). In our example, this is the type ‘Location’, in which we store the names and coordinates of towns / cities:

Before we can select this object type here, we must of course first create it. To create the type ‘Location’ used here, you only have to give the object a name (‘Location’) and a sub-object ‘Geo coordinates’. The names can also be different and, as always in nodegoat, can be defined arbitrarily.

You now have a simple data model for the collection of correspondences. A model that you can extend according to your requirements and questions:

Person – Event (correspondence) – Location.

Make sure that you have activated these 3 object types in ‘Management’ for your project (check boxes selected). Now go to the ‘Data’ section and enter a new ‘Event’. The data entry form should look like this:

‘Sender’ and ‘Recipient’ appear automatically because we have defined them as ‘Required’ in the data model. You have to open ‘Person mentioned’ and ‘Subject’ by clicking on the plus sign in order to enter data there. A prerequisite for being able to enter persons is, of course, that we have created at least one field with the name or designation in the data model for ‘Person’. The same applies to ‘Location’, where we have also created a sub-object ‘Geo coordinates’ in the data model, as mentioned above. If these details are available in the data model for Person, Event and Location, we can start entering data in the form for ‘Event’ shown above. We can add new persons and locations right in this form (click on ‘new’ once you have clicked in the magnifying glass field).

As a test, enter a ‘Person 1’ as sender and a ‘Person 2’ as recipient as well as 2 different locations with their geo-coordinates, which you can find at GeoNames.org (or you can also mark the location directly on the interactive map in nodegoat, which will automatically insert the geo-coordinates). See also Tutorial No. 5.

For this example, we have recorded two people, Person 1 and Person 2, as well as two locations, Basel and Prague. We have called the event ‘Correspondence 1’.
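Conceptually, the event we have just entered ties everything together. Here is a sketch of what ‘Correspondence 1’ now holds (again, an illustration only, not nodegoat’s actual storage format):

```javascript
// Conceptual sketch of the recorded event (not nodegoat's internal format).
const correspondence1 = {
  objectType: 'Ereignis',                                  // the Event object
  descriptions: { name: 'Correspondence 1' },
  subObjects: [
    { type: 'Absender',  person: 'Person 1', location: 'Basel',  date: null },  // Sender
    { type: 'Empfänger', person: 'Person 2', location: 'Prague', date: null }   // Recipient
  ]                                                        // dates are added later, see below
};
```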

When entering the geo-coordinates, make sure that you have selected ‘Point’ for Location. By clicking on ‘Map’ you can select the coordinates interactively on the map:

 

This is what the preliminary result of the data collection should look like (still without dates):

This correspondence is automatically visualised on the map by clicking on the globe symbol on the left in the toolbar.

With the legend (red / blue) you can show and hide the data for explorative data analysis.

Finally, you can add dates to ‘Correspondence 1’ in the sub-object (Date). As soon as this has been done, the timeline takes over this information and you can use it to dynamically display the data on the map.

Click on the network analysis icon in the toolbar (to the right of the globe icon) to see how the event ‘Correspondence 1’ connects the two people. This is a simple but widely applicable basic model for your network analysis, as we can extend the events as we like and, as we have seen above, enrich them with further persons (Person mentioned) or also with contents (Subject):