Starting point: You have data stored in a vector point layer with attributes attached to each point, but the points are overlapping.

For this tutorial I used QGIS 2.12.3 "Lyon" on Ubuntu.

Task: A heatmap would be one solution; however, you want to count the overlapping points within a defined raster grid in order to make the map clearer and to be able to do further analysis of the data. In other words, you want to perform point clustering.

If you want to make comparisons among specific attributes, please refer to the optional step at the end of this tutorial.

Starting point: Point layer (overlapping points) with attributes



Goal: Point cluster map with information about the overlapping points


1 First step: Create a vector grid as the base for our analysis

Choose Vector -> Research Tools -> Vector Grid

Depending on your project settings and your specific aim, you may choose different parameters here. In my case I took the extent of my point layer and set the grid distance to 0.001.

Vector grid dialog


You should get something like this (I set the grid transparency to 50%):

The selected grid fits well to the specific data set


2 Second step: Count the points with the polygon grid

Now choose Vector -> Analysis Tools -> Points in Polygons

The tool counts the points in each polygon grid cell and aggregates the attributes attached to the points. You can choose various attributes to be aggregated into the new shape layer. Unfortunately, only very basic statistics are available for the aggregation, e.g. the mean value.
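Conceptually, the tool does something like the following. This is a minimal Python sketch of counting points per rectangular grid cell and averaging one attribute; it is not QGIS code, and the points and grid parameters below are made up for illustration.

```python
# Sketch of "Points in Polygons" for a regular rectangular grid:
# count the points falling into each cell and average a chosen attribute.
# Not QGIS code -- point data and grid parameters are invented.

def grid_counts(points, x0, y0, cell, nx, ny):
    """points: list of (x, y, value). Returns {(col, row): (count, mean_value)}."""
    sums = {}
    for x, y, v in points:
        col = int((x - x0) // cell)
        row = int((y - y0) // cell)
        if 0 <= col < nx and 0 <= row < ny:
            n, s = sums.get((col, row), (0, 0.0))
            sums[(col, row)] = (n + 1, s + v)
    return {cid: (n, s / n) for cid, (n, s) in sums.items()}

points = [(0.15, 0.25, 10.0), (0.18, 0.21, 20.0), (0.75, 0.75, 5.0)]
cells = grid_counts(points, x0=0.0, y0=0.0, cell=0.5, nx=2, ny=2)
print(cells)  # PNTCNT-style count plus mean attribute per occupied cell
```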

Points in Polygon dialog


If you click OK, you will get another grid of polygons (I called it pointcount); this time, however, the polygons contain the aggregated information of the points. If you label the polygons with the field PNTCNT, you should get something like this (Layer properties -> Label -> Label with):

Points counted in each cell. I labeled the polygons with the field "PNTCNT" (pointcount)


3 Third step: Convert the polygons to points (centroids)

To display the data as a point cluster, you have to convert the polygon centers into a new point layer.
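For an axis-aligned grid cell the centroid is simply the cell center, so the conversion can be pictured like this (plain Python for illustration, not QGIS code; the cell indices and grid origin are hypothetical):

```python
# For an axis-aligned grid cell the centroid is just the cell center.
# Hypothetical cell indices and grid origin -- illustration only.

def cell_centroid(col, row, x0, y0, cell):
    """Center point of grid cell (col, row) for a grid with origin (x0, y0)."""
    return (x0 + (col + 0.5) * cell, y0 + (row + 0.5) * cell)

print(cell_centroid(1, 2, x0=10.0, y0=20.0, cell=2.0))  # (13.0, 25.0)
```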

Choose Vector -> Geometry Tools -> Polygon centroids

Please remember to select the last generated polygon layer (pointcount) as input. I called the output layer centroids.

Polygon centroids dialog


You will get this:

Polygon centroids


4 Fourth step: Display the centroids layer as a cluster map

The layer generated in the last step contains all the attributes of the pointcount layer. We can use these attributes to make the map more understandable.

  • First, hide all layers except the basemap and the centroids layer.
  • Next, label the points with the PNTCNT (pointcount) attribute. This displays the number of original points in each cell. Please set the Placement of the labels to offset from point. Alternatively, you could label with the mean values of the attributes of the point cloud.
  • Then set the size of the points in relation to the PNTCNT attribute. Go to Layer properties -> Style, click the small symbol next to Size and choose Size assistant. Setting the scaling method to "Flannery" avoids the circles being perceived as too big in relation to the count:
Size assistant dialog

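The "Flannery" option compensates for the fact that readers underestimate the area of large circles. A common formulation scales the radius with value**0.57 instead of the sqrt(value) used by plain area scaling. Here is a minimal Python sketch of that idea; the max_radius of 10 and the counts are arbitrary choices for the example, not QGIS defaults.

```python
# Flannery compensation: scale circle radii by value**0.57 instead of
# value**0.5, so large counts are not perceptually underestimated.
# max_radius and the sample counts are arbitrary for this sketch.

def flannery_radius(value, max_value, max_radius=10.0, exponent=0.57):
    return max_radius * (value / max_value) ** exponent

for count in (1, 10, 100):
    print(round(flannery_radius(count, max_value=100), 2))
```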

Finally you get something like this:

Final map with point clustering


Optional step (this would actually be the first step): Split the dataset

If you want to compare the dataset in such a cluster map based on a specific attribute, you have to first “split” the dataset.

Choose Vector -> Data Management Tools -> Split vector layer
The attribute you select as the Unique ID field is the attribute by which the layer will be split. In my case the result is three different shapefiles, each containing only the points that have a "farbcode" of 1, 2 or 3.
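Conceptually, the split is just a partition of the features by one attribute value. The sketch below illustrates this in plain Python; the feature records and the "farbcode" field mirror the example above but are otherwise invented.

```python
# What "Split vector layer" does conceptually: partition features by the
# value of one attribute. Records are invented; "farbcode" mirrors the text.

from collections import defaultdict

features = [
    {"id": 1, "farbcode": 1}, {"id": 2, "farbcode": 3},
    {"id": 3, "farbcode": 1}, {"id": 4, "farbcode": 2},
]

layers = defaultdict(list)
for f in features:
    layers[f["farbcode"]].append(f)  # one output "layer" per unique value

print(sorted(layers))  # [1, 2, 3] -> three separate shapefiles in QGIS
```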

Split vector layer dialog


Now add the (three) layers by choosing Layer -> Add Layer -> Vector Layer

If you want to compare the layers as a cluster point map, you have to repeat steps 2–4 for each of the three created layers.


There is no doubt that the transcription of audio files is one of the most exhausting tasks in qualitative research – or better: it was!

Certainly, there are several speech recognition software providers; however, I found the Google Speech API in the Chrome browser most helpful.

The files have to be perfectly recorded, without surrounding noise or echo (I used a high-quality ZOOM audio recorder). Over 40 languages are currently supported.

Data privacy: Please be aware that you have no control over how Google uses your audio files. Please use this only if your interview partners have agreed to publish their statements and/or the information is public.


1. Open your mp3 file in VLC Player

2. Connect your audio line-out with your audio line-in (or microphone-in) using a 3.5 mm cable bridge (depending on your sound card this might lead to severe hardware damage, so please be careful!)

Connect your line-out with your microphone-in via a simple cable bridge


3. Open your audio settings

4. Play your audio file and adjust your audio input levels so that there is no overmodulation (in Ubuntu the input level is displayed LED-style, which is really helpful)

Ubuntu provides perfect sound level settings




Speech to text with Google Speech and VLC (via cable bridge)

1. Open the Google Speech API demo page in a Chrome browser window.

2. Set your language

3. Start recording

4. Play your mp3 file

5. Be astonished and wonder what you can do with your free time now



With the recently released version of the rgl package (v0.92.879) there is a new option to publish 3D plots as interactive WebGL graphics.
Nearly every 3D plot you set up in an rgl window can be exported via a very simple command. Just plot as usual into your rgl device and then use the command writeWebGL().

Click to open the TimeSpace Cube WebGL Plot

Time Space Cube sample:

Depending on your hardware and software specifications (your graphics card has to support a recent OpenGL version), you'll be able to open my example in your browser window:

Click here to open my WebGL sample in a new window (~3MB)

Here is the command to export your rgl window to a webpage:

browseURL(paste("file://", writeWebGL(dir=file.path(tempdir(), "webGL"), width=700), sep=""))

To get a quick impression of the time spent at different places, it is helpful to generate a plot of the trackpoints' spatial density (intensity).

combined plot of spatial intensity

Spatial intensity 2D/3D

As the 3D visualisation has both advantages and disadvantages, a combination with a 2D plot is useful for interpreting the data. The data used in this example is a GPS record of the "everyday life" of a test person.

Code snippet:

## 3D/2D density plot with spatstat's density.ppp

bb_utm <- qbbox(lat = tripdata_utm_num[,2], lon = tripdata_utm_num[,1])  # bounding box
Rect <- owin(c(bb_utm$lonR[1]-500, bb_utm$lonR[2]+1800), c(bb_utm$latR[1]-500, bb_utm$latR[2]+500))
P_all <- ppp(tripdata_utm_num[,1], tripdata_utm_num[,2], window=Rect)  # opt: marks=datetime
PPP_all <- as.ppp(P_all)
den <- density(PPP_all, sigma = 70)

cutted_den <- den    # clip the value range to remove extremes
cutvalue <- 0.0020   # empirical value
cutted_den$v[cutted_den$v >= cutvalue] <- cutvalue

png(paste(plotpath, proband, "intensity_overview_spatstat_v3.png", sep=""), 2400, 1200)  # plot density in 3D
layout(matrix(c(1,2), 2, 2, byrow = TRUE), widths=c(1,1))
persp(cutted_den, col="grey", d=1, ticktype="detailed", cex.lab=4, cex.axis=4, zlim=c(range(cutted_den$v)[1], cutvalue), phi=30, theta=-20, xlab="longitude", ylab="latitude", zlab="density", main=paste(proband, "Spatial intensity"))
couleurs <- tail(topo.colors(trunc(1.4 * 100)), 100)
couleurs[1] <- "#0400ff"
plot(cutted_den, col=couleurs, ticktype="detailed", xlab="longitude", ylab="latitude", cex.lab=5, cex.axis=5, main=paste(proband, "Spatial intensity 2D"))
# points(SP_UTM@coords[,1], SP_UTM@coords[,2], cex=0.05, col="grey")
points(SP_wp_UTM@coords[,1], SP_wp_UTM@coords[,2], cex=2, col="red")
text(SP_wp_UTM@coords[,1], SP_wp_UTM@coords[,2], labels=daten_wp$wpOpen, cex=1, adj=c(0,-1), col="red")

# If you use any code, please refer to or cite this site (see about geolabs)

Hägerstrand time-space cube with R


With the rgl package it's possible to interact with the 3D visualization of the time-space tracks.

Code example:
plot3d(lon, lat, timedate, xlim=range(lon), ylim=range(lat), zlim=range(timedate), ticktype="detailed", xlab="longitude", ylab="latitude", zlab="Date", col=as.POSIXlt(daten[,"Date"])$mday, type="l", main=plottitle)

In the posted example, individual waypoints were added by drawing vertical lines.

Here comes another option to analyze a time-space track with R: a lattice cloud plots every recorded trackpoint into a 3D time-space cube. As the data (a planar point pattern) is marked with the time of day, clusters of everyday routines become visible.

Here is the direct comparison between a density function and the time-space cloud.

Time-space cloud

spatstat density plot

Code example:

cloud(time_hours ~ PPP_selection$x * PPP_selection$y, data = daten, zlim = c(23,0), xlim = c(653000,643000), screen = list(z = 160, x = 120), panel.aspect = 0.75, xlab = "Longitude", ylab = "Latitude", zlab = "Time", scales = list(z = list(arrows = FALSE, distance = 2), x = list(arrows = FALSE, distance = 2), y = list(arrows = FALSE, distance = 2)))

This example is inspired by: (Figure 6.2)

Besides the visualisation of time-space tracks, I'm trying to find a way to analyze GPX tracks with statistical software. These are the first results with R (The R Project for Statistical Computing):

GPS track analyzed with the R package "trip"


density plot 3D

^This graph is a result of the analysis with the package trip (spatial analysis of animal track data). Unfortunately, I do not understand which scale is used by the package.

^Trackpoints as a function of density.

Since there is a trackpoint recorded every 10 sec., it is possible to interpret the density of the trackpoints as time spent.
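The conversion from point count to dwell time is straightforward at a fixed sampling interval. A minimal sketch (the counts are invented for illustration):

```python
# With a fix recorded every 10 s, a cell's point count converts directly to
# dwell time: time = count * interval. Sample counts are invented.

SAMPLE_INTERVAL_S = 10

def dwell_minutes(point_count, interval_s=SAMPLE_INTERVAL_S):
    return point_count * interval_s / 60.0

print(dwell_minutes(360))  # 360 fixes at 10 s each = 60.0 minutes
```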

This is a two-day track. The highest peak in the right corner is my home (Nuremberg). The peaks in the background are both universities in Erlangen. The path on the right side I did by bicycle, the left one by train.

But how to examine specific areas?

trackdata density plot 3D


^1500 m around my house in the city center.

With clickppp() from the spatstat package it's possible to select, e.g., a point with the mouse:

####### Example code:
plot(tripdata_utm)  # plot the recorded trackpoints (converted to UTM)
P_center <- clickppp(n=1, win=Rect, add=TRUE, main=NULL, hook=NULL)  # select a point in the plot with the mouse
center <- cbind(P_center$x, P_center$y)  # coordinates of the clicked point
D <- disc(radius = 1500, centre = c(center[,1], center[,2]))  # create a disc window
P_selection <- ppp(tripdata_utm_num[,1], tripdata_utm_num[,2], window=D)  # reduce the data to the window
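The disc-window selection boils down to keeping only the points within a radius of a chosen center. Since the track is in projected (UTM) coordinates, plain Euclidean distance works; the coordinates below are invented for illustration and this is not spatstat code.

```python
# Keep only points within `radius` of `center` (projected coordinates, so
# Euclidean distance is appropriate). Coordinates are invented examples.

import math

def points_in_disc(points, center, radius):
    cx, cy = center
    return [(x, y) for x, y in points if math.hypot(x - cx, y - cy) <= radius]

pts = [(0.0, 0.0), (1000.0, 1000.0), (2000.0, 0.0)]
print(points_in_disc(pts, center=(0.0, 0.0), radius=1500.0))
# -> [(0.0, 0.0), (1000.0, 1000.0)]
```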

density plot 2D


^Another function of density (2D).

qqcount plot


^Trackpoints as a function of time.

Here the trackpoints are divided by a grid and counted. Since the device records the position every 10 sec., the qqcount can be clearly interpreted as time spent.

The next step is to add this data to a GIS layer.
