Tag: Prometheus

Showing Webcal Calendar Events in Grafana


I'm running Grafana at home, with a dashboard giving me an overview of my day. It contains information like public transport departures or the guest WiFi's password. But the most important part is a list of my upcoming appointments. Now, iCalendar files served via HTTP are not something Grafana understands out of the box. To work around this, I wrote a small service that scrapes the calendar endpoints and exposes the events as metrics through a Prometheus-compatible API.

How it works

Consider the following iCalendar file, served at an HTTP endpoint:

BEGIN:VCALENDAR
PRODID:-//ACME//NONSGML Rocket Powered Anvil//EN
BEGIN:VEVENT
UID:20190603T032500CEST-foo
SUMMARY:Foo
DESCRIPTION:An example event
END:VEVENT
BEGIN:VEVENT
UID:20190603T032500CEST-bar
SUMMARY:Bar
DESCRIPTION:Another example event
END:VEVENT
END:VCALENDAR

The service retrieves this calendar from the endpoint, parses it, and extracts the list of events together with their metadata. It then serves the data through a Prometheus-compatible time series API. Clients can request all upcoming events using the following call:

GET /api/v1/query?query=events

To which the service returns the time series of events:

  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
        "metric": {
          "__name__": "event",
          "calendar": "0",
          "uid": "20190603T032500CEST-foo",
          "summary": "Foo",
          "description": "An example event"
        "value": [
        "metric": {
          "__name__": "event",
          "calendar": "1",
          "uid": "20190603T032500CEST-bar",
          "summary": "Bar",
          "description": "Another example event"
        "value": [


Since a Prometheus label name can only carry a single value per series, an event's (potentially multiple) categories can't easily be mapped to labels. Thus, event categories are currently not exported. If someone has an idea of how to model categories in the output while keeping it easy to query and manage, feel free to contact me.

Grafana uses a hardcoded 1+1 query to test Prometheus data sources, so the API currently has a special check for that and returns 2, as expected by Grafana.
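That special case can be sketched as follows; `handle_query` is a made-up name for illustration, but the scalar response shape is the one the Prometheus HTTP API defines.

```python
import json
import time


def handle_query(query):
    """Answer Grafana's data source test query.

    Grafana probes a Prometheus data source with the query "1+1"
    and expects the scalar result 2; any other query would be
    dispatched to the normal event lookup (omitted in this sketch).
    """
    if query == "1+1":
        return json.dumps({
            "status": "success",
            "data": {
                "resultType": "scalar",
                # A scalar result is [unix_timestamp, value_as_string].
                "result": [time.time(), "2"],
            },
        })
    raise NotImplementedError("event queries omitted in this sketch")


print(handle_query("1+1"))
```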


The project, which I named iCalendar Timeseries Server, can be found on Gitlab. Each release comes with Python Wheel and Debian packages.

Bringing Swiss Public Transport Departures to Grafana


The Swiss Railways (SBB) provide a collection of static data sets and dynamic APIs at opentransportdata.swiss. One endpoint provides a list of departures or arrivals for a given train, bus or tram station.

In this blog post, I'll show you how I use this API to get a list of upcoming departures for the station next to my home, and how to get this list into Grafana.


The XML API is documented in the "API Cookbook". A request looks like this:

POST /trias HTTP/1.1
Host: api.opentransportdata.swiss
Authorization: TOKEN
Content-Type: text/xml

<?xml version="1.0" encoding="UTF-8"?>
<Trias version="1.1" xmlns="http://www.vdv.de/trias" xmlns:siri="http://www.siri.org.uk/siri">
  <ServiceRequest>
    <siri:RequestTimestamp>  NOW  </siri:RequestTimestamp>
    <RequestPayload>
      <StopEventRequest>
        <Location>
          <LocationRef>
            <StopPointRef>  BPUIC  </StopPointRef>
          </LocationRef>
          <DepArrTime>  NOW  </DepArrTime>
        </Location>
        <Params>
          <NumberOfResults>  N_RESULTS  </NumberOfResults>
        </Params>
      </StopEventRequest>
    </RequestPayload>
  </ServiceRequest>
</Trias>

This request is fairly minimal; it is limited to a single station, and without further information such as previous and following stops. You only need to fill in the following arguments to make this work for your station of choice:

  • TOKEN: Your API token; you need to register an account to obtain one.
  • NOW (2x): The current time in ISO-8601 form.
  • BPUIC: Numeric ID of the station ("Betriebspunkt"), which can be looked up in the DiDok dataset.
  • N_RESULTS: Maximum number of results to return.
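Substituting these arguments is easy to script. Below is a minimal Python sketch; `build_request` is a made-up helper, and 8507000 is merely an example BPUIC. The tiny stand-in template only exists so the example is self-contained; in practice you would substitute into the full XML body above.

```python
from datetime import datetime, timezone


def build_request(template, bpuic, n_results):
    """Fill the placeholders of the request body.

    NOW is replaced by the current UTC time in ISO-8601 form;
    BPUIC and N_RESULTS are substituted verbatim.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return (template
            .replace("NOW", now)
            .replace("BPUIC", str(bpuic))
            .replace("N_RESULTS", str(n_results)))


# Stand-in template for demonstration purposes only.
template = "<DepArrTime>NOW</DepArrTime><StopPointRef>BPUIC</StopPointRef>"
print(build_request(template, bpuic=8507000, n_results=5))
```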

Prometheus Ingestion

XML is a bit... let's say, uncomfortable to handle in Bash scripts, so I resorted to the xsltproc tool to transform the API response into something easily iterable. The XSLT document I came up with generates CSV content and looks like this:

<?xml version="1.0" encoding="utf-8"?>
  <xsl:output method="text" />
  <xsl:template match="/">
    <xsl:for-each select="//trias:StopEvent">
      <xsl:value-of select="trias:Service/trias:PublishedLineName/trias:Text"/>
      <xsl:value-of select="trias:Service/trias:DestinationText/trias:Text"/>
      <xsl:value-of select="trias:ThisCall/trias:CallAtStop/trias:ServiceDeparture/trias:TimetabledTime"/>
      <xsl:value-of select="trias:ThisCall/trias:CallAtStop/trias:ServiceDeparture/trias:EstimatedTime"/>

Each line in the result represents a stop at the station, with the following fields:

  1. Number of the train or bus line
  2. Name of the destination
  3. Scheduled departure time
  4. Estimated/actual departure time

This format is quite easy to handle in Bash; let's parse the ISO-8601 timestamps, compute the delay for each stop and then emit the results in Prometheus collector format:

# TYPE sbb_station_departure gauge
# HELP sbb_station_departure Departures from a train or bus station
# TYPE sbb_station_delay gauge
# HELP sbb_station_delay Departure delay
sbb_station_departure{line="26",planned="1580875380000",destination="Erstfeld"} 1580875380000
sbb_station_delay{line="26",planned="1580875380000",destination="Erstfeld"} 0
sbb_station_departure{line="36",planned="1580875980000",destination="Zürich HB"} 1580875980000
sbb_station_delay{line="36",planned="1580875980000",destination="Zürich HB"} 0
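The original script does this step in Bash; the same logic can be sketched in Python. `departure_metrics` and `parse_ts` are made-up names, and the input timestamp format is assumed to look like `2020-02-05T04:03:00Z`. The delay is the difference between the estimated (actual) and the timetabled departure, in seconds; the departure itself is exported as a Unix timestamp in milliseconds, matching the sample above.

```python
from datetime import datetime, timezone


def parse_ts(iso):
    """Parse an ISO-8601 timestamp like 2020-02-05T04:03:00Z."""
    return datetime.strptime(iso, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)


def departure_metrics(line, destination, timetabled, estimated):
    """Emit the two collector lines for one stop at the station."""
    planned = parse_ts(timetabled)
    actual = parse_ts(estimated)
    planned_ms = int(planned.timestamp() * 1000)
    delay_s = int((actual - planned).total_seconds())
    labels = f'line="{line}",planned="{planned_ms}",destination="{destination}"'
    return (f"sbb_station_departure{{{labels}}} {planned_ms}\n"
            f"sbb_station_delay{{{labels}}} {delay_s}")


print(departure_metrics("26", "Erstfeld",
                        "2020-02-05T04:03:00Z", "2020-02-05T04:03:00Z"))
```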

I'm using the Textfile Collector feature of the Prometheus Node Exporter to ingest this document into Prometheus.

Display in Grafana

I'm showing this data in a table panel in Grafana, using two queries: one for the scheduled departure, one for the delay. Here, you can filter the departures by destination, if not already done in your script:

min without (__name__) (sbb_station_departure{destination=~".*Zürich.*"})


min without (__name__) (sbb_station_delay{destination=~".*Zürich.*"})

And finally, after some styling, the result looks like this:

Screenshot of a Grafana table panel with a train schedule

The code can be found on Gitlab.

Monitoring Freifunk Nodes With Prometheus


Updated 2020-03-07: We now collect the number of connected clients as well as whether the node is online.

We recently installed a Freifunk node from Freifunk Dreiländereck (FF3L) in our hackerspace. While changing network configuration during the testing phase, the node went offline without us noticing. Since we're using Prometheus for monitoring our space's infrastructure, I went ahead and hacked together a solution which I want to present here:


Instead of monitoring the node directly, I decided to tap into the status information already collected by the Freifunk community. They publish some interesting statistics, but we only really cared about one piece of information: whether FF3L currently considers the node to be online and reachable.

FF3L publishes this status information at the following endpoint:


Many (if not all?) Freifunk communities provide such an endpoint, though with some you may have to search for a while to find it.


The nodes endpoint yields the information of all nodes at once. Unfortunately, I didn't find a way to restrict the request to specific nodes; if someone knows more about this, don't hesitate to tell me. (As far as I can tell, there are multiple implementations of this endpoint, some of which appear to support filters.)

This is what a single node object from the API response looks like:

  "nodeinfo": {
    "software": {
      "firmware": {
        "base": "gluon-v2019.1",
        "release": "v2019.1.0+001"
    "network": {
    "location": {
      "latitude": ...,
      "longitude": ...
    "system": {
      "role": "node",
      "site_code": "ff3l",
      "domain_code": "3land"
    "node_id": "...",
    "hostname": "...",
  "flags": {
    "online": true
  "statistics": {
    "uptime": 626614.08,
    "clients": 2,
  "lastseen": "...",
  "firstseen": "..."

We were especially interested in the .flags.online and .statistics.clients fields; our implementation extracts nothing but these two fields. The .nodeinfo.node_id and .nodeinfo.hostname fields are suitable for filtering for your own nodes.

Using a bit of "curl|jq magic", we can create a shell script for parsing the data and converting it into a format understood by Prometheus. We added the script as a Textfile Collector to an existing Prometheus Node Exporter instance. The output then looks like this:

# HELP freifunk_node_online 1 if the Freifunk node is online, 0 otherwise
# TYPE freifunk_node_online gauge
# HELP freifunk_node_clients Number of clients connected to the node
# TYPE freifunk_node_clients gauge
freifunk_node_online{node="<node0_id>",hostname="<node0_hostname>"} 1
freifunk_node_clients{node="<node0_id>",hostname="<node0_hostname>"} 2
freifunk_node_online{node="<node1_id>",hostname="<node1_hostname>"} 0
freifunk_node_clients{node="<node1_id>",hostname="<node1_hostname>"} 0
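The curl|jq pipeline itself is not reproduced here, but the same extraction can be sketched in Python. Everything below is illustrative: `freifunk_metrics` is a made-up name, and the top-level "nodes" key is an assumption, since the response layout can differ between map backends.

```python
import json


def freifunk_metrics(nodes_json, wanted_ids):
    """Convert the community's nodes endpoint output into collector lines.

    Keeps only our own nodes (matched by node_id) and exports the
    .flags.online and .statistics.clients fields described above.
    """
    out = []
    for node in json.loads(nodes_json)["nodes"]:
        info = node["nodeinfo"]
        if info["node_id"] not in wanted_ids:
            continue
        labels = f'node="{info["node_id"]}",hostname="{info["hostname"]}"'
        online = 1 if node["flags"]["online"] else 0
        # An offline node has no current clients; statistics may be
        # stale or missing, so default to 0.
        clients = node["statistics"].get("clients", 0) if online else 0
        out.append(f"freifunk_node_online{{{labels}}} {online}")
        out.append(f"freifunk_node_clients{{{labels}}} {clients}")
    return "\n".join(out)


sample = json.dumps({"nodes": [{
    "nodeinfo": {"node_id": "abc123", "hostname": "space-node"},
    "flags": {"online": True},
    "statistics": {"clients": 2},
}]})
print(freifunk_metrics(sample, {"abc123"}))
```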

The resulting script can be found on Gitlab.