Blog Articles

Automatically Rotating Guest WiFi Passwords With hostapd


I like to have control over who gets on my networks and who doesn't.

To obtain this level of control in my home network, I'm running a separate WiFi for guests, which among other things separates guest devices from my private infrastructure.

Authorization in hostapd

The simplest way to configure WPA2-PSK authorization in hostapd is a static passphrase:

# /etc/hostapd/hostapd.conf
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=Nobody expects the Spanish Inquisition!

So far, so good - but once a person knows this passphrase, they can get on my WiFi all the time, and they could share the passphrase with other people. This way, I lose control over who gets on my networks.

hostapd also supports device-specific passphrases, configured in a separate file:

# /etc/hostapd/hostapd.conf
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_psk_file=/etc/hostapd/hostapd.wpa_psk

Now, what should this file look like? The hostapd "documentation" is a bit vague in this regard; it only mentions (PSK, MAC address) pairs without specifying the exact format. However, multiple sources on the internet agree on the following format:

# /etc/hostapd/hostapd.wpa_psk
ma:ca:dd:re:ss:00 The Passphrase For Device A
ma:ca:dd:re:ss:01 The Passphrase For Device B

And, most importantly, some sources also mention that the MAC address 00:00:00:00:00:00 can be used as a wildcard, so the associated passphrase works for all devices. By itself, this gives us no advantage over the hardcoded passphrase. However, having the passphrase in a separate file makes automated rotation extremely easy: with the passphrase rotated frequently through a cronjob, I have fairly good control over who can access my guest WiFi.
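With a wildcard entry, the PSK file could then consist of a single line (the passphrase here is invented for illustration):

```
# /etc/hostapd/hostapd.wpa_psk
00:00:00:00:00:00 Current Rotating Passphrase
```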

To take things a step further, we can decouple the passphrase rotation rate from how long a passphrase remains valid. As it turns out, the wildcard MAC address can be used multiple times, and all wildcard passphrases are accepted. This allows us to do the following:

  • Generate a new passphrase once a day
  • Add the new passphrase as a wildcard entry to the wpa_psk file
  • Remove all but the seven newest entries from the file
  • Reload hostapd

So, this gives us a new passphrase every day, and each passphrase remains valid for a week.
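The four steps above can be sketched as a small shell script. The PSK file path is parameterised here so the sketch can be run anywhere; in the real cronjob it would point at /etc/hostapd/hostapd.wpa_psk. The passphrase generator and the SIGHUP-based reload are assumptions; adjust them to your setup.

```shell
#!/bin/sh
set -eu

# In the real cronjob this would be /etc/hostapd/hostapd.wpa_psk
PSK_FILE="${PSK_FILE:-/tmp/hostapd.wpa_psk}"
KEEP=7  # one rotation per day -> each passphrase remains valid for a week

# 1. Generate a new random passphrase (WPA2-PSK allows 8 to 63 characters)
PASSPHRASE="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)"

# 2./3. Prepend the new wildcard entry, keeping only the KEEP newest ones
touch "$PSK_FILE"
{
  printf '00:00:00:00:00:00 %s\n' "$PASSPHRASE"
  head -n "$((KEEP - 1))" "$PSK_FILE"
} > "$PSK_FILE.new"
mv "$PSK_FILE.new" "$PSK_FILE"

# 4. Make hostapd re-read the PSK file (ignore failure if it is not running)
killall -HUP hostapd 2>/dev/null || true
```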

Giving the Passphrase to Guests

I'm using qrencode to generate a QR code with the latest passphrase, and display the result, together with its plaintext form, in a Grafana HTML panel:

qrencode \
  -t PNG --size=6 --output=/var/www/html/wifi-guest.png \
  "WIFI:S:${SSID};T:WPA2;P:${PASSPHRASE};;"

cat > /var/www/html/wifi-guest.html <<EOF
  <!-- Timestamp for browser cache circumvention -->
  <img src="/wifi-guest.png?$(date +%s)" />
  <br/><br/><br/>
  <h3><tt>${SSID}</tt></h3>
  <h1><tt>${PASSPHRASE}</tt></h1>
EOF

And the result looks like this:

Screenshot of a Grafana panel with QR code, WiFi SSID and passphrase

Bringing Swiss Public Transport Departures to Grafana


Update: The API endpoint used here has been deprecated, and a new endpoint is available. The updated script can be found on Gitlab.

The Swiss Railways (SBB) provide a collection of static data sets and dynamic APIs at opentransportdata.swiss. One endpoint provides a list of departures or arrivals for a given train, bus or tram station.

In this blogpost, I'll show you how I use this API to get a list of upcoming departures for the station next to my home, and how to get this list into Grafana.

The API

The XML API is documented in the "API Cookbook". A request looks like this:

POST /trias HTTP/1.1
Host: api.opentransportdata.swiss
Authorization: TOKEN
Content-Type: text/xml

<?xml version="1.0" encoding="UTF-8"?>
<Trias
    version="1.1"
    xmlns="http://www.vdv.de/trias"
    xmlns:siri="http://www.siri.org.uk/siri"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ServiceRequest>
    <siri:RequestTimestamp>  NOW  </siri:RequestTimestamp>
    <siri:RequestorRef>EPSa</siri:RequestorRef>
    <RequestPayload>
      <StopEventRequest>
        <Location>
          <LocationRef>
            <StopPointRef>  BPUIC  </StopPointRef>
          </LocationRef>
          <DepArrTime>  NOW  </DepArrTime>
        </Location>
        <Params>
          <NumberOfResults>  N_RESULTS  </NumberOfResults>
          <StopEventType>departure</StopEventType>
          <IncludePreviousCalls>false</IncludePreviousCalls>
          <IncludeOnwardCalls>false</IncludeOnwardCalls>
          <IncludeRealtimeData>true</IncludeRealtimeData>
        </Params>
      </StopEventRequest>
    </RequestPayload>
  </ServiceRequest>
</Trias>

This request is fairly minimal: it is limited to a single station and omits further information such as previous and following stops. You only need to fill in the following arguments to make it work for your station of choice:

  • TOKEN: API token; you need to register an account to obtain one.
  • NOW (2x): the current time in ISO-8601 form.
  • BPUIC: numeric ID of the station ("Betriebspunkt"); can be looked up in the DiDok dataset.
  • N_RESULTS: the maximum number of results to return.
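Filling these in from a shell script could look roughly like this. The token is a placeholder, BPUIC 8503000 (Zürich HB) is only an example station, and the request body is a condensed version of the one above:

```shell
#!/bin/sh
set -eu

TOKEN="REPLACE_ME"   # placeholder: your opentransportdata.swiss API token
BPUIC=8503000        # example: Zürich HB
N_RESULTS=5
NOW="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

cat > request.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<Trias version="1.1" xmlns="http://www.vdv.de/trias"
    xmlns:siri="http://www.siri.org.uk/siri">
  <ServiceRequest>
    <siri:RequestTimestamp>${NOW}</siri:RequestTimestamp>
    <siri:RequestorRef>EPSa</siri:RequestorRef>
    <RequestPayload>
      <StopEventRequest>
        <Location>
          <LocationRef><StopPointRef>${BPUIC}</StopPointRef></LocationRef>
          <DepArrTime>${NOW}</DepArrTime>
        </Location>
        <Params>
          <NumberOfResults>${N_RESULTS}</NumberOfResults>
          <StopEventType>departure</StopEventType>
          <IncludePreviousCalls>false</IncludePreviousCalls>
          <IncludeOnwardCalls>false</IncludeOnwardCalls>
          <IncludeRealtimeData>true</IncludeRealtimeData>
        </Params>
      </StopEventRequest>
    </RequestPayload>
  </ServiceRequest>
</Trias>
EOF

# This call fails without a valid token; the request body above is the
# interesting part of the sketch
curl -s -X POST "https://api.opentransportdata.swiss/trias" \
     -H "Authorization: ${TOKEN}" \
     -H "Content-Type: text/xml" \
     --data-binary @request.xml > response.xml || true
```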

Prometheus Ingestion

XML is a bit... let's say, uncomfortable to handle in Bash scripts, so I resorted to the xsltproc tool to transform the API response into something easily iterable. The XSLT document I came up with generates CSV content and looks like this:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet
    version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:trias="http://www.vdv.de/trias">
  <xsl:output method="text" />
  <xsl:template match="/">
    <xsl:for-each select="//trias:StopEvent">
      <xsl:value-of select="trias:Service/trias:PublishedLineName/trias:Text"/>
      <xsl:text>;</xsl:text>
      <xsl:value-of select="trias:Service/trias:DestinationText/trias:Text"/>
      <xsl:text>;</xsl:text>
      <xsl:value-of select="trias:ThisCall/trias:CallAtStop/trias:ServiceDeparture/trias:TimetabledTime"/>
      <xsl:text>;</xsl:text>
      <xsl:value-of select="trias:ThisCall/trias:CallAtStop/trias:ServiceDeparture/trias:EstimatedTime"/>
      <xsl:text>&#x0A;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

Each line in the result represents a stop at the station, with the following fields:

  1. Number of the train or bus line
  2. Name of the destination
  3. Scheduled departure time
  4. Estimated/actual departure time

This format is quite easy to handle in Bash; let's parse the ISO-8601 timestamps, compute the delay for each stop and then emit the results in Prometheus collector format:

# TYPE sbb_station_departure gauge
# HELP sbb_station_departure Departures from a train or bus station
# TYPE sbb_station_delay gauge
# HELP sbb_station_delay Departure delay
sbb_station_departure{line="26",planned="1580875380000",destination="Erstfeld"} 1580875380000
sbb_station_delay{line="26",planned="1580875380000",destination="Erstfeld"} 0
sbb_station_departure{line="36",planned="1580875980000",destination="Zürich HB"} 1580875980000
sbb_station_delay{line="36",planned="1580875980000",destination="Zürich HB"} 0
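The step from the CSV to these metrics can be sketched as follows. The two sample lines stand in for the xsltproc output (file names are assumptions), and the timestamp conversion assumes GNU date:

```shell
#!/bin/sh
set -eu

# Sample input as produced by the XSLT step: line;destination;planned;estimated
cat > departures.csv <<'EOF'
26;Erstfeld;2020-02-05T04:03:00Z;2020-02-05T04:03:00Z
36;Zürich HB;2020-02-05T04:13:00Z;2020-02-05T04:14:00Z
EOF

{
  echo '# TYPE sbb_station_departure gauge'
  echo '# HELP sbb_station_departure Departures from a train or bus station'
  echo '# TYPE sbb_station_delay gauge'
  echo '# HELP sbb_station_delay Departure delay'
  while IFS=';' read -r line destination planned estimated; do
    # Fall back to the timetabled time if no realtime estimate is given
    [ -n "$estimated" ] || estimated="$planned"
    # Convert ISO-8601 to milliseconds since epoch (GNU date)
    planned_ms=$(( $(date -d "$planned" +%s) * 1000 ))
    estimated_ms=$(( $(date -d "$estimated" +%s) * 1000 ))
    delay_ms=$(( estimated_ms - planned_ms ))
    printf 'sbb_station_departure{line="%s",planned="%s",destination="%s"} %s\n' \
      "$line" "$planned_ms" "$destination" "$estimated_ms"
    printf 'sbb_station_delay{line="%s",planned="%s",destination="%s"} %s\n' \
      "$line" "$planned_ms" "$destination" "$delay_ms"
  done < departures.csv
} > sbb.prom
```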

I'm using the Textfile Collector feature of the Prometheus Node Exporter to ingest this document into Prometheus.

Display in Grafana

I'm showing this data in a Grafana table panel, using two queries: one for the scheduled departure and one for the delay. Here you can also filter the departures by destination, if your script doesn't do so already:

min without (__name__) (sbb_station_departure{destination=~".*Zürich.*"})

and

min without (__name__) (sbb_station_delay{destination=~".*Zürich.*"})

And finally, after some styling, the result looks like this:

Screenshot of a Grafana table panel with a train schedule

The code can be found on Gitlab.

Monitoring Freifunk Nodes With Prometheus


Updated 2020-03-07: We now collect the number of connected clients as well as whether the node is online.

We recently installed a Freifunk node from Freifunk Dreiländereck (FF3L) in our hackerspace. While we were changing the network configuration during the testing phase, the node went offline without us noticing. Since we're using Prometheus to monitor our space's infrastructure, I went ahead and hacked together a solution, which I want to present here:

Idea

Instead of monitoring the node directly, I decided to tap into the status information already collected by the Freifunk community. They publish some interesting statistics, but we only really cared about one piece of information: whether FF3L currently considers the node to be online and reachable.

FF3L publishes this status information at the following endpoint:

https://map.freifunk-3laendereck.net/data/nodes.json

Many (if not all?) Freifunk communities provide such an endpoint, though with some you may have to search for a while to find it.

Implementation

The nodes endpoint yields the information for all nodes at once. Unfortunately, I didn't find a way to restrict the request to specific nodes; if someone knows more about this, don't hesitate to tell me. (As far as I can tell, there are multiple implementations of this endpoint, some of which appear to support filters.)

This is what a single node object from the API response looks like:

{
  "nodeinfo": {
    "software": {
      "firmware": {
        "base": "gluon-v2019.1",
        "release": "v2019.1.0+001"
      },
      ...
    },
    "network": {
      ...
    },
    "location": {
      "latitude": ...,
      "longitude": ...
    },
    "system": {
      "role": "node",
      "site_code": "ff3l",
      "domain_code": "3land"
    },
    "node_id": "...",
    "hostname": "...",
    ...
  },
  "flags": {
    "online": true
  },
  "statistics": {
    "uptime": 626614.08,
    "clients": 2,
    ...
  },
  "lastseen": "...",
  "firstseen": "..."
}

We were especially interested in the .flags.online and .statistics.clients fields; our implementation extracts nothing but these two fields. The .nodeinfo.node_id and .nodeinfo.hostname fields are suitable for filtering for your own nodes.

Using a bit of "curl|jq magic", we can create a shell script that parses the data and converts it into a format understood by Prometheus. We added the script as a Textfile Collector to an existing Prometheus Node Exporter instance. The output then looks like this:

# HELP freifunk_node_online 1 if the Freifunk node is online, 0 otherwise
# TYPE freifunk_node_online gauge
# HELP freifunk_node_clients Number of clients connected to the node
# TYPE freifunk_node_clients gauge
freifunk_node_online{node="<node0_id>",hostname="<node0_hostname>"} 1
freifunk_node_clients{node="<node0_id>",hostname="<node0_hostname>"} 2
freifunk_node_online{node="<node1_id>",hostname="<node1_hostname>"} 0
freifunk_node_clients{node="<node1_id>",hostname="<node1_hostname>"} 0
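A condensed sketch of that "curl|jq" step, shown here against a stored sample instead of the live endpoint (normally the data would come from curl -s https://map.freifunk-3laendereck.net/data/nodes.json). The node IDs are made up, and the sketch assumes the endpoint wraps the node objects shown above in a .nodes array:

```shell
#!/bin/sh
set -eu

# Stored sample of the nodes endpoint; node IDs and hostnames are invented
cat > nodes.json <<'EOF'
{"nodes": [
  {"nodeinfo": {"node_id": "abc123", "hostname": "ff3l-space"},
   "flags": {"online": true},
   "statistics": {"clients": 2}},
  {"nodeinfo": {"node_id": "def456", "hostname": "ff3l-roof"},
   "flags": {"online": false},
   "statistics": {}}
]}
EOF

OUR_NODES='abc123 def456'   # the node IDs we want to monitor (assumption)

{
  echo '# HELP freifunk_node_online 1 if the Freifunk node is online, 0 otherwise'
  echo '# TYPE freifunk_node_online gauge'
  echo '# HELP freifunk_node_clients Number of clients connected to the node'
  echo '# TYPE freifunk_node_clients gauge'
  for id in $OUR_NODES; do
    # Extract online flag and client count; missing client counts become 0
    jq -r --arg id "$id" '
      .nodes[] | select(.nodeinfo.node_id == $id) |
      "freifunk_node_online{node=\"\(.nodeinfo.node_id)\",hostname=\"\(.nodeinfo.hostname)\"} \(if .flags.online then 1 else 0 end)\n" +
      "freifunk_node_clients{node=\"\(.nodeinfo.node_id)\",hostname=\"\(.nodeinfo.hostname)\"} \(.statistics.clients // 0)"
    ' nodes.json
  done
} > freifunk.prom
```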

The resulting script can be found on Gitlab.