Blog Articles

Cisco 7900 series IP Phone Logo Converter


Last week we discovered a box full of old Cisco 7900 series IP phones hidden deep in a pile of boxes in our hackerspace. Of course we tried to get them up and running and figure out how to configure them.

After a bit of research and reading manuals, we learned that the phones can be configured via TFTP using a binary config file format. When we learned that the phones can display a custom 88x27 monochrome image instead of the default Cisco logo, we made it our top priority to get the image customization working.

What we learned was that the images had to be served alongside the configuration via TFTP in a proprietary format. The tool to convert images into this format comes bundled with every firmware release for the phone. However, we quickly encountered some limitations:

  • The converter, called bmp2logo.exe, was a proprietary, Windows-only piece of software. Luckily, the tool ran in Wine without any problems (as long as you don't mess with the input file header).
  • The tool required the input file to be in a very specific format, namely an 88x27 pixel, monochrome, 1 bit per pixel BMP file.
  • While the image is drawn as dark pixels on a light background, the input file had to be white on black. Effectively, the colors were inverted.

We also learned that each image had to be tagged with a serial number. The phone uses this serial number to decide whether a new image has to be loaded from the TFTP server. If the serial number is the same as the number of the previous image, the new image is not loaded. So the serial number needs to be incremented by at least 1 for each new image.

To get around the limitations of the converter tool, I attempted to figure out the file format and write a free and cross-platform converter without these limitations.

Reverse Engineering the File Format

So let's have a look at a hexdump of one of those image files converted with bmp2logo.exe:

Hexdump of a converted image file

We know that the image consists of 88 x 27 = 2376 pixels. And since the input to bmp2logo.exe has to be a 1-bit-per-pixel uncompressed image, let's just assume that the same holds true for the output. This would give us a payload size of 297 bytes, so with a file size of 308 bytes, there should be an 11 byte header.

The first two bytes were always the same, no matter what image was converted. So it should be fairly safe to assume that they are a magic number. The next two bytes were always different for different serial numbers or different images. If the serial number was changed by one, these two bytes also changed in a fairly consistent manner, so this appeared to be some sort of checksum. Let's just ignore that for now and put it aside for later.

The next three bytes were all zeros, followed by a byte representing the serial number. Or so I thought at first, until I passed a serial number of -1 to bmp2logo.exe and got ff ff ff ff as these four bytes, so this pretty clearly is a 32-bit representation of the serial number. The next two bytes were pretty obvious as well: they are the height and width of the image, 27 and 88 respectively.

The last byte of the header seems to represent the number of bytes that comprise a single row. I'm not entirely sure about this, but it made the most sense, especially if we assume that there are other phones out there which may support grayscale images, and need a greater color depth and consequently more bytes per row.

Now, on to the last part: the checksum. I was a bit lost here, so I just tried various ways of compressing data into 16 bits: adding them, xor'ing them, swapping bytes around before adding them, always wrapping the results to 16 bits. In the end it was a typo that brought me to the solution: instead of sum = (sum + swapped) & 0xffff, I wrote sum = (sum + swapped) % 0xffff. So instead of performing addition mod 65536, I was performing addition mod 65535 by accident. The result of this turned out to be just the binary inverse of the intended checksum, so let's add a final sum ^= 0xffff before returning the result and call it a day.

File Format

With all that information, and after verifying that the payload did indeed match the uncompressed monochrome image, we finally had our file format:

+----+----+----+----+----+----+----+----+
| 10h 60h |  CHKSUM |       SERIAL      |
+----+----+----+----+----+----+----+----+
| H  | W  | RW |                        |
+----+----+----+                        |
|                                       |
:               PAYLOAD                 :
:                                       :
|                                       |
|                                       |
+----+----+----+----+----+----+----+----+

To summarize, the file consists of an 11 byte header, followed by the payload. The header consists of:

  1. The magic number 0x10 0x60
  2. A 16-bit checksum
  3. The 32-bit serial number in big-endian byte order
  4. H, the height of the image in pixels, 1 byte. Since the image is 27px high, this is 27, or 0x1b
  5. W, the width of the image in pixels, 1 byte. Since the image is 88px wide, this is 88, or 0x58
  6. RW, the number of bytes in a single row, 1 byte. Since the image is 88px wide and each byte holds 8 pixels at once, this is 11, or 0x0b

Following the header is the image payload, with each pixel expressed as a single bit, in row-major order starting in the top left corner.
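
To illustrate the header layout, here is a minimal sketch of how the fixed fields could be packed using Python's struct module. The concrete values are only examples, and the checksum is left at zero here since it's computed over the rest of the file afterwards:

import struct

# Example values: serial number 1, standard 88x27 monochrome image.
serial = 1
height, width, row_bytes = 27, 88, 11

# Magic number, checksum placeholder, 32-bit big-endian serial number,
# height, width and bytes per row: 11 bytes in total.
header = b"\x10\x60" + struct.pack(">HIBBB", 0, serial, height, width, row_bytes)
assert len(header) == 11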

A Python implementation of the checksum algorithm could look like this:

sum = 0
# Iterate the file starting at byte 4 (directly after the checksum)
# Iterate two bytes at a time (a and b), stick them together in
# reverse order, and add to the total sum mod 0xffff.
for a, b in zip(data[4::2], data[5::2]):
    sum = (sum + ((b << 8) | a)) % 0xffff
# Flip all bits in the result
sum ^= 0xffff
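
Wrapped into a small function, the same logic can also be used to recompute the checksum of a file produced by bmp2logo.exe; the file name below is just an example:

def logo_checksum(data):
    # Checksum over everything after the first four bytes (magic and checksum field).
    result = 0
    for a, b in zip(data[4::2], data[5::2]):
        result = (result + ((b << 8) | a)) % 0xffff
    return result ^ 0xffff

with open("logo.dat", "rb") as f:  # example: a file converted with bmp2logo.exe
    print(hex(logo_checksum(f.read())))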

Implementation

Now that we know how such an image file is composed, we can write a tool to generate these files. And here's the result:

Cisco 7900 phone screen with a cat instead of the Cisco logo

You can find the converter script on Gitlab. It can be used in exactly the same way as the original bmp2logo.exe converter:

  1. ./ciscologo.py <serial> <infile.something> <outfile.dat>
  2. Copy <outfile.dat> to your TFTP server
  3. Update the phone configuration to point to the new file
  4. Recompile the configuration file and copy it to your TFTP server
  5. Reboot your phone

Remember to increment the serial number every time you change the image, or the phone won't pick up the new file.

Scaling and Slicing PDF Documents with pdfjam and mutool


I recently encountered the following challenge: I had a PDF document consisting of multiple A4 pages, each of which I needed to print scaled up to A2. However, I only had an A4 printer available, and the printer driver was not able to perform the scaling and/or slicing on its own.

After a long search, and lots of disappointments (since none of LibreOffice, Inkscape or Evince could do what I wanted, or I simply couldn't find the feature), I came up with the following solution:

$ pdfjam -o scaled.pdf --a2paper input.pdf
$ mutool poster -x 2 -y 2 scaled.pdf sliced.pdf

The first command takes the input file, scales it up to A2 size, and writes it to an intermediate file. The second command slices each page from the intermediate file into 2 slices both vertically and horizontally, for a total of 4 slices per page, which again results in A4-sized slices the printer can print.

Automated Debian package building with Gitlab CI and Reprepro


I previously deployed some ad-hoc services using ugly and difficult-to-maintain solutions, such as binaries manually extracted from container images and copied into /usr/local/bin. Of course, this is a lot of manual work for each upgrade of the service, and if you upgrade often, a lot of repetitive manual work. Or, in other words, the perfect kind of work to be automated.

So I decided to package the software in question for Debian, which is the Linux distribution I run these services on, and to automate the process of building the packages and publishing them to a repository. There are three different "categories" of software I wanted to have in my repository:

  • Binaries published as build artifacts in releases of their upstream repository. An example of this is Gitea. The binaries can be automatically downloaded and put into a package.

  • Software released as container images only. An example of this is the Drone CI Server. This is a bit more tricky, as the binaries must be extracted from the image before they can be put into a package.

  • Software already released as Debian packages, but not available through a repository. An example of this is a project of my own, the iCalendar Timeseries Server, where the CI pipeline automatically builds Debian packages and adds them as build artifacts to releases. Here, the already-built package just needs to be fetched and added to the repository.

For creating and maintaining the repository, I'm using Reprepro, as it is extremely simple to use, and generally rather uncomplicated and lightweight. It does however come with some limitations; in particular, I encountered the problem that Reprepro will only keep the latest version of a package in the repo index, so older versions won't be available to clients.

For automating the packaging and repository build process, I'm using Gitlab CI. The pipeline runs each night, building the latest stable version of each package, adding the packages to the in-container Reprepro repository, and then synchronizing the repository to a web server of mine.

The source repository with the package build scripts and pipeline configuration is available on Gitlab. Though, before using this as a template, be advised that the packages built by this pipeline are not exceptionally high-quality. They are also usually built to fit my personal needs, so don't expect ready-to-use packages.

Recording IPTV Using ffmpeg


I don't usually watch TV. But from time to time there is something interesting on the programme, such as debates on local politics. Unfortunately, those usually run at a time of day when I'm not able (or more likely not willing) to tune in and pay attention to an hour of political discourse. So I want to record them and watch later instead.

My ISP provides its IPTV programme as MPEG-TS streams via multicast UDP. And they even link an M3U playlist of all stations on their website, so you can basically watch TV with any client whatsoever, as long as it speaks IGMP and understands MPEG-TS video streams. This makes recording very easy, as this is supported by a lot of multimedia processing software, including ffmpeg.

The playlist consists of a list of TV stations, each of which is represented by its own multicast group and a UDP port. So let's just take the first station and see what ffmpeg finds in there:

Input #0, mpegts, from 'udp://239.77.0.77:5000':
  Duration: N/A, start: 41892.675600, bitrate: N/A
  Program 9038 
    Metadata:
      service_name    : SRF 1 HD
      service_provider: Schweizer Radio und Fernsehen
    Stream #0:0[0x50]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 100 tbc
    Stream #0:1[0x51](deu): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 192 kb/s (clean effects)
    Stream #0:2[0x52](eng): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, fltp, 192 kb/s (clean effects)
    Stream #0:3[0x5b](deu): Audio: ac3 ([6][0][0][0] / 0x0006), 48000 Hz, 5.1(side), fltp, 448 kb/s (clean effects)
    Stream #0:4[0x6e](deu,deu): Subtitle: dvb_teletext ([6][0][0][0] / 0x0006)
    Stream #0:5[0x70]: Unknown: none ([5][0][0][0] / 0x0005)
    Stream #0:6[0x72]: Unknown: none ([12][0][0][0] / 0x000C)

We can see that the MPEG-TS stream contains multiple individual streams, which are listed in the output above. Now, I don't know what's up with 0:5 and 0:6, or why ffmpeg doesn't understand them. Anyway, I only need the video and one audio channel. Let's just pick the first two, and record a one-hour TV show:

ffmpeg -f mpegts -i udp://239.77.0.77:5000 -map 0:0 -map 0:1 -c copy -t 3600 recording.mkv

To break it down:

  • -f mpegts tells ffmpeg that the input is an MPEG transport stream.
  • -i udp://239.77.0.77:5000 tells ffmpeg to join the specified multicast group and receive the MPEG-TS stream on UDP port 5000.
  • -map 0:0 -map 0:1 only extracts the streams 0:0 (the H.264 video stream) and 0:1 (the German audio channel in MP2)
  • -c copy causes the input stream to be demuxed only, and the selected streams to be written to the output without CPU-intensive decoding and reencoding.
  • -t 3600 terminates the stream after one hour, when the show is over.
  • recording.mkv is the output filename. The container format (here MKV) is deduced from the filename.

So the whole ffmpeg command takes the original stream as input, demultiplexes it to get the individual media streams, throws out all but one video and one audio stream, and multiplexes the rest into a Matroska container, which is then written to disk.

Extracting 3D Models From CesiumJS - Part 1: Terrain Map Scraping

This article is part of a series:
  1. Extracting 3D Models From CesiumJS - Part 1: Terrain Map Scraping

CesiumJS is an open source JavaScript framework for rendering 2D and 3D maps - everything from a local area to whole planets - in a web browser using WebGL. In the past few weeks I've been working on obtaining 3D model data in a situation where the only easily available way of accessing the data is through a CesiumJS-based viewer. As far as I know, Cesium deals with two different kinds of 3D data: on one side, there are 3D models used for small-scale objects like buildings; on the other side, there are terrain maps.

Addressing Terrain Tiles

To get started with the terrain map, I needed to figure out how to obtain terrain data for a certain geographical region. Luckily, this is fairly well documented. CesiumJS uses its own solution called quantized-mesh.

quantized-mesh supports different "zoom levels", for which the whole globe is divided into more and more individual "tiles". At level 0, there are only two tiles: the first tile covers the western hemisphere, the second tile covers the eastern hemisphere. With each increase in zoom level, each tile is split into 4, each new tile containing a quadrant of the previous tile. Each tile can then be identified by its zoom level z, an x coordinate and a y coordinate. x starts at 0 at -180° longitude and increases eastward, reaching +180° longitude at x=2^(z+1). y starts at 0 at the south pole at -90° latitude and increases northward, reaching +90° latitude at y=2^z. There is no +1 in the exponent here, since for full coverage, x needs to cover the full 360° of longitude, while y only needs to cover half as much for a total of 180° of latitude.

Using these three variables, z, x and y, the quantized-mesh specification defines a URL template for addressing an individual tile via HTTP:

http://example.com/tiles/<z>/<x>/<y>.terrain

This can be a little hard to imagine, so I attempted to visualize the first two zoom levels in Figure 1.

Figure 1: Tiles at zoom levels 0 and 1. At level 0, there are 2 tiles, each covering a hemisphere. At level 1, there are 8 tiles in total, each covering an octant; each tile from level 0 was divided into 4 tiles.

If you're familiar with OpenStreetMap, you may recognize this way of dividing the globe into tiles and addressing individual tiles. An OpenStreetMap tile URL looks like this: https://tile.openstreetmap.org/14/8537/5725.png. This similarity is not accidental; in fact, the quantized-mesh tiling schema was designed to follow the Tile Map Service standard's tiling schema.

All of the above assumes our data source uses a WGS84 projection and the TMS tiling schema. quantized-mesh supports other configurations as well, where there is only a single tile at level 0, or with the x and y coordinates swapped. You can find more information in the documentation.

Mapping Geographical Regions to Terrain Tiles

Now that we know how quantized-mesh tiles are addressed, let's find out which tiles we actually need. In my use case, I wanted to obtain all tiles in a bounding box defined by lower and upper latitudes and longitudes. Converting x and y to coordinates is quite easy:

lat = -90 + y * 180 / (2**z)
lon = -180 + x * 180 / (2**z)

So to get the ranges for x and y, we solve those equations for x and y and add proper rounding:

x_min = floor( lon_min * (2**z)/180 + 2**z     )
x_max = ceil(  lon_max * (2**z)/180 + 2**z     )
y_min = floor( lat_min * (2**z)/180 + 2**(z-1) )
y_max = ceil(  lat_max * (2**z)/180 + 2**(z-1) )

The resulting ranges, with x_max and y_max being exclusive upper bounds, are then formulated as

tiles_x = range(x_min, x_max)
tiles_y = range(y_min, y_max)

So once we know at which zoom level our tiles are available, we can compute which tiles to download in order to fully cover our region.
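
Put into code, this could look like the following Python sketch; the function name and the example coordinates are my own choice, not part of the quantized-mesh specification:

import math

def tile_ranges(z, lat_min, lat_max, lon_min, lon_max):
    # Convert a geographical bounding box into the (exclusive) tile index
    # ranges covering it at zoom level z, using the formulas above.
    x_min = math.floor(lon_min * (2**z) / 180 + 2**z)
    x_max = math.ceil( lon_max * (2**z) / 180 + 2**z)
    y_min = math.floor(lat_min * (2**z) / 180 + 2**(z - 1))
    y_max = math.ceil( lat_max * (2**z) / 180 + 2**(z - 1))
    return range(x_min, x_max), range(y_min, y_max)

# Example: all tiles between 46°N..48°N and 6°E..10°E at zoom level 10
tiles_x, tiles_y = tile_ranges(10, 46, 48, 6, 10)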

Scraping Terrain Tiles

Now all that's left to do is to figure out how to actually download the individual tiles. Your web browser's inspector can be a great help here. Open the Cesium-based application in your web browser, and open the inspector's network tab. As you move around in the map, you will most likely see a lot of requests, which can be grouped into the following categories:

  • Image tiles, addressed in the same way as the terrain tiles. I didn't need these for my use case, but you should be able to scrape them the same way I'm scraping the terrain tiles.
  • 3D models, with file extensions of b3dm, glb and cmpt. I'll take a look at some of those in later articles.
  • The terrain tiles, with a file extension of terrain. These are the ones we want to obtain.

As you zoom in and out of the map, you'll see that the zoom levels in the requests for image and terrain tiles increase or decrease. Since I wanted the most detailed tiles, I just continued with the maximum zoom level that was available in the application I was working with.

Now just right click on one of the terrain requests, and navigate to Copy / Copy as cURL. You'll get something like this:

curl 'https://maps.example.com/terrain/tiles/12/12345/12345.terrain' \
  -H 'Origin: https://maps.example.com' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; ...)' \
  -H 'Accept: application/vnd.quantized-mesh,application/octet-stream;q=0.9' \
  -H 'Referer: https://maps.example.com/' \
  --compressed

At least the Accept header is usually required, and the quantized-mesh specification recommends setting it, especially since it's used to tell the server which extensions to the quantized-mesh standard are supported by the client, if any. Some other headers, especially Origin, Referer and User-Agent, may be required as a "soft form of access control", depending on the server's configuration. I found it worked best to just keep the entire curl request as-is, and only modify the z, x and y parameters in the URL.

Knowing the supported values for z and our ranges for x and y, we can now easily script the download of the individual files. A small hint regarding the filenames: Use all three parameters, z, x and y in the output filename, otherwise you end up overwriting the same files over and over again.
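
As a sketch of what such a download script could look like (using the tiles_x and tiles_y ranges from the previous section; the base URL, the zoom level and the headers are placeholders and would be taken from the copied curl command in practice):

import requests

BASE_URL = "https://maps.example.com/terrain/tiles"  # placeholder
HEADERS = {
    # At minimum the Accept header; Origin, Referer and User-Agent
    # may have to be carried over from the curl command as well.
    "Accept": "application/vnd.quantized-mesh,application/octet-stream;q=0.9",
}
z = 12  # the maximum zoom level offered by the application

for x in tiles_x:
    for y in tiles_y:
        response = requests.get(f"{BASE_URL}/{z}/{x}/{y}.terrain", headers=HEADERS)
        response.raise_for_status()
        # Keep all three parameters in the filename to avoid overwriting tiles.
        with open(f"tile_{z}_{x}_{y}.terrain", "wb") as f:
            f.write(response.content)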

Once the download is done, we are left with a bunch of binary .terrain files. The next article will cover how to parse them.