Sunday, February 5, 2023

The BfArM database of essential drugs in short supply lacks an API. So I implemented one.

tl;dr: The BfArM database of essential drugs in short supply lacks an API, so I implemented one in about 12 hours. The source code can be found on GitHub.

The state of digitalization in Germany is in dire straits, especially in the healthcare system and in government/administration. If APIs exist at all, they are often difficult to discover and/or undocumented (the private bund.dev project tries to collect and document them in a central repository).

Yet I was surprised when, inspired by an article about the shortage of essential drugs in Germany, I took a closer look at the official governmental database on this matter.

This database is hosted by the "Bundesinstitut für Arzneimittel und Medizinprodukte (BfArM)", and can be found here.

It can be accessed in two ways:

  1. As a filterable, dynamic HTML/JavaScript table rendered with JSF.
  2. As a CSV download.

No API for M2M communication or the like. At least I could not find one. There is also no documentation about the data itself. 

Regarding data quality, I noticed the following issues:

  1. The CSV file is encoded in ISO-8859-1 (Latin-1), not UTF-8. While this is not uncommon, it is a bit unexpected, since ISO-8859-1 only covers the first 256 Unicode code points. The file encoding is not documented.
  2. The CSV is actually not comma separated but tab separated.
  3. Missing data is not only NULL; it is also encoded as "N/A", "n.a.", "-", and "'-". There might be more undocumented encodings.
  4. "*" encodes "Altdatenübernahme war nicht möglich", meaning that older data could not be transferred. This is documented in the legend of the table, but not what it actually means.
  5. The update frequency of the data on display is not documented.

Having worked with databases for about 30 years now, I'd say this data comes directly from some kind of manually curated data set. There is obviously no decent data standardization process in place.

But whining alone doesn't help, so I decided to implement the missing API on a tiny server hosted in Germany. It took me about 12 hours of my private time, including implementing basic data sanitization.
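
To give an idea, a minimal sketch of what that sanitization step can look like in Python with pandas (file name, flag value and column handling are illustrative, based on the issues listed above):

import pandas as pd

# File name is a placeholder; the real download name may differ.
df = pd.read_csv(
    "bfarm_lieferengpaesse.csv",
    sep="\t",                    # tab separated, despite the .csv suffix
    encoding="ISO-8859-1",       # Latin-1, not UTF-8
    na_values=["N/A", "n.a.", "-", "'-"],  # the NULL spellings observed so far
    dtype=str,
)

# "*" means "Altdatenübernahme war nicht möglich"; keep it as an explicit flag
# instead of silently treating it as missing data.
df = df.replace({"*": "ALTDATEN_NICHT_UEBERNOMMEN"})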

The most difficult part was automating the CSV download, since the submit button calls some JavaScript function and thus cannot be triggered by a plain HTTP request or a scraping library like Beautiful Soup. I'm now using a remote-controlled headless Firefox via Selenium. That the HTML name attribute changes frequently does not help, either.
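
Roughly, the download automation boils down to something like this (a sketch assuming Selenium 4 with geckodriver; the URL and the button locator are placeholders, since the changing name attribute cannot be relied on):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

BFARM_URL = "https://example.invalid/bfarm-lieferengpaesse"  # placeholder, see link above
DOWNLOAD_DIR = "/tmp/bfarm"                                  # must already exist

options = Options()
options.add_argument("-headless")
# Save the CSV straight to disk instead of opening a download dialog.
options.set_preference("browser.download.folderList", 2)
options.set_preference("browser.download.dir", DOWNLOAD_DIR)
options.set_preference("browser.helperApps.neverAsk.saveToDisk", "text/csv")

driver = webdriver.Firefox(options=options)
try:
    driver.get(BFARM_URL)
    # The name attribute changes frequently, so locate the button by its
    # visible label instead (the label text here is an assumption).
    driver.find_element(By.XPATH, "//input[contains(@value, 'CSV')]").click()
finally:
    driver.quit()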

The demo API is available at https://18.185.116.7:8443/docs. You will be warned because of the self-signed SSL certificate. This is OK: AWS did not want to register a domain, so no Let's Encrypt. But this just swaps SSL semantics for SSH-style trust-on-first-use anyway. Since it is for demonstration purposes only and runs on a small server, there is a rate limiter in place. Resource names and data are in German, like in the source system.

The source code can be found on GitHub. Maybe somebody at BfArM realizes that it does not cost a fortune to implement an API on top of what they already have, and finds my example useful to build on.
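
For scale, a stripped-down sketch of what one such endpoint can look like (FastAPI is an assumption based on the /docs path of the demo; the column name Wirkstoff is purely illustrative):

from typing import Optional

import pandas as pd
from fastapi import FastAPI

app = FastAPI(title="Lieferengpässe (Demo)")

def load_data() -> pd.DataFrame:
    # The sanitized CSV produced by the download and cleanup steps above.
    return pd.read_csv("bfarm_lieferengpaesse_clean.csv", sep="\t", dtype=str)

@app.get("/meldungen")
def meldungen(wirkstoff: Optional[str] = None):
    df = load_data()
    if wirkstoff:
        # Simple case-insensitive substring filter on the active ingredient.
        df = df[df["Wirkstoff"].str.contains(wirkstoff, case=False, na=False)]
    return df.fillna("").to_dict(orient="records")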

Public data (I assume it is public, since there is no need to register) is updated daily at 11:00 UTC. In compliance with Article 4 of the GDPR, personal information (client IP address, telephone number, and e-mail address) is not replicated, displayed, or stored. As mentioned before, the API is hosted in Germany.

I don't intend to run this forever, and outages are possible at any time.

Tuesday, December 27, 2022

pg_sentinel - Update

In 2016 I released pg_sentinel as a proof of concept for implementing a sentinel value sensor deep inside the PostgreSQL server. It no longer compiles on PostgreSQL >= 12.x because of changes in the internal API. So I adapted my old code to make it work again.

As a bonus, it does not need SPI any more.

Saturday, July 23, 2022

After 18 years, pgchem::tigress retires

To whom it may concern.

Today I will retire pgchem::tigress, the PostgreSQL chemoinformatics extension based on OpenBabel, after 18 years of service. This decision is based on three main reasons:

  1. I have not touched the GiST index code for at least eight years, but beginning with PostgreSQL 14.x it started to cause SIGSEGVs when building the index on molecules, and I'm unable to find the cause.
  2. OpenBabel 3.x changed its API in ways that would require me to rewrite functions or disable them, and those changes are not very well documented.
  3. Since my recent brush with death, I have decided that there are better ways to spend my time than chasing signal 11s, especially since the RDKit cartridge has come a long way and is more powerful than pgchem::tigress ever was.

This decision was not easy, since building pgchem::tigress was a part of my life. It was the first open source software ever released by Bayer AG (at least in Germany). It is also the foundation of my PhD thesis.

The code will remain public as long as there is a way to publish it.

Friday, May 22, 2020

Native (PostgreSQL only) streaming data tables

If you want to see (and analyze) only a window of data over some continuous data stream in PostgreSQL, one way is to use a specialized tool like the PipelineDB extension. But if you can't do that, e.g. because you are stuck with AWS RDS or for some other reason, streaming data tables, or continuous views, can be implemented with pretty much PostgreSQL alone.

The basic idea is to have a table that allows for fast INSERT operations, is aggressively VACUUMed, and has some key that can be used to prune outdated entries. This table is fed with the events from the data stream and regularly pruned. Voilà: a streaming data table.
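
A minimal sketch of that idea with psycopg2 (table layout, window size and connection string are illustrative; the complete examples mentioned below do more):

import psycopg2

conn = psycopg2.connect("dbname=test")
conn.autocommit = True

with conn.cursor() as cur:
    # Fast INSERTs, no WAL overhead - the window is ephemeral anyway.
    cur.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS stream_window (
            ts   timestamptz NOT NULL DEFAULT now(),
            data jsonb
        )""")
    # A BRIN index keeps pruning cheap without slowing INSERTs down much.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS stream_window_ts_brin
        ON stream_window USING brin (ts)""")

def insert_event(payload: str) -> None:
    # payload must be a valid JSON document, it is cast to jsonb on insert
    with conn.cursor() as cur:
        cur.execute("INSERT INTO stream_window (data) VALUES (%s)", (payload,))

def prune(window_minutes: int = 5) -> None:
    # Call this on every INSERT, or (faster) at regular intervals.
    with conn.cursor() as cur:
        cur.execute(
            "DELETE FROM stream_window WHERE ts < now() - %s * interval '1 minute'",
            (window_minutes,))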

We have done some testing with two approaches on an UNLOGGED table: pruning on every INSERT, and pruning at regular intervals. UNLOGGED is not a problem here, since a view on a data stream can be considered pretty much ephemeral.

The timed variant is about 5x - 8x faster on INSERTs. And if you balance the timing and the pruning interval right, the window size is almost as stable.

The examples are implemented in Python3 with psycopg2. Putting an index on the table can help or hurt performance, INSERT might get slower but pruning with DELETE faster, depending on the size and structure of the data. Feel free to experiment. In our case, a vanilla BRIN index did just fine.

Instead of using an external scheduler for pruning, like the Python daemon thread in the stream_timed_cleanup.py example, other scheduling mechanisms can of course be used, e.g. pg_cron, a scheduled Lambda on AWS, or similar.

Feel free to experiment and improve...

Tuesday, May 19, 2020

MQTT as transport for PostgreSQL events

MQTT has become a de-facto standard for the transport of messages between IoT devices. As a result, a plethora of libraries and MQTT message brokers have become available. Can we use this to transport messages originating from PostgreSQL?

As message broker we use Eclipse Mosquitto, which is dead simple to set up if you don't have to change the default settings. Such a default installation is neither secure nor highly available, but for our demo it will do just fine. The event generators are written in Python3 with Eclipse paho mqtt for Python.

There are at least two ways to generate events from a PostgreSQL database: pg_recvlogical and NOTIFY / LISTEN. Both have their advantages and shortcomings.

pg_recvlogical:

  • Configured on server and database level
  • Generates comprehensive information about everything that happens in the database
  • No additional programming necessary
  • Needs plugins to decode messages, e.g. into JSON
  • Filtering has to be done later, e.g. by the decoder plugin

NOTIFY / LISTEN:
  • Configured on DDL and table level
  • Generates exactly the information and format you program into the triggers
  • Filtering can be done before sending the message
  • Needs trigger programming
  • The message size is limited to 8000 bytes

Examples for both approaches can be found here. The NOTIFY / LISTEN example lacks a proper decoder, but that makes it a good exercise to start with. The pg_recvlogical example needs the wal2json plugin, which can be found here, and the proper setup, which is also explained in the Readme. Please note that the slot used in the example is mqtt_slot, not test_slot:


pg_recvlogical -d postgres --slot mqtt_slot --create-slot -P wal2json

Otherwise, setup.sql should generate all objects to run both examples.
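
For the NOTIFY / LISTEN side, the bridge essentially boils down to this (a sketch; channel name, topic and connection details are illustrative, paho-mqtt 1.x API assumed):

import select

import paho.mqtt.client as mqtt
import psycopg2
import psycopg2.extensions

pg = psycopg2.connect("dbname=postgres")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN pg_events")   # channel name is an assumption

mq = mqtt.Client()                # paho-mqtt 1.x constructor
mq.connect("localhost", 1883)     # default Mosquitto installation
mq.loop_start()

while True:
    # Block until the PostgreSQL socket becomes readable, then collect
    # pending notifications and forward each payload to the broker.
    if select.select([pg], [], [], 5) == ([], [], []):
        continue
    pg.poll()
    while pg.notifies:
        note = pg.notifies.pop(0)
        # The payload is whatever the trigger put into pg_notify(), max. 8000 bytes.
        mq.publish("postgres/pg_events", note.payload)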

Saturday, April 25, 2020

It looks like pgchem::tigress just got a major upgrade

With the release of PostgreSQL 12.x and OpenBabel 3.x, I decided to see if pgchem::tigress would still compile. Well, it took some minor changes, but YES, it does!

And it seems like OpenBabel now handles E/Z and enantiomer stereochemistry correctly, at least in SMILES notation. This is a major step forward, but I have to do some more checks before the next release...
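
One quick way to spot-check this with the OpenBabel 3.x Python bindings is a canonical SMILES round trip of an E-alkene and a molecule with a chiral centre (just an illustration, not the full checks):

from openbabel import pybel

# trans-2-butene (E double bond) and L-alanine (chiral centre)
for smi in ("C/C=C/C", "N[C@@H](C)C(=O)O"):
    mol = pybel.readstring("smi", smi)
    # The stereo descriptors should survive the round trip.
    print(smi, "->", mol.write("can").strip())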

Sunday, March 15, 2020

Authenticate PostgreSQL users against the Amazon AWS Cognito service

I was asked recently if PostgreSQL could authenticate login users against AWS Cognito.  Since PostgreSQL allows PAM authentication, I was pretty sure it could.

But an (admittedly not exhaustive) search on the web did not produce any PAMs for Cognito.

So I wrote one, using pam-python, boto3, warrant and pyJWT.

It is designed primarily for PostgreSQL and pgbouncer, so it only supports pam_sm_authenticate and pam_sm_acct_mgmt, and all the work is done in pam_sm_authenticate. Because calling Cognito is comparatively slow, I didn't want to call it twice.
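
A condensed sketch of the idea with pam-python, boto3 and warrant (the actual module also does the pyJWT token checks; argument handling and error reporting are simplified here):

# cognito_PAM.py - condensed sketch, not the full module
import boto3
from warrant.aws_srp import AWSSRP

def pam_sm_authenticate(pamh, flags, argv):
    # argv carries aws_region, user_pool_id and client_id from the pam.d line below.
    region, pool_id, client_id = argv[1], argv[2], argv[3]
    try:
        user = pamh.get_user(None)
        if pamh.authtok is None:
            resp = pamh.conversation(
                pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, "Password: "))
            pamh.authtok = resp.resp
        client = boto3.client("cognito-idp", region_name=region)
        # USER_SRP_AUTH handshake; raises if the credentials are wrong.
        AWSSRP(username=user, password=pamh.authtok, pool_id=pool_id,
               client_id=client_id, client=client).authenticate_user()
        return pamh.PAM_SUCCESS
    except Exception:
        return pamh.PAM_AUTH_ERR

def pam_sm_acct_mgmt(pamh, flags, argv):
    # All the work already happened in pam_sm_authenticate,
    # to avoid calling Cognito twice.
    return pamh.PAM_SUCCESS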

The necessary pam.d config is:

#%PAM-1.0
# Information for PostgreSQL process with the 'pam' option.
auth required  pam_python.so cognito_PAM.py aws_region user_pool_id client_id 
account required pam_python.so cognito_PAM.py

If you use PAM authentication, passwords are sent in cleartext, so transport layer encryption, e.g. SSL/TLS, between client and server becomes mandatory!

I think it does the correct dance of authentication with Cognito and supports USER_SRP_AUTH, but if you see any problem, please raise a paw.