Broadcasting row updates with PostgreSQL triggers
Here’s some SQL that might be useful if you are working on an application backed by PostgreSQL.
I’m going to show you how to get the set of changed columns inside an UPDATE trigger handler, which can then
be used to send out a NOTIFY. This notification can be picked up with a PostgreSQL LISTEN and passed on
to browsers connected to your application via a web socket. This lets your clients stay up to date on any
row changes without resorting to inefficient polling.
We are going to exploit the fact that every UPDATE trigger function in PostgreSQL receives both the previous
row (OLD) and the current row (NEW). We start by defining the actual TRIGGER:
```sql
CREATE TRIGGER foobar_updated
  AFTER UPDATE ON foobar
  FOR EACH ROW
  EXECUTE PROCEDURE notify_foobar_updated();
```
Like a lot of SQL this reads quite plainly: for every updated row in the table foobar,
the function notify_foobar_updated will be called. This special TRIGGER function receives a copy of the old
row and the updated row. This happens automatically; we don’t need to “pass” these arguments ourselves.
We can then derive the actual changeset inside notify_foobar_updated by joining the previous row in OLD
with the updated row in NEW.
Here is the trigger handler:
```sql
CREATE OR REPLACE FUNCTION notify_foobar_updated() RETURNS trigger AS $$
DECLARE
  changes jsonb;
BEGIN
  -- Join the old and new row as key/value pairs and keep
  -- only the columns whose values actually changed.
  SELECT jsonb_object_agg(n.key, n.value)
    INTO changes
    FROM jsonb_each(to_jsonb(NEW)) AS n
    JOIN jsonb_each(to_jsonb(OLD)) AS o ON n.key = o.key
   WHERE n.value IS DISTINCT FROM o.value;

  -- Broadcast the row id together with the changeset.
  PERFORM pg_notify(
    'foobar_updated',
    jsonb_build_object('id', NEW.id, 'changes', changes)::text
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
The result of the above SELECT query joining OLD and NEW is assembled, together with the row id, into a JSON object and broadcast
to listening clients as a JSON-marshalled text string on the channel foobar_updated, using pg_notify (the function equivalent of NOTIFY).
It’s no more complicated than that.
It is worth noting that it is possible to skip deriving the changeset and just broadcast a complete JSON representation
of the updated row by marshalling NEW. Keep in mind, however, that the PostgreSQL NOTIFY payload
is limited to 8000 bytes by default; if the payload is any bigger the call will fail. If you know that your rows are always smaller than that, you can
just broadcast the whole new row right away.
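In that case the handler body shrinks to a single call. A minimal sketch, assuming the same foobar table and channel name as above:

```sql
-- Broadcast the entire updated row as JSON.
-- Only safe if rows are guaranteed to stay under the payload limit.
PERFORM pg_notify('foobar_updated', row_to_json(NEW)::text);
```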
Of course the actual changeset could potentially also grow past 8000 bytes, in which case the individual changes should probably be broadcast separately; I will leave that for you, the reader, to handle.
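Whatever you broadcast, a cheap safety net is to check the payload size before notifying. A sketch, where payload is a hypothetical text variable holding the marshalled JSON (octet_length is a standard PostgreSQL function returning the size in bytes):

```sql
-- Skip oversized payloads instead of letting pg_notify fail.
IF octet_length(payload) >= 8000 THEN
  RAISE WARNING 'payload for foobar_updated too large, skipping';
ELSE
  PERFORM pg_notify('foobar_updated', payload);
END IF;
```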
It should also be mentioned that some solutions to the problem of deriving the changeset rely on the hstore extension,
taking the difference of NEW and OLD after converting them to the hstore type. This did work for me, with the exception
that my jsonb fields were marshalled as strings in the resulting JSON object. This was a problem because I needed to reach into these
objects reliably, so I could not use a solution based on hstore.
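For reference, the hstore-based diff usually looks something like this (it requires CREATE EXTENSION hstore, and as noted, jsonb values come out as strings):

```sql
-- Keep the pairs present in NEW whose values differ from OLD.
SELECT hstore_to_json(hstore(NEW) - hstore(OLD));
```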
If you are using node.js then a library such as pg-listen can be used to listen to all table updates. Let’s take a look at a working example.
First we create a simple table and run the previously shown SQL:
```sql
-- An example schema; any columns will do.
CREATE TABLE foobar (
  id   serial PRIMARY KEY,
  name text,
  data jsonb
);
```
And here is how to use pg-listen in node.js:
```javascript
const createSubscriber = require("pg-listen");

// Adjust the connection string to point at your database.
const subscriber = createSubscriber({
  connectionString: "postgres://localhost/mydb"
});

// Called for every NOTIFY on the foobar_updated channel;
// pg-listen parses the JSON payload for us.
subscriber.notifications.on("foobar_updated", (payload) => {
  console.log("foobar changed:", payload);
});

subscriber.events.on("error", (error) => {
  console.error("Fatal database connection error:", error);
  process.exit(1);
});

(async () => {
  await subscriber.connect();
  await subscriber.listenTo("foobar_updated");
})();
```
That’s all for now. Insert some values into foobar and run some updates; you can probably guess what is going to happen. Wiring this up to
your web socket layer to send the updates to connected browsers should be pretty easy.
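For example, assuming foobar has a text column called name (my example schema; adjust to yours), running these statements in psql should make the listening node.js process print a notification:

```sql
INSERT INTO foobar (name) VALUES ('hello');
UPDATE foobar SET name = 'world' WHERE name = 'hello';
```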
Here are two ideas for improvement:
- Trigger for INSERT and DELETE queries too
- Generalize the trigger handler to work with any table
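As a starting point for both ideas, here is a sketch of a table-agnostic handler of my own (not from the article above); it assumes every table has an id column and broadcasts on a channel named after the table, using the TG_TABLE_NAME and TG_OP variables PostgreSQL provides automatically:

```sql
CREATE OR REPLACE FUNCTION notify_row_changed() RETURNS trigger AS $$
DECLARE
  rec record;
BEGIN
  -- DELETE triggers only provide OLD; INSERT and UPDATE provide NEW.
  IF TG_OP = 'DELETE' THEN
    rec := OLD;
  ELSE
    rec := NEW;
  END IF;

  PERFORM pg_notify(
    TG_TABLE_NAME || '_changed',
    jsonb_build_object('op', TG_OP, 'id', rec.id)::text
  );
  RETURN rec;
END;
$$ LANGUAGE plpgsql;
```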