Mastodon timelines for teams
Welcome to episode 17 of the Steampipe+Mastodon series, in which we introduce a new subplot: timeline history. So far, the examples I’ve shown and discussed work with current timelines. We’ve seen SQL queries that fetch results from real-time calls to the Mastodon API, and Steampipe dashboards that display those results. But Steampipe isn’t just an API siphon; it’s also a Postgres database. As such it not only supports the transient tables created by Steampipe’s foreign data wrapper and plugins, but also enables you to create your own native tables. And you can use those native tables to accumulate data from the transient foreign tables.
Because saving and searching Mastodon data is a controversial topic in the fediverse—none of us wants to recapitulate Big Social—I’ve focused thus far on queries that explore recent Mastodon flow, of which there are plenty more to write. But nobody should mind me remembering my own home timeline, so a few weeks ago I made a tool to read it hourly and add new toots to a Postgres table.
Before you can add any toots to a table, of course, you’ve got to create that table. Here’s how I made this one.
```sql
create table mastodon_home_timeline as
  select * from mastodon_toot_home limit 200
```
Once created, the table can be updated with new toots like so.
```sql
with data as (
  select
    account,
    -- more columns --
    username
  from
    mastodon_toot_home
  limit 200
)
insert into mastodon_home_timeline (
  account,
  -- more columns --
  username
)
select *
from data
where id not in ( select t.id from mastodon_home_timeline t )
```
To run that query from a crontab, on a machine where Steampipe is installed, save it as `mastodon_home_timeline.sql`, then schedule it.
```
15 * * * * cd /home/jon/mastodon; steampipe query mastodon_home_timeline.sql
```
That’s it! Now the number reported by `select count(*) from mastodon_home_timeline` is growing hourly.
I’ve only been collecting toots for a couple of weeks, and haven’t yet begun to explore that data; we’ll see what happens when we get there. Meanwhile, though, I want to show how such exploration can be a team exercise.
A friend of mine, whom I’ll call Elvis, shares my interest in teasing out connections among people, servers, and hashtags. He could capture his own timeline using the method shown here. But since we’ll be looking at this data together, we agreed that I’ll gather both our timelines. To enable that, he shared a (revocable) Mastodon API token that I’ve used to configure Steampipe with credentials for both our accounts.
connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }
Steampipe’s foreign data wrapper turns each of these named connections into its own Postgres schema. Although we happen to share the same home server, by the way, we needn’t. A team collaborating like this could pool timelines from mastodon.social and hachyderm.io and fosstodon.org and any other Mastodon-API-compatible server.
(You can do the same thing with AWS or Slack or GitHub or any other kind of account by defining multiple connections. Steampipe makes API calls concurrently across parallel connections.)
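For example, here’s a minimal sketch of the same multi-connection pattern with the AWS plugin; the connection names and AWS profiles are placeholders, not anything from this series.

```hcl
# illustrative only: placeholder connection names and AWS CLI profiles
connection "aws_account_a" {
  plugin  = "aws"
  profile = "account_a"
  regions = ["us-east-1"]
}

connection "aws_account_b" {
  plugin  = "aws"
  profile = "account_b"
  regions = ["us-east-1"]
}
```

Each connection becomes its own Postgres schema, just as with the Mastodon connections shown here.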
With this configuration I can read my timeline like so.
```sql
select * from mastodon_social_jon.mastodon_toot_home limit 200
```
And Elvis’s like so.
```sql
select * from mastodon_social_elvis.mastodon_toot_home limit 200
```
If I want to query both in real time, for example to count the combined total, I can use a SQL UNION, as sketched below. Or I can define an umbrella connection that aggregates these two.
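Here’s a minimal sketch of the UNION approach; the plain union (rather than union all) drops toots that appear in both home timelines.

```sql
-- count the combined total across both connections in real time
select count(*) as combined_toots
from (
  ( select id from mastodon_social_jon.mastodon_toot_home limit 200 )
  union
  ( select id from mastodon_social_elvis.mastodon_toot_home limit 200 )
) pooled
```

The umbrella connection, by contrast, is declared in the Steampipe configuration.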
connection "all_mastodon" { plugin = "mastodon" type = "aggregator" connections = [ "mastodon_social_jon", "mastodon_social_elvis" ] } connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }
Now the query `select * from all_mastodon.mastodon_toot_home limit 200` makes API calls on behalf of both accounts, in parallel, and combines the results. When we follow the resulting URLs in order to reply or boost, we’ll do so as individual identities. And we’ll be able to use Steampipe queries and dashboards in that same single-user mode. But we’ll also be able to pool our timelines and point our queries and dashboards at the combined history.
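As a rough sketch of what a pooled-history query might look like, suppose Elvis’s timeline accumulates into a second native table (the name mastodon_home_timeline_elvis is hypothetical) using the same crontab pattern shown earlier. Then a server-level tally of the combined history could look like this.

```sql
-- hypothetical: assumes a second accumulation table, mastodon_home_timeline_elvis,
-- built the same way as mastodon_home_timeline, and a server column captured
-- from mastodon_toot_home
select
  server,
  count(*) as toots
from (
  select server from mastodon_home_timeline
  union all
  select server from mastodon_home_timeline_elvis
) pooled
group by server
order by toots desc
```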
Will that prove interesting? Useful? That remains to be seen. I think it’s one of many experiments worth trying as the fediverse sorts itself out. And I see Steampipe as one laboratory in which to run such experiments. With SQL as the abstraction over APIs, aggregation of connections, and dashboards as code, you have all the ingredients needed to iterate rapidly, at low cost, toward shared Mastodon spaces tailored for teams or groups.
This series:
- Autonomy, packet size, friction, fanout, and velocity
- Mastodon, Steampipe, and RSS
- Browsing the fediverse
- A Bloomberg terminal for Mastodon
- Create your own Mastodon UX
- Lists and people on Mastodon
- How many people in my Mastodon feed also tweeted today?
- Instance-qualified Mastodon URLs
- Mastodon relationship graphs
- Working with Mastodon lists
- Images considered harmful (sometimes)
- Mapping the wider fediverse
- Protocols, APIs, and conventions
- News in the fediverse
- Mapping people and tags in Mastodon
- Visualizing Mastodon server moderation
- Mastodon timelines for teams