How to use sqlc + SQLite + jsonb? #3953

Unanswered
brandur asked this question in Q&A
May 3, 2025 · 1 comment · 3 replies

Hey all,

I've been trying to use the combination of sqlc + SQLite. I've run into a number of smaller issues so far, but have managed to work through most of them via various workarounds. One that I've not been able to find a workaround for (despite trying everything I could think of) is the use of jsonb, and I'm wondering whether I'm missing something or if it's fairly broken right now (I suspect the latter, but wanted to gut-check it).

I have a table with a jsonb field like (see metadata):

CREATE TABLE river_queue (
 name text PRIMARY KEY NOT NULL,
 created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 metadata blob NOT NULL DEFAULT (jsonb('{}')),
 paused_at timestamp,
 updated_at timestamp NOT NULL
);

I'm inserting into it using this query (this is an INSERT, but the same applies for a SELECT * and every other type of query):

-- name: QueueCreateOrSetUpdatedAt :one
INSERT INTO /* TEMPLATE: schema */river_queue (
 created_at,
 metadata,
 name,
 paused_at,
 updated_at
) VALUES (
 coalesce(cast(sqlc.narg('now') AS text), datetime('now', 'subsec')),
 jsonb(cast(@metadata AS blob)),
 @name,
 cast(sqlc.narg('paused_at') AS text),
 coalesce(cast(sqlc.narg('updated_at') AS text), cast(sqlc.narg('now') AS text), datetime('now', 'subsec'))
) ON CONFLICT (name) DO UPDATE
SET
 updated_at = EXCLUDED.updated_at
RETURNING *;

I'm running a test where I insert a row and then assert on the result:

t.Run("InsertsANewQueueWithDefaultUpdatedAt", func(t *testing.T) {
	t.Parallel()
	exec, bundle := setup(ctx, t)
	metadata := []byte(`{"foo": "bar"}`)
	now := time.Now().UTC()
	queue, err := exec.QueueCreateOrSetUpdatedAt(ctx, &riverdriver.QueueCreateOrSetUpdatedAtParams{
		Metadata: metadata,
		Name: "new-queue",
		Now: &now,
		Schema: bundle.schema,
	})
	require.NoError(t, err)
	require.WithinDuration(t, now, queue.CreatedAt, bundle.driver.TimePrecision())
	require.Equal(t, metadata, queue.Metadata)
	require.Equal(t, "new-queue", queue.Name)
	require.Nil(t, queue.PausedAt)
	require.WithinDuration(t, now, queue.UpdatedAt, bundle.driver.TimePrecision())
})

It fails with an error like this:

riverdrivertest.go:3030:
 Error Trace: /Users/brandur/Documents/projects/river/internal/riverinternaltest/riverdrivertest/riverdrivertest.go:3030
 Error: Not equal:
 expected: []byte{0x7b, 0x22, 0x66, 0x6f, 0x6f, 0x22, 0x3a, 0x20, 0x22, 0x62, 0x61, 0x72, 0x22, 0x7d}
 actual : []byte{0x8c, 0x37, 0x66, 0x6f, 0x6f, 0x37, 0x62, 0x61, 0x72}
 Diff:
 --- Expected
 +++ Actual
 @@ -1,3 +1,3 @@
 -([]uint8) (len=14) {
 - 00000000 7b 22 66 6f 6f 22 3a 20 22 62 61 72 22 7d |{"foo": "bar"}|
 +([]uint8) (len=9) {
 + 00000000 8c 37 66 6f 6f 37 62 61 72 |.7foo7bar|
 }
 Test: TestDriverRiverSQLite/QueueCreateOrSetUpdatedAt/InsertsANewQueueWithDefaultUpdatedAt

The problem is that the data is actually coming back as SQLite's jsonb binary format rather than usable JSON.

According to SQLite's docs on jsonb, the format is meant to be internal only, so even if we could deserialize it in our Go code, we should make no attempt to do that:

JSONB is not intended as an external format to be used by applications. JSONB is designed for internal use by SQLite only. Programmers do not need to understand the JSONB format in order to use it effectively. Applications should access JSONB only through the JSON SQL functions, not by looking at individual bytes of the BLOB.

So what should ideally be happening here is that jsonb is run through json(...) before being sent back through sqlc, but I don't know how to pull this off.
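At the plain SQLite level, wrapping the column in json() does produce readable text again; this is just to illustrate what I mean, outside of sqlc (the 'new-queue' row is the one inserted by the test below):

-- Raw column: SQLite returns the internal JSONB blob bytes.
SELECT metadata FROM river_queue WHERE name = 'new-queue';

-- Wrapped in json(): SQLite converts the blob back to JSON text, e.g. {"foo":"bar"}.
SELECT json(metadata) FROM river_queue WHERE name = 'new-queue';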

I thought about replacing RETURNING * with RETURNING sqlc.embed(river_queue), json(metadata) AS metadata, but this isn't great (it breaks all our normal sqlc conventions), and it leads to even more problems: (1) sqlc + SQLite doesn't support sqlc.embed here, and (2) sqlc + SQLite doesn't support renaming columns here with AS, so you end up with a bunch of positional results.

I'm not sure what the best approach here would be, but possibly something like sqlc recognizing a jsonb "sentinel" type in a table definition (there is no jsonb type; these are all stored as blob), and knows to json(...) it before returning.
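Purely for illustration (none of this exists in sqlc today), the sentinel could be as simple as declaring the column with a jsonb type name in the schema that sqlc parses, or having sqlc key off the existing jsonb('{}') default expression:

CREATE TABLE river_queue (
 name text PRIMARY KEY NOT NULL,
 created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 -- hypothetical: "jsonb" as a sentinel type name that sqlc would recognize;
 -- SQLite accepts arbitrary type names here, so the stored values wouldn't change
 metadata jsonb NOT NULL DEFAULT (jsonb('{}')),
 paused_at timestamp,
 updated_at timestamp NOT NULL
);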

Any thoughts/ideas?


Replies: 1 comment 3 replies

Comment options

I'm not sure what the best approach here would be, but possibly something like sqlc recognizing a jsonb "sentinel" type in a table definition (there is no jsonb type; these are all stored as blob), and knows to json(...) it before returning.

This is the first time we've run into an issue where the value returned from selecting the column isn't correct. First, I'll say it's a pretty strange decision by SQLite to even return this internal-only bytes representation, but it is what it is. Second, a solution to this could be generalized to allow for a whole host of features. I think it's a mistake to try to design that now.

Instead, the only decent solution I see is to recognize when a SQLite blob column is stored as jsonb and always generate json(col) as col instead of col when expanding *.
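For the river_queue table above, that would mean a generated query along these lines (a sketch of the proposed expansion, not what sqlc emits today):

-- Hypothetical output of expanding SELECT * once metadata is known to hold jsonb:
SELECT name, created_at, json(metadata) AS metadata, paused_at, updated_at
FROM river_queue;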

3 replies
Comment options

Thanks Kyle. Yeah, I agree about the SQLite design, and moreover, I'm amazed I haven't been able to find more resources on best practices for working around this kind of thing. They say SQLite is the most commonly distributed DB in the world, but if that's true, based on the lack of good developer information available, 99.99% of those deployments must be using SQLite as a flat file. The ratio of good Postgres information to good SQLite information on a site like Stack Overflow is about 100:1.

Instead, the only decent solution I see is to recognize when a SQLite blob column is stored as jsonb and always generate json(col) as col instead of col when expanding *.

That's kind of where I'm landing too. If we were to make this change, would it need some kind of compatibility flag in sqlc for users who might have been relying on the old behavior, or can we get away without one in this case because the old behavior didn't have any feasible path to actually working?

Comment options

The existing behavior is so broken that I doubt anyone is using it. What we usually do in this case is fix the default behavior and then add a flag to turn it off.

Comment options

Opened: #3968
