> The sqlite_sequence table is the most underappreciated debugging tool in SQLite. It tracks the highest auto-increment value ever assigned for each table — even if that row was subsequently lost.
> WorkQueueTask.count returns ~300 (current rows). The sequence shows 3,700+ (every task ever created). If those numbers diverge unexpectedly, something deleted rows it shouldn't have.
Or it means that SQLite is exhibiting some of its "maybe I will, maybe I won't" behavior [0]:
> Note that "monotonically increasing" does not imply that the ROWID always increases by exactly one. One is the usual increment. However, if an insert fails due to (for example) a uniqueness constraint, the ROWID of the failed insertion attempt might not be reused on subsequent inserts, resulting in gaps in the ROWID sequence. AUTOINCREMENT guarantees that automatically chosen ROWIDs will be increasing but not that they will be sequential.
0: https://sqlite.org/autoinc.html
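The count-versus-sequence divergence is easy to reproduce with Python's stdlib sqlite3 (a minimal sketch; the table and row names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)

# Create five tasks, then delete the first three.
con.executemany(
    "INSERT INTO tasks (name) VALUES (?)", [(f"task-{i}",) for i in range(5)]
)
con.execute("DELETE FROM tasks WHERE id <= 3")

live = con.execute("SELECT COUNT(*) FROM tasks").fetchone()[0]
high = con.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 'tasks'"
).fetchone()[0]
print(live, high)  # 2 5 -- the sequence remembers every id ever assigned
```

Per the quoted docs, though, gaps can also appear without any deletes, so a divergence is a hint, not proof of lost rows.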
> No ILIKE. PostgreSQL developers reach for WHERE name ILIKE '%term%' instinctively. SQLite throws a syntax error. Use WHERE LOWER(name) LIKE '%term%' instead.
You should not be reaching for ILIKE, functions in predicates, or leading wildcards unless you're aware of the impact those have on indexing.
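Two details worth knowing here, sketched with Python's stdlib sqlite3 (table and index names are made up; indexes on expressions need SQLite 3.9+): SQLite's LIKE is already case-insensitive for ASCII, and an index on the LOWER(name) expression keeps that predicate indexed, though a leading-wildcard '%term%' defeats an index either way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (name TEXT)")
con.execute("INSERT INTO items VALUES ('Widget')")

# SQLite's LIKE is case-insensitive for ASCII by default,
# so ILIKE is often unnecessary to begin with.
row = con.execute("SELECT name FROM items WHERE name LIKE 'widget'").fetchone()
print(row)  # ('Widget',)

# LOWER(name) in a predicate defeats a plain index on name, but an
# index on the expression itself keeps the lookup indexed.
con.execute("CREATE INDEX idx_items_lower ON items (LOWER(name))")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM items WHERE LOWER(name) = 'widget'"
).fetchall()
print(plan)  # plan detail mentions idx_items_lower
```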
> json_extract returns native types. json_extract(data, '$.id') returns an integer if the value was stored as a number. Comparing it to a string silently fails. Always CAST(json_extract(...) AS TEXT) when you need string comparison.
If you're using strings embedded in JSON as predicates, you're going to have a very bad time when you get more than a trivial number of rows in the table.
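The mismatch the article warns about is real and silent; a minimal sketch with Python's stdlib sqlite3, assuming a build with the JSON functions compiled in (the default in recent releases):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (data TEXT)")
con.execute("""INSERT INTO events VALUES ('{"id": 42}')""")

# json_extract returns an INTEGER here, and SQLite never considers
# 42 equal to '42', so the string comparison matches nothing...
miss = con.execute(
    "SELECT COUNT(*) FROM events WHERE json_extract(data, '$.id') = '42'"
).fetchone()[0]

# ...while casting to TEXT (or comparing against a number) matches.
hit = con.execute(
    "SELECT COUNT(*) FROM events "
    "WHERE CAST(json_extract(data, '$.id') AS TEXT) = '42'"
).fetchone()[0]
print(miss, hit)  # 0 1
```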
> The technical fix was embarrassingly simple: stop pushing to main every ten minutes.
Wait, you push straight to main?
> We added a rule — batch related changes, avoid rapid-fire pushes. It's in our CLAUDE.md (the governance file that all our AI agents follow):
> Avoid rapid-fire pushes to main — 11 pushes in 2h caused overlapping Kamal deploys with concurrent SQLite access.
Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
infamia
SQLite has a ".backup" command that you should always use to backup a SQLite DB. You're risking data loss/corruption using "cp" to backup your database as prescribed in the article.
https://sqlite.org/cli.html#special_commands_to_sqlite3_dot_...
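For anyone scripting this rather than using the CLI: the same online backup API behind ".backup" is exposed in Python as Connection.backup(), which copies a consistent snapshot even while other connections are writing (file names here are illustrative):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src_path = os.path.join(tmp, "production.sqlite3")
dst_path = os.path.join(tmp, "backup.sqlite3")

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
src.execute("INSERT INTO orders VALUES (1)")
src.commit()

# Connection.backup() drives SQLite's online backup API -- the same
# mechanism as the CLI's ".backup" -- instead of copying raw bytes.
dst = sqlite3.connect(dst_path)
src.backup(dst)

count = dst.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1
```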
> Would We Choose SQLite Again? Yes. For a single-server deployment with moderate write volume, SQLite eliminates an entire category of infrastructure complexity. No connection pool tuning. No database server upgrades. No replication lag.
These are weird reasons. You can just install Postgres or MySQL locally too. Connection pool tuning certainly isn't anything you have to worry about for a moderate write volume. You don't ever need to upgrade the database if you don't want to, since you're not publicly exposing it. There's obviously no replication lag if you're not replicating, which you wouldn't be with a single server.
The reason you don't usually choose SQLite for the web is future-proofing. If you're totally sure you'll always stay single-server forever, then sure, go for it. But if there's even a tiny chance you'll ever need to expand to multiple web servers, then you'll wish you'd chosen a client-server database from the start. And again, you can run Postgres/MySQL locally, on even the tiniest cheapest VPS, basically just as easily as using SQLite.
cadamsdotcom
The fix appears to be nicely asking the forgetful, unreliable agent to please (very closely, pretty please!) follow the deploy instructions and not deploy too often per hour (and also please never hallucinate or mess up, because statistics tells us an entity with no long-term memory and no incentive to get everything right will do the job right 99.99999999% of the time, which is good enough to run an eshop).
With one simple instruction the system (99.9999% of the time) gains the handy property that “only” two processes end up with the database files open at once.
Thanks for the vibes!
jp0001
I took three weeks off from tech, read books from last century, and travelled Europe. Coming back, reading LLM generated content and code feels like nails on a chalkboard. Taste, it does not have taste.
adobrawy
If the problem is excessive deployments via GitHub Actions, why not use concurrency control on GitHub Actions ( https://docs.github.com/en/actions/how-tos/write-workflows/c... ) instead of relying on agent randomness and the hope that it won't make the same mistake again? Am I missing something?
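For reference, the mechanism from the linked docs is a top-level concurrency key; a minimal sketch (the group name is illustrative, and cancel-in-progress: false lets an in-flight deploy finish while superseded queued runs are dropped):

```yaml
concurrency:
  group: production-deploy
  cancel-in-progress: false
```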
jmull
Redis, four dbs, container orchestration for a site of this modest scope… generated blog posts.
Our AI future is a lot less grand than I expected.
sgbeal
> json_extract returns native types. json_extract(data, '$.id') returns an integer if the value was stored as a number. Comparing it to a string silently fails. Always CAST(json_extract(...) AS TEXT) when you need string comparison.
please consider writing it yourself. quirks in human writing are infinitely more interesting than a next-token-predicted 500-word piece
kristiandupont
SQLite is a rock solid piece of software that offers a great value prop: in-process database. For locally running apps (desktop or mobile), this makes perfect sense.
However, I genuinely don't see the appeal when you are in a client/server environment. Spinning up Postgres via a container is a one-liner and equally simple for tests (via testcontainers or pglite). The "simple" type system of SQLite feels like nothing but a limitation to me.
mattrighetti
I see tons of articles like this, and I have no doubt sqlite proved to be a great piece of software in production environments, but what I rarely find discussed is that we lack tools that enable you to access and _maintain_ SQLite databases.
It's so convenient to just open DataGrip and have a look at all my PostgreSQL instances; that's not possible with sqlite AFAIK (not even SSH tunnelling?). If something goes wrong, you have to SSH into the machine and use raw SQL. I know there are some cool front-end interfaces to inspect the db, but they require more setup than you'd expect.
I think that most people give up on sqlite for this reason and not because of its performance.
politelemon
> embarrassingly simple
This is becoming the new overused LLM go-to expression for describing basic concepts.
jszymborski
The LLM prose is a grating read. I promise, you'd do a better job yourself.
heikkilevanto
I use SQLite for a small hobby project, fine for that. Wanted to read the article to see why I should not, but it attacked me with a "subscribe" popup, so I stopped there. The comments here seem to be based on daydreaming about scaling to a lot of users who need 24/7 uptime, which is not always the case.
PunchyHamster
> Yes. For a single-server deployment with moderate write volume, SQLite eliminates an entire category of infrastructure complexity. No connection pool tuning. No database server upgrades. No replication lag.
None of these is needed if you run SQLite-sized workloads...
I like SQLite, but right tools for right jobs... though the data loss is most likely a code bug
jmull
I don't know if it's just me, but this whole post seems to have time traveled forward from about 3-4 days ago.
It's not just a repost. The thread includes a comment I made at the time which now shows as "1 hour ago".
Makes me wonder if it's an honest bug or someone has hacked the Hacker News front page to sell their t-shirts, mugs, and AI starter kits.
show comments
rienbdj
A well-designed system shouldn't drop orders?
If you perform at-least-once processing and then use Stripe idempotency keys, you avoid such issues?
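A local analogue of that idea (not Stripe's actual API), sketched with Python's stdlib sqlite3: record the idempotency key in the same transaction as the side effect, so a redelivered event becomes a no-op:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE processed (key TEXT PRIMARY KEY)")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, key TEXT)"
)

def handle(event_key: str) -> bool:
    """Process an event; returns False if this key was already handled."""
    with con:  # one transaction: key + side effect commit (or roll back) together
        if con.execute(
            "SELECT 1 FROM processed WHERE key = ?", (event_key,)
        ).fetchone():
            return False  # duplicate delivery -> no-op
        con.execute("INSERT INTO processed VALUES (?)", (event_key,))
        con.execute("INSERT INTO orders (key) VALUES (?)", (event_key,))
        return True

print(handle("evt_1"), handle("evt_1"))  # True False
orders = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(orders)  # 1
```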
nop_slide
I still haven't figured out a good way to do blue/green sqlite deploys on fly.io. Is this just a limitation of using sqlite or of using Fly? I've been very happy with sqlite otherwise, rather unsure how to do a cutover to a new instance.
Anyone have some docs on how to cutover gracefully with sqlite on other providers?
faangguyindia
I've a busy app; I just deploy to canary, use a load balancer to move 5% of traffic to it, observe how it reacts, and then roll out the canary changes to all.
How hard and complex is it to roll out Postgres?
siruwastaken
Am I the only one finding this article highly suspect? It seems like the errors made are so basic, i.e. using the wrong SQL dialect for the db system in use, and their orders were apparently only at 17?
leosanchez
> Backups are cp production.sqlite3 backup.sqlite3
I use gobackup[0] as another container in my compose.yml file, which can back up to multiple locations.
[0]: https://gobackup.github.io/
NIH syndrome, almost mental health issues.
Could have used Firebird Embedded: also a simple deployment like SQLite, but with better concurrency and a more complete system, and a tad faster too.
Just use a 4 GB server and install Postgres.
Slopcoded article for a Slopcoded website
If Nico sends him an email, the AI CEO should take his offer.