> I’ve largely covered sqlc’s objective benefits and features, but more subjectively, it just feels good and fast to work with. Like Go itself, the tool’s working for you instead of against you, and giving you an easy way to get work done without wrestling with the computer all day.
I've been meaning to write a blog post about sqlc myself, and when I get to it, I'll probably quote this line. sqlc is that rare tool that just "feels right". I think that feeling comes from a combination of things. It's fast. It uses an idiomatic Go approach (code generation, instead of e.g. reflection) to solve the problem at hand, so it feels at home in the Go ecosystem. As noted in the article, it allows you to check that your SQL is valid at compile-time, saving you from discovering errors at runtime, and eliminating the need for certain types of tests.
But perhaps most of all, sqlc lets you just write SQL. After using sqlc, using a more conventional ORM almost seemed like a crazy proposition. Why would someone author an ORM, painstakingly creating Go functions that just map to existing SQL features? Such a project is practically destined to be perpetually incomplete, and if one day it is no longer maintained, migration will be painful. And why add to your code a dependency on such a project, when you could use a tool like sqlc that is so drastically lighter, and brings nearly all the benefits?
sqlc embraces the idea that the right tool for talking to a relational database is the one we've had all along, the one which every engineer already knows: SQL. I look forward to using it in more projects.
sa46
I agree whole-heartedly that writing SQL feels right. Broadly speaking, you can take the following approaches to mapping database queries to Go code:
- Write SQL queries, parse the SQL, generate Go from the queries (sqlc, pggen).
- Write SQL schema files, parse the SQL schema, generate active records based on the tables (gorm).
- Write Go structs, generate SQL schema from the structs, and use a custom query DSL (proteus).
- Write custom query language (YAML or other), generate SQL schema, queries, and Go query interface (xo).
- Skip generated code and use a non-type-safe query builder (squirrel, goqu).
I prefer writing SQL queries so that app logic doesn't depend on the database table structure.
I started off with sqlc but ran into limitations with more complex queries. It's quite difficult to infer what a SQL query will output even with a proper parse tree. sqlc also didn't work with generated code.
I wrote pggen with the idea that you can just execute the query and have Postgres tell you what the output types and names will be. Here's the original design doc [1] that outlines the motivations. By comparison, sqlc starts from the parse tree, and has the complex task of computing the control flow graph for nullability and type outputs.
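[1]: https://docs.google.com/document/d/1NvVKD6cyXvJLWUfqFYad76CW...

Disclaimer: author of pggen (https://github.com/jschaf/pggen), inspired by sqlc

To make the "just ask Postgres" idea concrete, here is a minimal sketch of the underlying mechanism (my illustration, not pggen's actual code), assuming pgx v4: preparing a statement makes the server describe the parameter and result types without executing anything.

package main

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost:5432/app") // hypothetical DSN
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Prepare sends a Describe message; Postgres replies with the OIDs
	// of every parameter and output column.
	sd, err := conn.Prepare(ctx, "list_authors",
		"SELECT id, name FROM authors WHERE nationality = $1")
	if err != nil {
		panic(err)
	}
	for _, f := range sd.Fields {
		fmt.Printf("output column %q has type OID %d\n", string(f.Name), f.DataTypeOID)
	}
	fmt.Println("parameter OIDs:", sd.ParamOIDs)
}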
Author of sqlc here. Just wanted to say thanks to everyone in this thread. It's been a really fun project to work on the last two years. Excited to get to work on adding support for more databases and programming languages.
robmccoll
I've used https://github.com/xo/xo, extended it with some custom functions for templating, extended the templates themselves, and can now generate CRUD for anything in the database, functions for common select queries based on the indices that exist in the database, field filtering and scanning, updates for subsets of fields including some atomic operations, etc. The sky is the limit honestly. It has allowed me to start with something approximating a statically generated ORM and extend it with any features I want as time goes on. I also write .extra.go files alongside the generated .xo.go files to extend the generated structs with custom logic and methods to convert data into response formats.
I like the approach of starting with the database schema and generating code to reflect that. I define my schema in sql files and handle database migrations using https://github.com/golang-migrate/migrate.
If you take this approach, you can mostly avoid exposing details about the SQL driver being used, and since the driver is mostly used by a few templates, swapping drivers doesn't take much effort.
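As an aside - for anyone working with databases in Go, check out https://pkg.go.dev/modernc.org/sqlite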
It's a drop-in replacement for SQLite in pure Go - no CGO or anything required for compilation, while still implementing everything SQLite offers.
Insert speed is a bit lacking (roughly 6x slower in my experience than the CGO sqlite3 package), but it's good enough for me.
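A minimal sketch of wiring it up (file name made up): the driver registers itself under the name "sqlite", so it slots straight into database/sql.

package main

import (
	"database/sql"
	"fmt"

	_ "modernc.org/sqlite" // pure-Go driver, no CGO
)

func main() {
	db, err := sql.Open("sqlite", "file:app.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT sqlite_version()").Scan(&version); err != nil {
		panic(err)
	}
	fmt.Println("sqlite version:", version)
}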
grantwu
I was really really excited when I saw the title because I've been having a lot of difficulties with other Go SQL libraries, but the caveats section gives me pause.
Needing to use arrays for the IN use case (see https://github.com/kyleconroy/sqlc/issues/216) and the bulk insert case feel like large divergences from what "idiomatic SQL" looks like. It means that you have to adjust how you write your queries. And that can be intimidating for new developers.
The conditional insert case also just doesn't look particularly elegant and the SQL query is pretty large.
sqlc also just doesn't look like it could help with very dynamic queries I need to generate - I work on a team that owns a little domain-specific search engine. The conditional approach could in theory work here, but it's not good for the query planner: https://use-the-index-luke.com/sql/where-clause/obfuscation/...
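For reference, the workaround in that issue is to pass a Go slice as a Postgres array and match with = ANY. A rough sketch of the shape (shown with pgx directly; table and column names assumed):

package main

import (
	"context"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://localhost:5432/app") // hypothetical DSN
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	// Instead of `WHERE id IN (...)`, pass one array parameter:
	rows, err := conn.Query(ctx,
		"SELECT id, name FROM authors WHERE id = ANY($1)",
		[]int64{1, 2, 3}) // pgx encodes the Go slice as a Postgres array
	if err != nil {
		panic(err)
	}
	defer rows.Close()
}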
sdevonoes
I think I'm missing something, but I don't get sqlc. Let's say I want to "get a list of authors", so in sqlc I would write:
-- name: ListAuthors :many
SELECT * FROM authors
ORDER BY name;
and in Go I can then say `authors, err := queries.ListAuthors(ctx)`. This is cool. Now, if I want to "get a list of American authors" I would write:
-- name: ListAuthorsByNationality :many
SELECT * FROM authors
WHERE nationality = $1;
and in Go I can then say `americanAuthors, err := queries.ListAuthorsByNationality(ctx, "American")`. Now, if I want to "get a list of American authors that are dead", I would have to write:
-- name: ListDeadAuthorsByNationality :many
SELECT * FROM authors
WHERE nationality = $1 AND dead = 1;
... I like the idea of getting Go structs that represent table rows, but I don't want to keep a record of every query variation I may need to execute in Go code. I want to compose the query in Go (something like the sketch below), without having to write manually the N potential SQL queries that such a call may represent.
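A hypothetical illustration of that kind of composition (hand-written plain database/sql, not anything sqlc generates): one function assembles the WHERE clause from optional filters.

package db

import (
	"context"
	"database/sql"
	"fmt"
)

// listAuthors builds one query from optional filters instead of
// maintaining N named query variations.
func listAuthors(ctx context.Context, db *sql.DB, nationality string, dead *bool) (*sql.Rows, error) {
	query := "SELECT * FROM authors WHERE 1=1"
	var args []interface{}
	if nationality != "" {
		args = append(args, nationality)
		query += fmt.Sprintf(" AND nationality = $%d", len(args))
	}
	if dead != nil {
		args = append(args, *dead)
		query += fmt.Sprintf(" AND dead = $%d", len(args))
	}
	return db.QueryContext(ctx, query, args...)
}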
nicoburns
I'm still waiting for a compile-to-SQL language in the vein of CoffeeScript or TypeScript. It seems like there is so much that could be improved with some very simple syntax sugar: variables, expression fragments, and even real basics like trailing commas.
jonbodner
If you are looking for a way to map SQL queries to type safe Go functions, take a look at my library Proteus: https://github.com/jonbodner/proteus
Proteus generates functions at runtime, avoiding code generation. Performance is identical to writing SQL mapping code yourself. I spoke about its implementation at GopherCon 2017: https://www.youtube.com/watch?v=hz6d7rzqJ6Q
Andys
sqlc is a great code generator that seems to work miracles.
It uses the official postgres parser to know all the types of your tables and queries, and can generate perfect Go structs from this.
It even knows your table and field types just from reading your migrations, tracking changes perfectly, no need to even pg_dump a schema definition.
I also found it works fine with cockroachdb.
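For illustration, a minimal v1 sqlc.yaml along those lines (paths made up): pointing schema at the migrations directory is enough for sqlc to derive the table types from the DDL it finds there.

version: "1"
packages:
  - name: "db"
    path: "internal/db"
    queries: "./queries/"
    schema: "./migrations/"
    engine: "postgresql"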
ramenmeal
We just use something like github.com/Masterminds/squirrel in combination with something like github.com/fatih/structs (it's archived, but it's easy code to write) to help with sql query generation, and use github.com/jmoiron/sqlx for easier scanning. I guess it's a little trickier when trying to use postgres specific commands, but we haven't run into many problems.
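A rough sketch of that combination (with made-up table names, using lib/pq): squirrel builds the SQL string and argument list, sqlx scans the rows into structs.

package main

import (
	"fmt"

	sq "github.com/Masterminds/squirrel"
	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

type Author struct {
	ID   int64  `db:"id"`
	Name string `db:"name"`
}

func main() {
	db := sqlx.MustConnect("postgres", "postgres://localhost/app?sslmode=disable")

	// squirrel builds the SQL string and argument list...
	query, args, err := sq.Select("id", "name").
		From("authors").
		Where(sq.Eq{"nationality": "American"}).
		PlaceholderFormat(sq.Dollar). // $1-style placeholders for Postgres
		ToSql()
	if err != nil {
		panic(err)
	}

	// ...and sqlx scans the result rows into structs.
	var authors []Author
	if err := db.Select(&authors, query, args...); err != nil {
		panic(err)
	}
	fmt.Println(authors)
}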
draebek
This looks really cool to me, because I love to write SQL.
Except for that `UPDATE` statement. That... is a problem.
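Looks like there is an open discussion about this on the project: https://github.com/kyleconroy/sqlc/discussions/1149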
I attempted to make something similar to this except the opposite direction at a previous job. It was called Pronto: https://github.com/CaperAi/pronto/
It allowed us to store and query Protos in MongoDB. It wasn't perfect (lots of issues), but the idea was that rather than specifying custom models for all of our DB logic in our Java code, we could write a proto, and code could import that proto and automatically read/write it into the database. This made building tooling to debug issues very easy and made it very simple to hide a DB behind a gRPC API.
The tool automated the boring stuff. I wish I could have extended this to have you define a service in a .proto and "compile" that into an ORM DAO-like thing automatically so you never need to worry about manually wiring that stuff ever again.
cedricvanrompay
I was expecting the article to contain a note about SQLBoiler (https://github.com/volatiletech/sqlboiler) and why they didn't use it, but it doesn't. So I was expecting SQLBoiler to be heavily mentioned in the comments, but it's not the case.
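If you want to see a (slightly heated) debate about `sqlc` versus SQLBoiler with their respective creators: https://www.reddit.com/r/golang/comments/e9bvrt/sqlc_compile...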
Note that SQLBoiler does not seem to be compatible with `pgx`.
[edit: grammar]
hankchinaski
I have been working with ORMs and plain SQL for the past 10 years in a bunch of languages and libraries (PHP, Java, JavaScript, Go). The issue I have with ORMs and other libraries that supposedly reduce the work for you is that they're black magic. You will find yourself one day having to dig into the source code of the library to tackle nasty bugs or add new features. It's exhausting. When I started using Go I mostly used plain SQL queries. I took on the manual endeavour to map and hydrate my objects. Sure, it's more manual work, but abstractions have a cost too. That bill might have to be paid one day, one way or the other. Personally, I am never looking back. But every one of us has a different use case, therefore YMMV.
bjt
This looks better than typical ORMs, but still not giving me what I want.
I want query objects to be composable, and mutable. That lets you do things like this: http://btubbs.com/postgres-search-with-facets-and-location-a.... sqlc would force you to write a separate query for each possible permutation of search features that the user opts to use.
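I like the "query builder" pattern you get from Goqu. https://github.com/doug-martin/goqu

So they are using a full-blown relational database to use it like they are reading files on a share. Amazing indeed.

From the docs and online comments, SQLC doesn't support join. I am amazed by the number of comments and that nobody points this out.

It's not the main thrust of the article, but this snippet of code stuck out at me:

> err := conn.QueryRow(ctx, `SELECT ` + scanTeamFields + ` ...)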
Why are we, as an industry, still okay with constructing SQL queries with string concatenation?
Sytten
As an alternative I suggest people to look at https://github.com/go-jet/jet. I had a good experience working with it and the author is quite responsive.
It really feels like writing SQL but you are writing typesafe golang which I really enjoy doing.
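We're using a very similar lib for typescript: https://github.com/adelsz/pgtyped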
Would love to hear if any others of comparable or better quality exist for js/ts
Something1234
His codeblocks have broken horizontal scroll on mobile.
Other than that I like it a lot. I built some codegen stuff in the past for test automation and it's really quite nice because it reduces a lot of user errors.
JulianMorrison
Looks very similar to the annoyingly named "mybatis". I approve of the principle: SQL should be separated from code, because SQL needs to be written, or at least tuned, by someone with database expertise. There are often deeply subtle decisions on how to phrase things that affect which indexes get used and so on, and can make orders of magnitude difference to how quickly a query executes.
This is also why ORMs that write the query for you are unhelpful. You're stuck trying to control how a machine makes SQL.
stayfrosty420
>ORMs also have the problem of being an impedance mismatch compared to the raw SQL most people are used to, meaning you’ve got the reference documentation open all day looking up how to accomplish things when the equivalent SQL would’ve been automatic. Easier queries are pretty straightforward, but imagine if you want to add an upsert or a CTE.
How does this make sense? Most ORMs will give you a way to execute raw SQL which you marshal into a struct the way you would with a lower-level library.
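For example, GORM's raw-SQL escape hatch (a sketch; the model and DSN are made up):

package main

import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type Author struct {
	ID   int64
	Name string
}

func main() {
	db, err := gorm.Open(postgres.Open("postgres://localhost/app"), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	// Raw SQL in, structs out - no query-builder DSL involved.
	var authors []Author
	if err := db.Raw("SELECT id, name FROM authors WHERE nationality = ?", "American").Scan(&authors).Error; err != nil {
		panic(err)
	}
}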
sam0x17
> However, without generics, Go’s type system can only offer so much
I was reading the whole article waiting to see this line, and the article did not disappoint. This is still the main reason I will stick with Rust or Crystal (depending on the use-case) and avoid Go if I can for the foreseeable future. Generics are just a must these days for non-trivial software projects. It's a shame too because Go has so much promise in other respects.
jakoblorz
For a full featured "go generate(d)" ORM try https://entgo.io/
Seems rather similar, with the main difference being that you define your schema in a specific Go package, from which the ORM is generated. The nice thing is that you can import this package later to reuse things like default values.
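A minimal sketch of what such a schema package looks like (the entity and fields are made up):

package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User defines the schema in plain Go; `go generate` turns it into the ORM.
type User struct {
	ent.Schema
}

// Fields declares the columns, including reusable defaults.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name").Default("unknown"),
		field.Int("age").Positive(),
	}
}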
crescentfresh
> A big downside of vanilla database/sql or pgx is that SQL queries are strings
What's wrong with strings? The argument in the article is that they cannot be compile-time checked, but I'm confused as to the solution to that problem ("you need to write exhaustive test coverage to verify them"). Is this saying that if they weren't strings you wouldn't need test coverage?
KhalPanda
I stumbled across this library a few months ago and also really liked the approach. Unfortunately I had to drop it temporarily due to this issue (https://github.com/kyleconroy/sqlc/pull/983)... which I now see is solved. Taking another look. :)
jakearmitage
How does it deal with mapping relationships? For example, a Many-to-Many between Posts and Tags, or a Many-to-One like Posts and Comments?
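As far as I know, sqlc has no relationship-mapping layer: you write the join yourself and each :many query gets its own flat row struct. A sketch (schema assumed):

-- name: ListPostTags :many
SELECT posts.id, posts.title, tags.name AS tag_name
FROM posts
JOIN post_tags ON post_tags.post_id = posts.id
JOIN tags ON tags.id = post_tags.tag_id;

sqlc should then generate a ListPostTagsRow struct with ID, Title, and TagName fields, one row per post/tag pair; grouping them into nested objects is left to your code.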
scrubs
Talk about a JIT, on-target article ... I'll play with this at the office tomorrow. I've got plans for it.
ggktk
Unfortunately this tool only does static analysis on your SQL. I prefer tools like sqlx (the Rust one), which gets types by running your queries against an actual database. It feels more bulletproof and futureproof than the approach that sqlc is taking.
earthboundkid
I wouldn’t say I’m all in, but this is what I use at work, and I think it’s better than the existing alternatives I could be using instead.
ethanpailes
I'm a big fan of the database first code generator approach to talking to an SQL database, so much so that I wrote pggen[1] (not to be confused with pggen[2], as far as I can tell a sqlc fork, which I just recently learned about).
I'm a really big partisan of this approach, but I think I'd like to play the devil's advocate here and lay out some of the weaknesses of both a database first approach in general and sqlc in particular.
All database first approaches struggle with SQL metaprogramming when compared with a query builder library or an ORM. For the most part, this isn't an issue. Just writing SQL and using parameters correctly can get you very far, but there are a few times when you really need it. In particular, faceted search and pagination are both most naturally expressed via runtime metaprogramming of the SQL queries that you want to execute.
Another drawback is poor support from the database for this kind of approach. I only really know how postgres does here, and I'm not sure how well other databases expose their queries. When writing one of these tools you have to resort to tricks like creating temporary views in order to infer the argument and return types of a query. This is mostly opaque to the user, but results in weird stuff bubbling up to the API, like the tool not being able to infer nullability of arguments and return values well, and not being able to support stuff like RETURNING in statements. sqlc is pretty brilliant because it works around this by reimplementing the whole parser and type checker for postgres in go, which is awesome, but also a lot of work to maintain and potentially subtly wrong.
A minor drawback is that you have to retrain your users to write `x = ANY($1)` instead of `x IN ?`. Most ORMs and query builders seem to lean on their metaprogramming abilities to auto-convert array arguments in the host language into tuples. This is terrible and makes it really annoying when you want to actually pass an array into a query with an ORM/query builder, but it's the convention that everyone is used to.
There are some other issues that most of these tools seem to get wrong, but are not impossible in principle to deal with for a database first code generator. The biggest one is correct handling of migrations. Most of these tools, sqlc included, spit out the straight-line "obvious" go code that most people would write to scan some data out of a db. They make a struct, then pass each of the fields into Scan by reference to get filled in. This works great until you have a query like `SELECT * FROM foos WHERE field = $1` and then run `ALTER TABLE foos ADD COLUMN new_field text`. Now the deployed server is broken and you need to redeploy really fast as soon as you've run migrations. opendoor/pggen handles this, but I'm not aware of other database first code generators that do (though I could definitely have missed one).
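Also the article is missing a few more tools in this space. https://github.com/xo/xo. https://github.com/gnormal/gnorm.

[1]: https://github.com/opendoor/pggen
[2]: https://github.com/jschaf/pggen

To illustrate that failure mode, the generated scanning code is typically shaped like this (a simplified sketch, not sqlc's actual output):

package db

import (
	"context"
	"database/sql"
)

type Foo struct {
	ID    int64
	Field string
}

// listFoos mimics the "obvious" generated code: positional Scan over SELECT *.
func listFoos(ctx context.Context, db *sql.DB, val string) ([]Foo, error) {
	rows, err := db.QueryContext(ctx, "SELECT * FROM foos WHERE field = $1", val)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []Foo
	for rows.Next() {
		var f Foo
		// After ALTER TABLE foos ADD COLUMN new_field text, the row has
		// three columns but only two destinations: Scan returns an error.
		if err := rows.Scan(&f.ID, &f.Field); err != nil {
			return nil, err
		}
		out = append(out, f)
	}
	return out, rows.Err()
}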
sqlc looks very interesting and compelling. A similar library I like but haven't had the chance to really use is goyesql: https://github.com/knadh/goyesql
It also allows just writing SQL in a file, reminds me a bit of JDBI in Java.
didip
Wow, thanks for mentioning sqlc (and pggen below). The ergonomics are exactly what I have been looking for.
I've dreamt of writing such libraries but alas, never found the time to actually do it. But, now I can just use one of them!
busymom0
Does anyone have a similar recommendation for Rust?
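Java is awful and slow.
Invent Go.
Waiting for generics....
5 years later
Go looks like Java. Back to square one :)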