This is the wrong way to do it. As software architects, you need to learn the appropriate use of algorithms and AI. Using AI for building everything is not just a waste of tokens, it is also an exercise in futility.
Here is how I solved this problem:
1. There is already a knowledge base of almost all APIs (the ones that are useful to the average Joe, anyway) in either Swagger.json or Postman.json format. Which format you prefer is totally up to you.
2. Write a generator (I use Elixir) to infer which format the spec from step 1 uses and generate your API modules using a code generator. There are plenty, or you can even write your own using a simple File.write!
3. In the rare occurrence that you come across a shitty API with only scattered documentation across outdated static pages online, only then use an LLM + browser automation to write it into the format listed in step 1 (Swagger.json or Postman.json).
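Steps 1–2 above can be sketched in a few lines of Elixir. This is a hypothetical `SpecGen` module (the module name, function names, and naming scheme are mine, not a real library): it infers whether a decoded spec is OpenAPI/Swagger or a Postman collection, then emits one thin client function per path/verb pair and writes the module out with `File.write!`:

```elixir
defmodule SpecGen do
  # Step 2a: infer the format. OpenAPI/Swagger documents carry a top-level
  # "openapi" or "swagger" key; Postman collections carry an "item" array.
  def detect(%{"openapi" => _}), do: :openapi
  def detect(%{"swagger" => _}), do: :openapi
  def detect(%{"item" => _}), do: :postman
  def detect(_), do: :unknown

  # Step 2b: emit one function per path/verb pair from an OpenAPI "paths"
  # map, then write the generated module to disk with File.write!.
  # The generated functions delegate to an assumed request/3 helper.
  def generate(%{"paths" => paths}, module_name) do
    funs =
      for {path, ops} <- paths, {verb, _op} <- ops do
        fun = path |> String.replace(~r/[^A-Za-z0-9]+/, "_") |> String.trim("_")
        "  def #{verb}_#{fun}(client), do: request(client, :#{verb}, \"#{path}\")\n"
      end

    source = "defmodule #{module_name} do\n" <> Enum.join(funs) <> "end\n"
    File.write!("#{Macro.underscore(module_name)}.ex", source)
    source
  end
end
```

A real generator would also thread path parameters, query strings, and auth through `request/3`, but the skeleton is genuinely this small once the spec is machine-readable.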
Throwing an LLM at everything is just inefficient, lazy work.
groby_b
Pardon me if I misread, but wouldn't that be better served by a ready-made library (with, if you must AI, some futzing to account for call signature)?
What is the value add of having the AI rebuild code over and over, individually for each project using it?
mellosouls
Nango claims to be fully open source, but the documentation seems to imply the self-hosted version is a small subset:
https://nango.dev/docs/guides/platform/free-self-hosting/con...
Ofc that may well be my misreading but it seems important in the context of the claim and the analysis using OpenCode.
Perhaps they could clarify and/or revisit the docs.
yojo
The TL;DR does not seem to match the rest of the article.
They claim the agents reliably generated a week’s worth of dev work for $20 in tokens, then go on to list all the failure modes and debugging they had to do to get it to work, and conclude with “Agents are not ready to autonomously ship every integration end-to-end.”
Generally a good write-up that matches my experience (experts can build systems that guide agents to do useful work, with review), but the first section is pretty misleading.
cpursley
If you're using Elixir (or don't mind running a separate Elixir service), we've built what is effectively a clone of the OAuth part of Nango (formerly Pizzly). Drop it into any Elixir project and get full OAuth management out of the box, and it's compatible with all of the Nango provider strategies:
https://github.com/agoodway/tango
A lot of these smell like a skill issue with the model. Many are complete non-issues when using Claude Opus 4.5+.
The idea of assigning a code-owner agent per directory is really interesting. A2A (read: message passing and self-updating AGENTS.md files) might really shine there in some way.