I like the looks of this, and the idea behind it, but TypeScript via Deno is an audited language with a good security model, a good type system, and sandboxing in an extremely well-hardened runtime. It's also a language that LLMs are exceptionally well-trained on. What does Mog offer that's meaningfully superior in an agent context?
I see that Deno requires a subprocess, which introduces some overhead, and I may be naive to think so, but that doesn't seem like it would matter much when agent round-trip and inference time is far, far longer than any inefficiency a subprocess would introduce. (edit: I realize that in some cases the round-trip time may be negligible if the agent is local, but inference is still very slow)
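For a rough sense of scale, here's a hedged Python sketch that times spawning the Python interpreter itself as a stand-in for a Deno subprocess (Deno may not be installed, and its startup cost will differ, but the order of magnitude is the point):

```python
import subprocess
import sys
import time

# Time the spawn of a trivial subprocess. Even a heavyweight
# interpreter starts in tens of milliseconds -- orders of magnitude
# below typical LLM inference latency (often seconds per turn).
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"subprocess startup: {elapsed_ms:.1f} ms")
```

On most machines this prints a double-digit millisecond figure, which supports the claim that subprocess overhead is lost in the noise next to inference time.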
I admittedly do prefer the syntax here, but I'm asking these questions more from a point of pragmatism than idealism. I already use Deno because it's convenient, practical, and efficient rather than ideal.
mhink
One nitpick I noticed:
> String Slicing
> You can extract a substring using bracket syntax with a range: s[start:end]. Both start and end are byte offsets. The slice includes start and excludes end.
Given that all strings are UTF-8, I note that there's not a great way to iterate over strings by _code point_. Using byte offsets is certainly more performant, but I could see this being a common request if you're expecting a lot of string manipulation to happen in these programs.
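To illustrate the hazard with Python bytes as a stand-in (I don't have Mog examples handy, so this just shows what byte offsets do to UTF-8):

```python
# 'é' is 2 bytes in UTF-8, so byte offsets and code point
# offsets diverge as soon as non-ASCII text shows up.
s = "café"
b = s.encode("utf-8")      # b'caf\xc3\xa9': 5 bytes for 4 code points

assert b[0:3].decode("utf-8") == "caf"   # ASCII prefix: byte offsets are fine

# Slicing at byte 4 lands mid-character and yields invalid UTF-8:
try:
    b[0:4].decode("utf-8")
except UnicodeDecodeError:
    print("byte offset 4 split the 'é' in half")

# Iterating by code point sidesteps the problem entirely:
assert [ch for ch in s] == ["c", "a", "f", "é"]
```

A `chars()`-style code point iterator would let generated code avoid ever computing a mid-character offset.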
Other than that, this looks pretty cool. Unlike other commenters, I kinda like the lack of operator precedence. I wouldn't be surprised if it turns out to be not a huge problem, since LLMs generating code with this language would be pattern-matching on existing code, which will always have explicit parentheses.
Retr0id
> Compiled to native code for low-latency plugin execution – no interpreter overhead, no JIT, no process startup cost.
If you're running the compiled code in-process, how is that not JIT? And isn't that higher-latency than interpreting? Tiered-JIT (a la V8) solves exactly this problem.
Edit: Although the example programs show traditional AOT compile/execute steps, so "no process startup cost" is presumably a lie?
mkl
> it's intended to minimize foot-guns to lower the error rate when generating Mog code. This is why Mog has no operator precedence: non-associative operations have to use parentheses, e.g. (a + b) * c.
Almost all the code LLMs have been trained on uses operator precedence, so no operator precedence seems like a massive foot-gun.
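For contrast, a quick Python sketch of the two readings at stake (Python uses standard precedence; Mog, per the quote above, reportedly rejects the unparenthesized mix outright):

```python
a, b, c = 2, 3, 4

# Standard precedence, which dominates LLM training data:
assert a + b * c == 14        # parsed as a + (b * c)

# The fully parenthesized forms make the intent explicit either way:
assert a + (b * c) == 14
assert (a + b) * c == 20

# The concern: a model that habitually emits `a + b * c` must now
# remember to parenthesize, or the code won't compile at all.
```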
rapind
For me this is Gleam. Fairly small lang, type safe, compiled, NO NULLS (very important IMO), good FFI, code is readable, and... you get the BEAM!
Agents can pretty much iterate on their own.
The most important thing for me, at least for now (and IMO the foreseeable future), is being able to review and read the output code clearly. I am the bottleneck in the agent -> human loop, so optimizing for that by producing clear and readable code is a massive priority. Gleam eliminates a ton of errors automatically, so my reviews focus mostly on business logic (though I also need to explicitly call out redundant code often enough).
I could see an argument for full on Erlang too, but I like the static typing.
lukasb
On a quick scan, what it's missing is data tainting. We've had that tech for a while and it's perfectly suited to the age of prompt injection.
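The core idea can be sketched in a few lines of Python (hypothetical names; real taint tracking would live in the language runtime and propagate through string operations):

```python
# Minimal taint-tracking sketch: values derived from untrusted input
# carry a taint marker, and sensitive sinks refuse tainted data.

class Tainted(str):
    """A string marked as untrusted (e.g. text returned by an LLM)."""

class TaintError(Exception):
    pass

def sanitize(value: str) -> str:
    # Placeholder policy; a real sanitizer is sink-specific.
    # str.replace on a subclass returns a plain (untainted) str.
    return value.replace(";", "")

def run_shell(cmd: str) -> None:
    """A sensitive sink: reject anything still carrying taint."""
    if isinstance(cmd, Tainted):
        raise TaintError("untrusted data reached a shell sink")
    print(f"would run: {cmd}")

user_input = Tainted("ls; rm -rf /")   # e.g. prompt-injected content
try:
    run_shell(user_input)
except TaintError:
    run_shell(sanitize(user_input))    # only the sanitized copy passes
```

A language aimed at agents could enforce this at every I/O boundary instead of relying on discipline, which is exactly what prompt injection defeats.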
saithound
> When asking people to write code in a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.
But LLMs very much do care. They are measurably worse at writing code in languages with non-standard or non-existent operator precedence. This is not surprising given how they learn programming.
roxolotl
I’m still waiting for someone to build a good lisp harness. Stick an agent in a lisp repl and they can change literally anything they want easily.
zelphirkalt
Argument 1 ("Syntax Only an AI Could Love") sounds dubious. I am probably not alone in being paranoid enough to always add those parentheses, even when I am 90% sure of the operator precedence. In lispy languages the ambiguity never even arises; I put in many parentheses and I like it that way, because it enables great structural editing of code. The absence of implicit type coercion has also long been part of normal for-human programming languages (see SML/NJ for example).
> There's also less support in Mog for generics, and there's absolutely no support for metaprogramming, macros, or syntactic abstraction.
OK that does immediately make it boring, I give them that much.
phren0logy
I am disappointed at the amount of negativity here. HN generally loves an experimental domain-specific language, no matter how janky. To be clear, I don't know if this is janky, but the knee-jerk anti-AI sentiment is not intellectually stimulating.
Garlef
Awesome!
A few questions:
- Is there a list of host languages?
- Can it live in the browser? (= is JS one of the host languages?)
dude250711
A thing for the current thing.
Would have been a blockchain language 10 years ago.
JosephjackJR
Ran into the same thing. SQLite works until you need cold start recovery or WAL contention with concurrent agents. Built a dedicated memory layer for agent workloads - happy to share: https://github.com/RYJOX-Technologies/Synrix-Memory-Engine
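Before reaching for a separate layer, it's worth noting the standard sqlite3 mitigations, sketched here in Python (the table and values are made up for illustration): WAL mode lets readers proceed during writes, and `busy_timeout` makes writers queue briefly instead of failing with "database is locked".

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "agents.db")

conn = sqlite3.connect(path)
# Switch to write-ahead logging; the pragma reports the active mode.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
assert mode == "wal"
# Wait up to 5 s on a locked database instead of erroring immediately.
conn.execute("PRAGMA busy_timeout=5000")

conn.execute("CREATE TABLE memory (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO memory VALUES ('goal', 'ship it')")
conn.commit()

# A second connection (standing in for a concurrent agent) reads the
# committed row; in WAL mode readers don't block writers.
other = sqlite3.connect(path)
print(other.execute("SELECT v FROM memory").fetchone()[0])
```

That said, WAL still serializes writers, so heavy multi-writer agent workloads may genuinely outgrow it.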
libre-man
Don't know if others have this issue, but for me I can't scroll on Firefox.
OSaMaBiNLoGiN
Doesn't need to be its own language.
FireInsight
I looked at the brainrotty name[1] and instantly assumed AI slop, but I'm glad the website was upfront about that.
[1] https://www.merriam-webster.com/slang/mog
is this project like vibe coded slop from a zoomer or nah?
Wow, we've brought mogging to the programming world. Nothing is safe from looksmaxxing it seems.
How is Mog different than Mojo?
It's disheartening to see these crop up after spending 25 years learning, through trial and error, how to write programming languages.
Please think twice before releasing these; if you're going to do it, come up with at least one original idea that nobody else has done before.
Why didn't you just call it "bad rust copy"?