From the title alone I thought it would be another FORTH interpreter implementation article, but I was happy to see someone actually using it for something besides proving their interpreter with a Fibonacci calculation.
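(For context, the Fibonacci demo being referred to is usually a couple of lines; a sketch in standard Forth, the word name is mine:)

    \ ( n -- fib[n] )  iterative Fibonacci, the classic "my interpreter works" demo
    : fib  0 1 rot 0 ?do  over + swap  loop  drop ;

    10 fib .   \ prints 55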
codr7
My own baby started out as a Forth dialect, but now sits somewhere between Logo and Common Lisp on the complexity scale. Forth is a good starting point imo, you don't waste any time on non-essentials.

https://gitlab.com/codr7/shik
The observation that concatenative programming languages have nearly ideal properties for efficient universal learning on silicon is very old. You can show that the resource footprint required for these algorithms to effectively learn a programming language is much lower than for other common types of programming models. There is a natural mechanical sympathy with the theory around universal learning. It was my main motivation to learn concatenative languages in the 1990s.
This doesn't mean you should write AI in these languages, just that it is unusually cheap and easy for AI to reason about code written in these languages on silicon.
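(A rough illustration of the property being described, in plain Forth; the word names are mine. A concatenative program is a flat sequence of words, concatenation is function composition, and any split of the sequence yields two smaller valid programs, so a learner can enumerate and recombine fragments without dealing with syntax trees, scoping, or argument plumbing:)

    \ two fragments, each a complete program over the stack
    : double-it  2 * ;      \ ( n -- 2n )
    : add-three  3 + ;      \ ( n -- n+3 )

    \ composing them is literal concatenation, no glue syntax required
    : double-then-add  double-it add-three ;    \ ( n -- 2n+3 )

    5 double-then-add .    \ prints 13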
haolez
Diffusion text models to the rescue! :)
rescrv
Looking to discuss whether LLMs would do better if the language had properties similar to postfix notation.
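(To make the question concrete, here is the same expression in infix and postfix form, again in Forth. Postfix code has no precedence rules or parentheses, and every prefix of a valid program is itself a valid partial program, so a model generating strictly left to right never has to remember to close something it opened earlier:)

    \ infix:    (3 + 4) * 2
    \ postfix:  3 4 + 2 *
    3 4 + 2 * .    \ prints 14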