Just tried it. Really cool, and a fun tech demo with rcli. I filed a bug report; not everything loads properly when installed via Homebrew.
Quick request: Unsloth quants; bit for bit they're usually better. Or, more generally, a UI for Hugging Face model selection. I understand you won't be able to serve everything, but I want to mix and match!
Also - grounding:
"open safari" (safari opens, voice says: "I opened safari")
"navigate to google.com in safari" (nothing happens, voice says: "I navigated to google.com")
Anyway, really fun.
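The grounding gap above (the assistant reporting success without checking the world) can be sketched as a verify-then-report pattern. All names below are hypothetical illustrations, not RCLI's actual API:

```python
def navigate(browser_state: dict, url: str) -> str:
    """Attempt a hypothetical 'navigate' action, then verify before reporting."""
    try:
        browser_state["url"] = url  # stand-in for the real side effect
    except Exception:
        pass  # the action may silently fail, as in the Safari example
    # Ground the spoken reply in observed state, not in intent.
    if browser_state.get("url") == url:
        return f"I navigated to {url}"
    return f"I tried to navigate to {url}, but nothing happened"
```

The point is that the confirmation string is derived from a post-action check, so "nothing happens" can never be narrated as success.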
stingraycharles
I'm a bit confused by what you're offering. Is it a voice assistant / AI as described on your GitHub? Or is it a more general-purpose LLM?
How does the RAG fit in? Voice-to-RAG seems a bit random as a feature.
I don’t mean to come across as dismissive, I’m genuinely confused as to what you’re offering.
jonhohle
If I send a Portfile patch, would you consider MacPorts distribution?
rushingcreek
Very cool, congrats! I'm curious how you were able to achieve this given Apple's many undocumented APIs. Does it use private Neural Engine APIs or fully public Metal APIs?
Either way, this is a tremendous achievement and it's extremely relevant in the OpenClaw world where I might not want to have sensitive information leave my computer.
tiku
Personally I'm so disappointed in the state of local AI. Only old models run "decent," but decent is way too slow to be usable.
RationPhantoms
This doesn't work with any of the methods I've tried.
DetroitThrow
Wow, this is such a cool tool, and I love the blog post. Latency is the killer in the STT-LLM-TTS pipeline.
Before I install, is there any telemetry enabled here or is this entirely local by default?
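On the latency point above: a common way to hide it is to start TTS on early LLM output instead of waiting for the full reply. A rough sketch of sentence-boundary chunking (hypothetical, not necessarily how RCLI does it):

```python
def tts_chunks(llm_tokens, boundaries=".!?"):
    """Group streamed LLM tokens into sentence-sized chunks so TTS can start early."""
    buffer = []
    for tok in llm_tokens:
        buffer.append(tok)
        if tok and tok[-1] in boundaries:
            yield " ".join(buffer)  # hand this chunk to TTS immediately
            buffer = []
    if buffer:
        yield " ".join(buffer)  # flush the trailing partial sentence
```

This way the time-to-first-audio is bounded by the first sentence, not the whole response.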
alfanick
I'm not looking for STT -> AI -> TTS; I'm looking for a truly good voice-to-text experience on Linux (and others). Siri/iOS dictation is truly good when it comes to understanding speech. Something at this level on Linux (and others) would be great. Yeah, always listening, maybe sending the data somewhere, but give me the UX: hidden latency, optimizing for the first characters recognized, a good (virtual) input device.
Amazing, this is what I am trying to do with https://github.com/computerex/dlgo
Based on the demo video, the TTS sounds like it's 10 years out of date. I would not enjoy interacting with it.
Tacite
Doesn't work.
" zsh: segmentation fault rcli"
focusgroup0
The fact that Apple didn't ship this in the years after the Siri acquisition is an indictment of its product leadership.
tristor
> What would you build if on-device AI were genuinely as fast as cloud?
I think this has to be the future for AI tools to be truly useful. The things that are truly powerful are not general-purpose models that have to run in the cloud, but specialized models that can run locally on constrained hardware, so they can be embedded.
I'd love to see this added in-path as an audio passthrough device, so you can add native on-device transcription to any application that handles audio, such as video conferencing apps.
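The passthrough idea amounts to a tee on the audio stream: frames flow through unchanged while a copy feeds an on-device transcriber. A minimal sketch with hypothetical names (a real version would need a virtual audio driver, e.g. a Core Audio plug-in on macOS):

```python
from typing import Callable, Iterable, Iterator

def audio_tee(frames: Iterable[bytes],
              on_frame: Callable[[bytes], None]) -> Iterator[bytes]:
    """Pass audio frames through unchanged, copying each one to a transcriber hook."""
    for frame in frames:
        on_frame(frame)  # e.g. feed an on-device STT model incrementally
        yield frame      # the downstream app (video call) sees identical audio
```

Because the tee never modifies or delays frames beyond the hook call, the conferencing app is unaffected while the transcriber sees everything.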
j45
"Apple M3 or later required. MetalRT uses Metal 3.1 GPU features available on M3, M3 Pro, M3 Max, M4, and later chips. M1/M2 support is coming soon. On M1/M2, RCLI automatically falls back to the open-source llama.cpp engine."
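For reference, the M1/M2 fallback described in that quote could be detected with a chip-name check along these lines (the brand strings and fallback choice are assumptions taken from the quote, not RCLI's actual logic):

```shell
# Pick an inference engine from the CPU brand string; anything
# unrecognized (including non-macOS hosts) falls back to llama.cpp.
chip=$(sysctl -n machdep.cpu.brand_string 2>/dev/null)
chip=${chip:-unknown}
case "$chip" in
  *"Apple M1"*|*"Apple M2"*) engine="llama.cpp" ;;  # MetalRT needs Metal 3.1
  *"Apple M3"*|*"Apple M4"*) engine="MetalRT" ;;
  *) engine="llama.cpp" ;;
esac
echo "engine: $engine"
```

On non-Apple hardware the `sysctl` key is absent, so the sketch defaults to the open-source engine rather than failing.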
john_strinlai
I knew I recognized this name from somewhere.
They are a company that registers domains similar to their main one, then uses those domains to spam people they scrape off GitHub without affecting their main domain's reputation.
edit: here is the post https://news.ycombinator.com/item?id=47163885
edit2: it appears that RunAnywhere is getting damage-control help from dang or tom.
This comment, at this time, has 23 upvotes yet sits below 2 grey comments (i.e. <=0 upvotes) that were posted at roughly the same time (1 before, 1 after) -- strong evidence of artificial ordering by the moderators. Gross.
pzo
FWIW, this RCLI is only MIT-licensed, but their engine MetalRT is commercial. I'm not sure about the license of their models; I'd guess also not MIT. So IMHO this repo is misleading.
Not sure why they decided to reinvent the wheel and write yet another proprietary ML engine (MetalRT). I would most likely bet on Core ML, since it has support for the ANE (Apple's NPU), or MLX.
Other popular repos for such tasks I would recommend:
https://github.com/FluidInference/FluidAudio
https://github.com/DePasqualeOrg/mlx-swift-audio
https://github.com/Blaizzy/mlx-audio
https://github.com/k2-fsa/sherpa-onnx
I think the title should read "RunAnywhere," not "RunAnwhere."
Imustaskforhelp
I am just going to link the stats of this Hacker News post[0] and let the public decide the rest. For context, this is the same company that was mentioned in a blow-up post 12 days ago which got 600 upvotes, and they didn't respond back then[1]. (I have found it hard for posts to get such a 2x factor within minutes of posting; that's just my personal observation. Usually one gets it after an hour or two or three.)
I was curious, so I did some more research into the company and found more shady stuff, like intentionally buying new domains a month prior to sending that spam so as not to drag down the mail reputation of their main website. You can read my comment here[2].
Just to be on the safe side, @dang (yes, pinging doesn't work, but still), can you give us some average stats on the people who upvoted this, and an internal investigation into whether botting was done? I could be wrong about this, and I don't mean to harm any company, but I can't in good faith understand it.
Some stats I would want are: average karma / words written / age of the accounts that upvoted this post. I'd also like to know the conclusion of the internal investigation, if one takes place.
[There is a bit of a conflict of interest with this being a YC product, but I trust the Hacker News moderators and dang to do what's right.]
I am just skeptical, that's all, and this is my opinion. I just want to provide some historical context on this company, and I hope I am not extrapolating too much. It's just really strange to me, that's all.
[0]: https://news.social-protocols.org/stats?id=47326101 (see the expected upvotes vs. real upvotes, the context of this app, the negative reception, and everything combined)
[1]: Tell HN: YC companies scrape GitHub activity, send spam emails to users: https://news.ycombinator.com/item?id=47163885
[2]: https://news.ycombinator.com/reply?id=47165788
Let's go!!