a2128

    You're not just using a tool — you're co-authoring the science.
This README is an absolute headache: it's filled with AI writing, terminology that doesn't exist or is misused, and unsound ideas. For example, it leans heavily on "ablation studies", by which it means removing random layers of an already-trained model to find the source of the refusals (?). That's an absolute fool's errand, because such behavior is trained into the model as a whole and won't be found in any particular layer. I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" while bouncing the worst ideas back and forth.
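(For illustration only, not the repo's actual code: the "delete a layer and see what changes" procedure described above can be sketched with plain functions standing in for trained transformer blocks.)

```python
import random

random.seed(0)

# Toy stand-in for a trained network: a stack of "layers".
# Each entry is a weight for an affine map, loosely standing in
# for a trained transformer block.
weights = [random.uniform(0.5, 1.5) for _ in range(6)]

def forward(x, layers):
    for w in layers:
        x = x * w + 0.1  # each "layer" transforms its input in turn
    return x

baseline = forward(1.0, weights)

# The "ablation" being criticized: drop one layer at a time and
# compare outputs, hoping to localize a behavior. Since every layer
# contributes, every deletion perturbs the output; nothing is isolated.
deltas = [abs(forward(1.0, weights[:i] + weights[i + 1:]) - baseline)
          for i in range(len(weights))]
```

The same point holds for a real transformer: distributed behavior like refusals emerges from the whole stack, so deleting individual blocks degrades everything rather than surgically removing one behavior.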
ComputerGuru

Reviews of the tool on twitter indicate that it completely nerfs the models in the process. It won't refuse, but it generates absolutely stupid responses instead.

g947o

Went through the README but still have no idea how well this works in terms of removing censorship while minimally degrading response quality. To be honest, I can't tell whether this works at all or is just an idea.

Alifatisk

This is for local models, right? I can't use it on, say, my glm-5 subscription connected to opencode?

PeterStuer

Already censored for sharing on FB Messenger?

littlestymaar

Don't use this 2-day-old vibe-coded bullshit, please.

p-e-w's Heretic (https://news.ycombinator.com/item?id=45945587) is what you're looking for if you're looking for an automatic de-censoring solution.

ftkftk

Didn't make it past the first paragraph of AI slop in the README. Have some respect for your readers and put actual information in it, ideally human generated. At least the first paragraph! Otherwise you may as well name it IGNOREME.

SilverElfin

Does anyone offer a live (paid) LLM chatbot / video generation / etc that is completely uncensored? Like not requiring doing any work except just paying for it?

measurablefunc

This is another instance of avant-garde "art".

greenpizza13

Never stopped to ask if they should...