Proposing Ctrl-G, a neurosymbolic framework that enables arbitrary LLMs to follow logical constraints (length control, infilling …) with 100% guarantees.
Ctrl-G beats GPT-4 on text editing, achieving a >30% higher satisfaction rate in human eval.
https://t.co/U6oz3bc935
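To give a flavor of the kind of constraint Ctrl-G guarantees, here is a toy sketch of hard length control via logit masking. This is NOT Ctrl-G's actual algorithm (the paper compiles constraints into a tractable probabilistic model over the whole sequence); it is just the simplest greedy analogue, with made-up tokens and logits for illustration:

```python
# Toy sketch: guarantee a minimum output length by masking the EOS token
# until the constraint is satisfied. Tokens and logits are fabricated.

EOS = "<eos>"

def sample_with_min_length(step_logits, min_len):
    """step_logits: one dict of token -> logit per decoding step (greedy)."""
    out = []
    for logits in step_logits:
        allowed = dict(logits)
        if len(out) < min_len:
            allowed.pop(EOS, None)  # forbid EOS until min_len tokens emitted
        tok = max(allowed, key=allowed.get)  # greedy pick among allowed tokens
        if tok == EOS:
            break
        out.append(tok)
    return out

steps = [
    {"hi": 1.0, EOS: 2.0},     # EOS would win unconstrained
    {"there": 1.0, EOS: 0.5},
    {EOS: 3.0, "x": 1.0},
]
print(sample_with_min_length(steps, min_len=2))  # ['hi', 'there']
print(sample_with_min_length(steps, min_len=0))  # [] -- EOS wins immediately
```

Because the mask is applied to the logits before sampling, the constraint holds with probability 1, which is the "100% guarantees" framing above, as opposed to prompting and hoping.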
IMO @AnthropicAI is very close to making a breakthrough in productizable interpretability.
For ~4 years, all we've really had to control LLMs is temperature/top_p and logit bias. We recently got `seed` and constrained structured output, with `interactive=false` on the way.
But now Claude Sonnet can let people clamp up/down 34 million features rang...
🎥 New talk: "How Might We Learn?"
A (proto-?)vision talk of sorts—a first attempt at a broader picture of the future of learning I want to create, particularly given developments in AI.
Thanks to @HaijunXia and @ProfHollan for hosting me! 🙇‍♂️
(YT link in thread if you prefer) https://t.co/TNWssgwuRy