Internally at OpenAI, insiders say that disagreements had emerged over the speed at which Altman was pushing for commercialization and company growth, with Sutskever arguing to slow things down. Sources told reporter Kara Swisher that OpenAI's Dev Day event, hosted November 6, with Sam front and center in a keynote pushing consumer-like products, was...
With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
weirdly my main reaction is gratitude to the OpenAI founders for actually creating a governance structure that committed them to sacrifice profits if the mission required it. no idea if that's what happened here, but at least we know the commitment had teeth.
“There was a long period of time where the right thing for [Isaac] Newton to do was to read more math textbooks, and talk to professors and practice problems ... that’s what our current models do,” said Altman, using an example a colleague had previously used.
But he added that Newton was never going to invent calculus by simply reading about geometry...
"According to people familiar with the board's thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them," the WSJ report said. The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy..."
what OpenAI, Anthropic, DeepMind have all tried to do is raise billions & tap the vast GPU resources of tech giants without having the resulting tech de facto controlled by them. I'm arguing the OpenAI fracas shows that might be impossible.