My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one who cringed during it.
I think the mismatch between mission and reality was impossible to fix.
partial debiasing may actually make the problem worse, they argue, in the sense that it leaves the majority of these stereotypical associations intact while removing the ones that are the most visible and easiest to measure.
Brian Christian • The Alignment Problem
- Microsoft introduced Phi 1.5 – a compact AI model with multimodal capabilities, meaning it can process images as well as text. Despite being significantly smaller than OpenAI's GPT-4, with only 1.3 billion parameters, it demonstrates advanced features like those found in larger models. Phi 1.5 is open-source, emphasizing the trend towards efficient, compact models.
FOD#27: "Now And Then"

We currently don't understand how to make sense of the neural activity within language models. Today, we are sharing improved methods for finding a large number of "features"—patterns of activity that we hope are human interpretable.
Perplexity AI has an increasingly poor reputation for questionable (or lacking) ethics, including allegations of plagiarism, failing to produce proper attributions, and ignoring robots.txt on sites.
DevonThink & Apple Intelligence
In its opinion on the case from 2021, Creative Commons acknowledged that “The legal uncertainty caused by ethical concerns around AI, the lack of transparency of AI algorithms, and the patterns of privatization and enclosure of AI outputs, all together constitute yet another obstacle to better sharing.”
Alek Tarkowski • Filling the governance vacuum related to the use of information commons for AI training
“The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination.