Sublime
An inspiration engine for ideas
As you become more focused and a source of signal in your own right, you’ll want to shift from seeking surface area to going deep with a smaller group of people whom you respect. Rather than meeting as many people as possible, you’ll want to focus on the people you deem the most competent.
Scott Belsky • The Messy Middle: Finding Your Way Through the Hardest and Most Crucial Part of Any Bold Venture
“I think a lot of people obviously want to talk about the sexy kind of new consumer applications. I would tell you that I think that the earliest and most significant effect that AI is going to have on our company is actually going to be as it relates to our developer productivity. Some of the tools that we’re seeing are going to allow our devs to ...
Adam Huda • The Transformative Power of Generative AI in Software Development: Lessons from Uber's Tech-Wide Hackathon
Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn't Do What You Think, and they are rapidly becoming the standard way to do training outside Google.
semianalysis.com • Google "We Have No Moat, and Neither Does OpenAI"
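To make that claim concrete, here is a minimal sketch of the pattern the memo describes: adapting a base model on a small, curated dataset with a parameter-efficient method (LoRA) instead of large-scale training, using HuggingFace transformers and peft. The model ID, dataset file, and hyperparameters below are illustrative assumptions, not anything specified in the snippet.

```python
# A minimal sketch of fine-tuning on a small curated dataset via LoRA.
# Model ID, dataset path, and hyperparameters are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"  # assumed base model (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of all base parameters,
# which is why a few thousand curated examples can be enough.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Assumed format: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="curated_examples.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False makes the collator copy input_ids into labels for causal LM.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```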
We went to OpenAI's office in San Francisco yesterday to ask them all the questions we had on Quivr (YC W24). Here is what we learned:
1. Their office is super nice & you can eat a damn good croissant in SF!
2. We can expect GPT-3.5 & GPT-4 prices to keep going down
3. A lot of people are using the Assistants API to build their use cases (see the sketch after this post)
4. It costs ...
Feed | LinkedIn
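Point 3 refers to OpenAI's Assistants API (in beta at the time of the post). Below is a minimal sketch of its assistant → thread → run flow, assuming the openai Python SDK v1.x; the assistant's name, instructions, and prompt are illustrative, not from the post.

```python
# A minimal sketch of the Assistants API flow: create an assistant,
# open a thread, add a user message, run it, and read the reply.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Docs helper",  # illustrative name and instructions
    instructions="Answer questions about the uploaded docs concisely.",
    model="gpt-4-turbo",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user",
    content="Summarize the onboarding doc.",
)

run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id,
)
# Runs are asynchronous, so poll until the run leaves the active states.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id, run_id=run.id,
    )

# Messages come back newest first; the assistant's reply is at the top.
reply = client.beta.threads.messages.list(thread_id=thread.id).data[0]
print(reply.role, ":", reply.content[0].text.value)
```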
DeepSeek Coder comprises a series of code language models trained from scratch on a corpus of 87% code and 13% natural language in English and Chinese, with each model pre-trained on 2T tokens. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a repo-level code corpus with a window size of 16K ...
DeepSeek Coder
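Since the checkpoints are published on the HuggingFace Hub, a minimal sketch of code completion with the smallest base model looks like the following; the generation settings are illustrative assumptions, not the project's recommended defaults.

```python
# A minimal sketch of code completion with a small DeepSeek Coder checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# A base (non-instruct) model, so prompt it as a plain code prefix.
prompt = "# write a quicksort in python\ndef quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```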