
Top 25 OpenAI articles on Substack

Latest OpenAI Articles


Brace for Impact: Here Comes the "Cram Down"

Upcoming Edtech Happy Hour Events, ASU+GSV 2024 Session Overviews, US Newspapers Sue OpenAI, Coursera and Chegg Stock Down, and more!
Brace for Impact: Here Comes the “Cram Down” By Ben Kornell
Sarah Morin, Ben Kornell, and Alex Sarlin ∙ 4 LIKES
Matt Rubins
Ben - this is so insightful and so true. Twain said "history doesn't repeat itself, but it often rhymes". I've lived through three of these cycles now - the S&L crisis in '90-93, the dot-com and telecom winter from '01 to '04, and then the Global Financial Crisis from '08-'11. Every time we go through the same cycle. When a bubble bursts, during the first year people believe that a recovery is right around the corner. It'll be fine! The second year, they realize this may take a while longer and that they need to start cutting costs to extend the runway and avoid exposing themselves to "market pricing discovery". When they run out of moves, they reach the capitulation stage and that's when the dreaded "inside down round" happens. People start to read the deal docs and understand how weighted average anti-dilution provisions really work, what discounts on notes and SAFEs really do to founder economics, and how pay-to-play provisions work. It's ugly. The companies that get through this phase quickly, or even better proactively in the first two years, are well positioned to be acquirers of both market share and weaker competitors. These cycles typically last 4 years and we're about 18-24 months into this one.
I'm very optimistic about the future. We're seeing strong revenue growth in our portfolio, and the long-term trends underlying the digitization of education and alternative ways to upskill the workforce are very much intact. It just takes time, but anyone who's been around education for a long time knows that everything takes time in our business.

Last Week in AI #269: Better evals for multimodal AI, new OpenAI lawsuits, Meta's AI ads tool troubles, AI startups focus on enterprise, and more!

Reka AI releases Vibe-Eval, 8 US newspapers sue OpenAI, Meta's AI ads tool's overspending problem, AI startups are pivoting to enterprise customers
Top News Vibe-Eval: A new open and hard evaluation suite for measuring progress of multimodal language models Reka AI introduces Vibe-Eval, a new evaluation suite designed to measure the progress of multimodal language models. Researchers from the company have created a set of challenging prompts to test the capabilities of these models, particularly focu…
Last Week in AI ∙ 4 LIKES

The Sam Altman Playbook

Fear, The Denial of Uncertainties, and Hype
How do you convince the world that your ideas and business might ultimately be worth $7 trillion? Partly by getting some great results, partly by speculating about unlimited potential, and partly by downplaying and ignoring inconvenient truths.
Gary Marcus ∙ 139 LIKES
Raul I Lopez
“all of this has happened before. all of this will happen again.”
Yep, I’ve been there. Working on AI research in 1990-1991, just before the second AI Winter.
John Richmond
Sam is a pseudo-philosopher in a world that has forgotten how to think critically. Thanks for this. Best Gary thus far. Look forward to more.

Tom White
"What if this was more common? Great writing is precious enough that I wish we had multiple interpretations of most great works. It would be a great way to see the evolution of artists."
Yes! Mark Twain on Jane Austen is a good (read as: hilarious) place to start: In his extensive correspondence with fellow author and critic William Dean Howells, Mark Twain seemed to enjoy venting his literary spleen on Jane Austen precisely because he knew her to be Howells’ favorite author. In 1909 Twain wrote that “Jane Austin” [sic] was “entirely impossible” and that he could not read her prose even if paid a salary to do so. Howells notes in My Mark Twain (1910) that in fiction Twain “had certain distinct loathings; there were certain authors whose names he seemed not so much to pronounce as to spew out of his mouth...
Rather than pitying Twain when he was sick, Howells threatened to come and read Pride and Prejudice to him.
Twain marveled that Austen had been allowed to die a natural death rather than face execution for her literary crimes. “Her books madden me so that I can’t conceal my frenzy,” Twain observed, apparently viewing an Austen novel as a book which “once you put it down you simply can’t pick it up.” ... In a letter to Joseph Twichell in 1898, Twain fumed, “I have to stop every time I begin. Every time I read “Pride and Prejudice” I want to dig her up and beat her over the skull with her own shin-bone.” From: https://www.vqronline.org/essay/barkeeper-entering-kingdom-heaven-did-mark-twain-really-hate-jane-austen


Met Gala Not Dead, But Decaying

The Gala itself has a new theme, a new carpet design, and varied guests each year, but has become rather predictable.
Before the Met Gala Monday night, Anna Wintour apologized for “confusion” over the theme. The Costume Institute exhibition at the Metropolitan Museum of Art that the Gala opens and raises money for is Sleeping Beauties: Reawakening Fashion, but the dress code was “The Garden of Time.” Anna said on
Amy Odell ∙ 137 LIKES
Anne Hjortshøj
It all seemed so joyless.
J.W.
I wonder if part of the reason is that the event has outgrown itself in a specific kind of way. When you compare it to the Oscars, for instance, there is a point to it all (the actual awards) that the public is part of, because they can view the entire event. The Met Gala just feels so disconnected now because you know the vast majority of attendees don't care about museums (some of them probably don't even know the difference between the Met and MoMA, I'm guessing) or even fully understand why they are there. So it all just feels kind of fake, and the public isn't allowed inside for the party, so it winds up feeling very hollow, despite the absurd star power on the red carpet.
When it was more of a society event, there was a perception of authenticity, because those attending seemed to have a true interest in the institution of the Met. It felt like more of a NYC-specific type of thing that was more closely connected to the museum. I am not sure if any of this made sense, it's kind of word salad, but tldr: the gala is too big, too corporate, and too phony-feeling now to be relevant.

Last Week in AI #268: Gen AI for gene editing, Moderna partners with OpenAI, model releases from Microsoft and Snowflake, and more!

Gen AI used to generate new gene editors like CRISPR, Moderna's internal ChatGPTs, Microsoft releases Phi-3-mini LLM that can run on a phone, Snowflake open sources enterprise LLM
Top News Generative A.I. Arrives in the Gene Editing World of CRISPR Generative AI, which has already revolutionized areas such as art and programming, is now making significant strides in biotechnology. A new A.I. system developed by the Berkeley-based startup Profluent has been designed to create blueprints for novel gene editors by employing methods …
Last Week in AI ∙ 14 LIKES

LWiAI Podcast #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

China unveils Sora challenger able to produce videos from text similar to OpenAI tool, Capabilities of Gemini Models in Medicine, and more!
Our 165th episode with a summary and discussion of last week's big AI news! Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai Subscribe Apple Podcasts Spotify YouTube RSS Timestamps + links: Tools & Apps (00:01:27)
Last Week in AI ∙ 5 LIKES
Stash of Code
There's a fierce and interesting review of GitHub Copilot Workspace there.

☁️ Amazon: Wild Margin Expansion

AI requires billions in Capex but it looks like money well spent
Welcome to the Friday edition of How They Make Money. Over 100,000 subscribers turn to us for business and investment insights. In case you missed it: 🚖 Tesla: Robotaxi Pivot ♾ Meta: The Anti-Apple 🔎 Google: "A Positive Moment" 🍿 Netflix: Engagement Machine
App Economy Insights ∙ 54 LIKES

Microsoft and OpenAI’s increasingly complicated relationship

An AI Soap Opera in the making?
You might think that Microsoft owns 49% of OpenAI. But as far as I understand it doesn’t. It has a right to about 49% of a for-profit subsidiary of OpenAI’s profits, up until a very complex point that may require litigation to resolve, but the for-profit hasn’t made any profits, and the for-profit is owned by a non-profit. And I’ll be damned if I can ac…
Gary Marcus ∙ 56 LIKES
Gerben Wierda
Is "it's complicated" a civilised way to say 'clusterfuck'? Or might all of this mean that OpenAI has handed Microsoft the means to fill whatever mini-'moat' OpenAI had? Did OpenAI give away whatever crown jewels they had in that Microsoft deal that got them the compute they needed? Definitely intriguing.
Ko
Relation"shop" haha is that deliberate?

GC, a16z Capture 44% VC fundraising💰, Massive Acquisitions in Software Startups 🛒, Network Effects🕸️

Welcome to The VC Corner, your weekly dose of Venture Capital and Startups to keep you up and running! 🚀 You can now become a premium subscriber and read the full guest posts I share on The VC Corner. Next Saturday, I will have Peter Walker, head of insights at Carta, publishing in my newsletter a deep dive into actual VC Valuations
Ruben Dominguez Ibar ∙ 13 LIKES
Money for Entrepreneurs
So valuable, as always!

🔮 Can the West wean off from China?; European startups; AI war rooms; fragile societies ++ #472

Hi, I’m Azeem Azhar. In this week’s edition, we explore China’s dominance of the battery supply chain. And in the rest of today’s issue: Need to know: GenAI as a GPT Is generative AI a general-purpose technology? We’ve long believed it to be one, and mounting evidence over the past year contributes to this position.
Azeem Azhar and Nathan Warren ∙ 23 LIKES

How Perplexity builds product

Johnny Ho, co-founder and head of product, explains how he organizes his teams like slime mold, uses AI to build their AI company, and much more
👋 Hey, Lenny here! Welcome to this month’s ✨ free edition ✨ of Lenny’s Newsletter. Each week I tackle reader questions about building product, driving growth, and accelerating your career. If you’re not a subscriber, here’s what you missed this month:
Lenny Rachitsky ∙ 166 LIKES
Harshal Patil
Love Perplexity, use it every day, and glad to read more about the behind-the-scenes.
Mostafa Fotouhi
I like these articles, I got things that helped me in my career, thanks a lot.
I didn't know Perplexity but I would like to test it


How RLHF works, part 2: A thin line between useful and lobotomized

Many, many signs of life for preference fine-tuning beyond spoofing chat evaluation tools.
See part 1 of this series for a textual overview of the 3 stages of RLHF: instruction-tuning, reward modeling, and RL. Seventeen months on from the release of ChatGPT, we still do not have any fully open-source replications of its fine-tuning process. We’re much further from it than most people think
Nathan Lambert ∙ 22 LIKES

Nobody Likes a Know-It-All: Smaller LLMs are Gaining Momentum

Phi-3 and OpenELM, two major small model releases this week.
Next Week in The Sequence: Edge 391: Our series about autonomous agents continues with the fascinating topic of function calling. We explore UCBerkeley’s research on LLMCompiler for function calling and we review the PhiData framework for building agents.
Jesus Rodriguez ∙ 25 LIKES

Key journalism funder considers becoming invitation-only

Twelve active calls and survival techniques for starving publishers in a post-grant, post-Meta, post-truth media landscape.
Welcome! This week on the Media Finance Monitor: Conversations about media funding, publishing technology, newsroom leadership and more Civitates is considering becoming invitation-only Survival techniques for starving publishers in a post-grant, post-Meta, post-truth media landscape
Peter Erdelyi and Ioana Epure ∙ 3 LIKES


What I Read This Week...

Investors are pulling money out of risk assets, Micron receives a multi-billion dollar grant from the U.S. government, and a new House bill makes a TikTok ban more likely
Watch All-In E175 Read our deep dives into Climate, Artificial Intelligence, Healthcare and Space Caught My Eye… Money is being pulled out of equities and junk bonds at the fastest rate in more than a year. Investors are becoming more conservative in their allocations, citing elevated geopolitical risk and potential upside risk to commodity prices and infl…
Chamath Palihapitiya ∙ 67 LIKES
sayandcode
Honestly the part about ppl talking mongoDB sounds compelling to *move to* the bay area. Beats overhearing about mundane shit like how the dog ate the petunias

Update #74: Detecting Postpartum Depression and Kolmogorov-Arnold Networks

We look at Dionysus Digital Health's new ML system for detecting postpartum depression in expectant or new mothers; Kolmogorov-Arnold Networks are getting a lot of hype.
Welcome to the 74th update from the Gradient! If you’re new and like what you see, subscribe and follow us on Twitter. Our newsletters run long, so you’ll need to view this post on Substack to see ev…
daniel bashir and Justin Landay ∙ 15 LIKES

AI #62: Too Soon to Tell

What is the mysterious impressive new ‘gpt2-chatbot’ from the Arena? Is it GPT-4.5? A refinement of GPT-4? A variation on GPT-2 somehow? A new architecture? Q-star? Someone else’s model? Could be anything. It is so weird that this is how someone chose to present that model.
Zvi Mowshowitz ∙ 21 LIKES
Dr. Y
>“Every college student should learn to train a GPT-2… not the most important thing but I bet in 2 years that’s something every Harvard freshman will have to do”
This used to be called "writing in a diary" back when people did their own thinking.
rational_hippy
Hey Zvi! Thanks a lot for mentioning the Pause AI Protests! I am actually using a Partiful Invite for organising the Paris protest rather than the facebook page: https://partiful.com/e/3Tl1xrS6i9NUZxyJGf5G


This is one of the major things wrong with our country

Big story yesterday: a rich guy, who engaged in a years-long scheme of fraud and international law breaking involving laundering money for drug running, child prostitution, terrorism, and Russian oligarchs, got off with a slap on the wrist. This rich guy is a crypto billionaire, Changpeng “CZ” Zhao, the founder and former CEO of the Binance crypto exc…
Lucian K. Truscott IV ∙ 222 LIKES
Margo Howard
There are two things I have no understanding of: anything Bitcoin, and Trump supporters.
(I sort of understand graft, so thanks for that.)
Patris
Fortunes buy privilege. One of which is immunity from justice. Did, does, will do.

AGI is what you want it to be

Certain definitions of AGI are backing people into a pseudo-religious corner.
Artificial general intelligence (AGI) doesn’t need to involve the idea of agency. The term’s three words only indicate a general level of capability, but given its lack of grounding in any regulatory agency or academic community, no one can control what others think it means. The biggest problem with the AGI debate is different folks have different end …
Nathan Lambert ∙ 24 LIKES
Dylan Patel
The real AGI is the friends we made along the way
Oleksii
Of all definitions, I like The Modern Turing Test (Suleyman) the least. Intuitively, it seems to be the most prone to "reward hacking". A model can just discover something which is trivial but super hard for humans (like flash trading).
All other reasonable definitions do seem to require embodiment. But let's consider this thought experiment - imagine a human brain completely separated from its body but still alive and able to communicate with us. That is still AGI, because the brain implements human intelligence, but will we be able to tell? Intuitively, there seems to be a connection with Gödel's first incompleteness theorem, e.g. will NGI (us) be able to identify AGI?

What are human values, and how do we align AI to them? [Breakdowns]

A new approach to moral alignment funded by OpenAI
Hey, it’s Devansh 👋👋 In my series Breakdowns, I go through complicated literature on Machine Learning to extract the most valuable insights. Expect concise, jargon-free, but still useful analysis aimed at helping you understand the intricacies of Cutting-Edge AI Research and the applications of Deep Learning at the highest level.
Devansh ∙ 20 LIKES
Sorab Ghaswalla
I have mentioned this excellent write-up on moral alignment in my newsletter 'AI For Real' this week. Here's the link https://aiforreal.substack.com/p/are-you-ai-positive-or-ai-negative
Daniel Morales Salazar, PhD
This is pretty thorough Devansh, well done.
I would like to say that moral alignment is crucial, yet the 11% of participants who did not feel the moral graph was fair is a huge number! Think about it, 11% alone could be "all Indian men" or "all Latino men" for any set of questions (because morality is so complex that it will never be "just one issue or question"). We know better than that, however, and the likelihood of such a situation arising (i.e., one in which a single population is targeted solely on the basis of its ethnicity) is extremely low. However, it does point to the fact that it is morally difficult to reach a consensus among populations.
I think we should still make efforts to elicit moral graphs. On the other hand, I do not think it is moral or ethical to appease end users with unscientific and potentially immoral or unethical beliefs. This means that AGI companies will have to "steer" certain users toward less extreme views, which can itself be considered unethical (but not immoral), and I am willing for them to do this with the most extreme users.
In summary, moral graphs seem to be an intuitive and worthwhile area of research in AI safety, it helps to realize a way to a cohesive and potentially coherent superset of values based on morality and ethics, which is very important for future machine learning models. On the other hand, there are still issues of representativeness, such as the 11% of users who felt that the generated moral graph was not fair. We need to avoid that 11% corresponding to a single-class target on metrics such as ethnicity, socioeconomic status, religious beliefs.
Conversely, we need to recognize that there is a large group of users with unethical and immoral beliefs, and AGI companies cannot "compromise" or "accommodate" these types of users, and whether the end result is to alienate them or steer them toward a more reasoned view, they should be OK with that.
Is my view reasonable or itself unethical or immoral? I am willing to change my mind and views as I learn and improve as a human.