
Top 25 OpenAI articles on Substack

Latest OpenAI Articles



You say you want a revolution...

On change, art & some genuinely *new* things...
Hi! Ari here, with a new entry of my newsletter for you. We’ve got lots of ‘hard news’ coming: the Supreme Court term ending with a decision on Trump’s coup trial, the first Presidential Debate on Thursday. This piece steps back from the crush of news to look into the future, and really the present…
Ari Melber ∙ 414 LIKES
Truda Stransky
Yes, I believe creators should get more credit and revenue for their AI-generated work. I'm concerned that AI's capabilities will erode creative processes and the thrill of curiosity. It's a lofty concept to try to figure out how to regulate AI so that its applications don't diminish people. I can see some real benefits to society, but I doubt our political leaders can manage this technology. That sounds kind of cynical; maybe this will be their moment of success.
Andrew Rovins
I like the long essays; keep them.

OpenAI #8: The Right to Warn

The fun at OpenAI continues. We finally have the details of how Leopold Aschenbrenner was fired, at least according to Leopold. We have a letter calling for a way for employees to do something if frontier AI labs are endangering safety. And we have continued details and fallout from the issues with non-disparagement agreements and NDAs.
Zvi Mowshowitz ∙ 34 LIKES
Jeff Rigsby
"Leaking"... to the board of directors?
Ffs, we are doomed
John
I don't know any of the facts of this particular case, but just a general observation: as a securities litigator, my understanding is that OpenAI employees who raise concerns about potential security risks have strong whistleblower protections under Sarbanes-Oxley. The Supreme Court's decision in Lawson v. FMR LLC establishes that employees of contractors to publicly traded companies like Microsoft are covered by Sarbox anti-retaliation rules.
Hence, if an OpenAI employee has a good-faith belief that the company is not properly addressing significant security risks, it follows that information potentially material to Microsoft's shareholders is not being disclosed. That's a sufficient basis for employees to report their concerns internally, to law enforcement, or to Congress. They do not need to prove actual securities-law violations to be shielded from retaliation. One major limit, however, is that Sarbox does *not* protect disclosures to news media or otherwise sending information outside organizational or government channels.
The SEC has broad authority and substantial resources, and they are sometimes happy to explore exotic theories of securities fraud. An OpenAI whistleblower making a case that the company is concealing critical risks could get the SEC's attention, on the theory that information is being concealed from Microsoft investors. While this isn't legal advice for any particular situation, would-be whistleblowers at OpenAI should know they may have a path to protect themselves if they feel compelled to speak out.
(Disclaimer: not legal advice. Employees in this situation should obtain counsel before acting)

Apple + OpenAI Math: Notebook From a Week in Silicon Valley

Thoughts and observations from a week inside (and around) Silicon Valley's tech campuses.
The scene at Apple's WWDC this week was, in a way, emblematic of the times. The tech giant's AI announcements were massively hyped ahead of the show. The products themselves were interesting, but not the "next iPhone" some expected. Still, the market loved it…
Alex Kantrowitz ∙ 46 LIKES

Creative Jobs in Jeopardy? OpenAI CTO's Bold Claim Amidst Groundbreaking AI Advancements and Ethical Concerns #64

Welcome to >260,000 global readers following AI, Future Trends & Digital Transformation!
Dr. Joerg STORM ∙ 5 LIKES
Meng Li
OpenAI CTO Mira Murati mentioned in an interview that GPT-3's intelligence is comparable to that of a child, GPT-4 to that of a smart high-school student, and the next-generation model (GPT-5), set to be released in 18 months, will reach a PhD level.
Claude 3.5 Sonnet has already pushed the countdown to AGI to 75%, becoming the first model to achieve a test score higher than the smartest human PhD.
In graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding ability (HumanEval), Claude 3.5 Sonnet unexpectedly set new state-of-the-art records.
It scored 90.4 in MMLU and 67.2 in GPQA.
This is also the first time an LLM has surpassed the 65% threshold in GPQA, reaching the level of the smartest human PhD.
It is worth noting that an average PhD scores 34% on GPQA, a PhD specialized in the relevant domain scores 65%, and Claude 3.5 Sonnet has clearly exceeded both.

🔮 Ilya’s resurrection; new news; copyright & AI; heat pumps, telepathic tech leaders, Xi's cities & API bills ++ #479

An insider’s guide to AI and exponential technologies
Hi, I’m Azeem Azhar. In this week’s Sunday edition, we explore the changing news landscape, the return of Ilya Sutskever, and “architecture of participation” for AI. Enjoy!
Azeem Azhar and Nathan Warren ∙ 43 LIKES
Gianni Giacomelli
Exponential View is very helpful as a principled content curator, with commentary that is always grounded in reliable data sources. Most journalism is opinion and analysis, not close to the curation end of the spectrum. Principled curation is a crucial part of a collective-intelligence knowledge ecosystem.
Nick Burnett
For me it's about consuming via trusted sources, who are often aggregators of the sheer volume of news. I deliberately stopped consuming the news via traditional means years ago due to the "if it bleeds, it leads" approach.

What happened in marketing: Google Ads makeover, AI influencers & Pinterest’s Gen-Z move

This week: Meta's AI avatars, Pinterest's Gen-Z loop & Google's ad love. All 3 are highlights, but there's a lot more on the plate this week. 🧃
H1 will be over in a few hours; I hope you have great plans for the next half. I do: the newsletter is about to get better, with new interviews and series. Internal News: I'm doing mid-week marketing recaps on Instagram Reels; you can follow the newsletter…
Jaskaran ∙ 6 LIKES

Steve Bannon's Prison Diary

What They Found In His Barbour Jacket
Day One. Well, folks, it finally happened. The deep state managed to throw me into the bowels of the beast. Today is the darkest day in American history, engineered by the Clinton-Biden Global Crime Family, the Soros upload into OpenAI, the Bavarian Illuminati, the Aspen Institute and its lizard alien overlords, the Bush-Obama-Hapsburg family, the Fugger…
Rick Wilson ∙ 460 LIKES
Bob McKeown
For Whom the Soap Drops
Bobby McGuire
I was Bannon's second-shift assistant briefly, back in his investment banking days. He was (and remains) hands-down the most unpleasant person I have ever met. I used to say that if Leona Helmsley and Hitler met and had a baby, it would be Bannon.

What Would You Do If You Had 8 Years Left to Live?

And Other State of AI Updates | 2024 Q2
This week you’re receiving two free articles. Next week, both articles will be for premium customers only, including why Sam Altman must leave OpenAI. I’ll be in Mykonos this week and Madrid next week. LMK if you’re here and want to hang out.
Tomas Pueyo ∙ 183 LIKES
EB
Your question reminds me of an interesting passage in the book The Maltese Falcon. Sam Spade tells Brigid that he was hired by a woman to find her husband, who had disappeared. Sam finds him and asks why he left. He tells Sam that one day, while walking to work, a girder fell from a crane and hit the sidewalk right in front of him. Other than a scratch from a piece of concrete that hit his cheek, he was unscathed. But the shock of the close call made him realize that he could die at any moment, and that if that's true, he wouldn't want to spend his last days going to a boring job and coming home every evening to have the same conversation and do the same chores. He tossed all that, started wandering the world, worked on a freight ship, etc. When Sam finds him, however, the man has a new family, lives in a house not far from his old one, and goes to a boring job every day. Brigid is confused and asks why he went back to the same routines. Sam says, "When he thought he could die at any moment, he changed his entire life. But when he realized over time that he wasn't about to die, he went back to what he was familiar with." And that's my long-winded answer to your question. Until I have solid evidence that AI, an asteroid, or Trump's election is going to end my life, I'll continue doing the same. Going by how many stupid mistakes ChatGPT makes, I'm not worried about it destroying humanity.
Ammon Haggerty
I try to keep an open mind to the progress and value of LLMs and GenAI, so I'm actively reading, following, and speaking with "experts" across the AI ideological spectrum. Tomas, I count you as one of those experts, but as of late you are drifting into the hype/doom side of the spectrum (aka Leopold Aschenbrenner-ism), and I'm not sure that's your intention. I recommend reading some Gary Marcus (https://garymarcus.substack.com/) as a counterbalance.
You did such a wonderful job taking an unknown existential threat (COVID) and grounding it in science and actionable next steps. I'd love to see the same Tomas applying rational, grounded advice to AI hype, fears, and uncertainties — it's highly analogous.
Here's my view (take it or leave it). LLMs started as research without a lot of hype. A big breakthrough (GPT-3.5) unlocked a step-change in generative AI capabilities. This brought big money and big valuations. The elephant in the room was that these models were trained on global IP. Addressing data ownership would be an existential threat to the companies raising billions. So they go on the defensive — hyping AGI as an existential threat to humanity and the almost certain exponential growth of these models (https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362). This is a red herring to avert regulatory and ethics probes into the AI companies' data practices.
Now the models have largely stalled out because you need exponential data to get linear model growth (https://forum.effectivealtruism.org/posts/wY6aBzcXtSprmDhFN/exponential-ai-takeoff-is-a-myth); see the rough scaling sketch below. The only way to push models forward is to access personal data, which is now the focus of all the foundation models. This has the VCs and big tech that have poured billions into AI freaked out and trying to extend the promise of AGI as long as possible so the bubble won't pop.
My hope is that we can change the narrative to the idea that we've created an incredible new set of smart tools, and that there's a ton of work to be done applying this capability to the myriad of applicable problems. This work needs engineers, designers, researchers, storytellers. In addition, we should address the elephant in the room and stop allowing AI companies to steal IP without attribution and compensation — they say it's not possible, but it is (https://www.sureel.ai/). We need to change the narrative to AI as a tool for empowerment rather than replacement.
Most of the lost jobs of the past couple of years have not been caused by AI replacement; they stem from fear and uncertainty (driven by speculative hype like this), and without clear forecasting, the only option is staff reduction. Let's promote a growth narrative and a vision for healthy adoption of AI capabilities.
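A back-of-the-envelope sketch of the scaling claim above, assuming loss follows a power law in dataset size with an illustrative exponent α ≈ 0.1 (a stand-in loosely in the range of published scaling-law fits, not a measured value):

```latex
% Assumed power-law scaling of loss L with dataset size D (illustrative):
\[
  L(D) = \left(\frac{D_c}{D}\right)^{\alpha}, \qquad \alpha \approx 0.1 .
\]
% Cutting the loss by a constant factor r then requires a fixed multiple k of data:
\[
  \frac{L(kD)}{L(D)} = k^{-\alpha} = r
  \quad\Longrightarrow\quad
  k = r^{-1/\alpha} .
\]
% Example: halving the loss (r = 1/2) with alpha = 0.1 requires
% k = 2^{10} = 1024 times the data. Equal steps of improvement therefore
% demand multiplicative data growth: roughly, exponential data for linear gains.
```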

What Apple's AI Tells Us: Experimental Models⁴

Siri versus the machine god?
I wanted to give some quick thoughts on the Apple AI (sorry, “Apple Intelligence”) release. I haven’t used it myself, and we don’t know everything about their approach, but I think the release highlights something important happening in AI right now: experimentation with four kinds of models - AI models, models of use, business models, and mental models…
Ethan Mollick ∙ 293 LIKES
Chris Barlow
When life gives you llms, make llmonade.
Rob Nelson
What a perfect summary of where we are: "the mere idea of AGI being possible soon bends everything around it." The question is how long that will continue when AGI is always 2-10 years away.
Self-driving cars, human cloning, and MOOCs were hyped, but they never had the initial success and huge investments of LLMs. I don't think there is a useful historical precedent for AGI.

Update #78: Accelerating Candy Crush Development and Neural Network Flexibility

Activision Blizzard scientists discuss AI's role in Candy Crush; researchers study neural networks' flexibility in fitting data.
Welcome to the 78th update from the Gradient! If you’re new and like what you see, subscribe and follow us on Twitter. Our newsletters run long, so you’ll need to view this post on Substack to see ev…
daniel bashir, Justin Landay, and Sharut Gupta ∙ 8 LIKES
M Flood
A general comment on the 1839 Awards. I've noticed a trend in commentary on AI developments - not this Substack, but in general - to move goalposts as a form of, as the kids say, "cope." People will celebrate a human tricking human judges in an AI image generation competition with a "real" image, or say that current AI is garbage and overhyped because it cannot meet one specialized use case that, if they understood how the systems work, they should not expect it to be capable of. People are still joking that image generators cannot draw hands at all, but if you use a paid image model like Midjourney you know that is a solved problem - in the rare case that a human hand is not rendered correctly, a user can fix it in seconds with inpainting. A good rule of thumb: whatever media is telling you an AI system "cannot" do, check on the frontier paid models, and in the open source literature: chances are someone has worked out how to do just the thing you said is impossible (though the real question is whether it can be done consistently at scale).
I definitely believe that businesses and VCs have every motive to overhype AI, and that they are, dishonestly or honestly, doing so (it is easier to hype what you actually believe). But if the hype is "artificial general intelligence by 2030," the alternative is not "no progress at all from now to 2030"; it's a vast range of possible futures, not one of which is that AI systems become less capable than they are today. The transformation doesn't come when the AI is better than the best human being at economically valuable task X, but when it's better than 80% of all human beings while being orders of magnitude faster and less expensive.

🔮 Genome architects; Chinese cables; sun & the data centre; rising rivals; hair juche & Schrodinger’s AI cat ++ #480

Hi all, welcome to the last Sunday edition of this month. I spent the week in NYC, where I had a number of meetings and speaking engagements. On the back of last week’s Sunday edition (which got picked up by Elon on X…) about the collapse of traditional news outlets, I had a number of conversations with some of the leading figures of New York & British …
Azeem Azhar and Nathan Warren ∙ 26 LIKES
blaine wishart
I'd love to join the 4 July chat.
--
"For LLMs to fulfil their GPT potential, we must inspire people to embrace them, not fear them."
Will recent SCOTUS rulings make this harder by putting too much power in the judiciary or put pressure on Congress and the executive branch to step up?
Simon Torrance
Could you help clarify a point: what does 'significantly impact' mean in the sentence above from "GPTs are GPTs": "... they could significantly impact over half the tasks in 46% of jobs"? Does it mean 'reduce completion time by more than half'? (i.e., is the definition of 'significantly impact' the same as 'exposure' per the chart?)

What I Read This Week…

Powell wants inflation to come down further before considering rate cuts, Apple announces a system-wide integration with ChatGPT, and China is emerging as a scientific superpower.
Watch All-In E183 ∙ Read our Creator Economy Deep Dive
Caught My Eye… The Labor Department reported Wednesday that CPI held steady at 3.3%. Later that day, Powell announced during June's FOMC meeting that he is looking for signs that inflation is moving towards the Fed's 2% target before considering rate cuts…
Chamath Palihapitiya ∙ 58 LIKES
Matt Sill
I appreciate the way you balance this post by providing contrasting perspectives on Apple's AI integration and China's prominence in academic literature.

AI is disrupting Customer Support. Salesforce is feeling the pinch.

As enterprises invest in AI proof-of-concepts for customer support, software companies are getting squeezed with acquisition costs rising. Will these once glorious companies ever get their glam back?
«The 2-minute version» Ever wondered what the behind-the-scenes operation of a customer service call looks like? Turns out that it is quite a complex operation where a customer has to be routed through the phone queues to the next available agent. The agent has to then verify the customer’s identity and subsequently match the customer’s queries with the …
Uttam Dey and Amrita Roy ∙ 100 LIKES
J.K. Lund
After chatting with GPT-4o, I am quite certain that most customer support jobs are on the way out.
Who wouldn't want an AI agent that's available 24/7, helpful, and friendly, with no need to navigate the dreaded phone tree?
Companies will find AI agents not only a great deal cheaper but better than humans. The biggest challenge will probably be integrating the technology into existing software.
Greg Aikens
Another masterful edification!! We as readers should be truly appreciative, for your talent is rare and is most luminous in the present moment.

Apple: AI for the Rest of Us

The big announcements from WWDC explained
Welcome to the Friday edition of How They Make Money. Over 120,000 subscribers turn to us for business and investment insights. In case you missed it: 💻 Microsoft: AI Inflection 🛡️ Cybersecurity Earnings 📊 Earnings Visuals (5/2024) 🤖 NVIDIA: Industrial Revolution
App Economy Insights ∙ 46 LIKES
Beachman
Love this broader take on the Apple AI developments. I could not agree more with everything you said above. Cheers.
John Shelburne
What happens to the electricity grid when 100 million devices send requests to GPU-powered servers? Did anyone ask that question at WWDC? This will be a slooow rollout. Texas will probably be patient zero, since a bunch of good ol' boys run ERCOT.


What happened in marketing: TikTok and Meta's AI fever + YT's Community notes push

This Week: Cannes marked the launch of a million updates, from retail media to OOH. Meanwhile, we have some crazy stats to worry about this week. 🧃
Cannes is over. If you were there, at least you got some rest. If you weren't, a handful of updates are below. As always, data to help with decision-making is at the end of the newsletter. Before you go further: attention is like money–you can waste it or invest it. If you invest it in my paid newsletter, you'll get dividends for years.
Jaskaran ∙ 6 LIKES


Apple Intelligence and the Race to Win Personal AI

Personal context, privacy, and an integrated approach give Apple a massive advantage
Dear subscribers, Today, I want to talk about Apple Intelligence and the personal AI race. Many people believe that Apple is behind in AI, but I think: Apple is best positioned to win the personal AI race, which will be a step-function improvement over today's LLMs.
Peter Yang ∙ 37 LIKES
Dan
Excellent!
I found this informative and interesting, and I look forward to trying out your AI prompt with context for my articles on my website -
You also helped convince me to buy more Apple stock!
I'm excited for their AI-enhanced MacBooks and the next iPhone, though I've never owned an iPhone before! Seems Apple will be hard to beat in the future, and should be a good long-term hold and a great earner!

💯 Surprisingly useful ChatGPT apps

New free ways to use ChatGPT for creative work
Quick Summary: To use ChatGPT more creatively, you can now pick from thousands of free apps called "GPTs." Each AI app has a special superpower. GPTs can help you create images, diagrams, and videos, or get help with negotiating, designing, or presenting. Read on for my favorites and how to make the most of these Custom GPTs.
Jeremy Caplan ∙ 94 LIKES
James La Forte
Thanks, I'll take a look at these. To be honest, I can't even get GPT-4o to work for basic proofreading of text. The false-positive rate is like 70%: it will show me 15 errors and display what it changed, but there is no error and no change. When I ask ChatGPT what it changed, it responds by apologizing and saying there was no change.
Try it yourself. Upload a 1500 word text document and ask it to proofread it for typos, errors, and mistakes. It can't do it reliably.
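For anyone who wants to run the same test, here is a minimal sketch using the official openai Python client; the model name, file name, and prompt wording are illustrative assumptions, not the commenter's exact setup:

```python
# Minimal sketch of the proofreading test described above. Assumes the
# official `openai` Python client (v1+) with OPENAI_API_KEY set in the
# environment; "draft.txt" stands in for a ~1,500-word document.
from openai import OpenAI

client = OpenAI()

with open("draft.txt", encoding="utf-8") as f:
    text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Proofread the following text for typos, grammatical errors, "
                "and mistakes. List every correction as 'original -> fixed'. "
                "Do not list sentences that need no change."
            ),
        },
        {"role": "user", "content": text},
    ],
)

print(response.choices[0].message.content)
# Comparing each reported 'original' against the source text exposes the
# false positives described above: entries where 'original' and 'fixed'
# are identical, or where 'original' never appears in the document.
```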
Tom Salmond
This is an incredibly interesting article, Jeremy - it highlights how powerful GPTs can be (as well as limitations to be aware of!). Thank you!

AI Firms Sold Their Souls to Steal Yours

A new story with an old plot
A blog about AI that's actually about people
I. Sins and surveillance
Did you know you can map how artificial intelligence companies do things into the seven deadly sins? Look: Pride: "No one but us knows how to bring prosperity and abundance to the world. We'll do that with a…
Alberto Romero ∙ 80 LIKES
Diego Pineda
Tech companies and governments have made the public believe that sharing their information is okay as long as they have nothing to hide (e.g., they are not doing anything illegal). Nothing could be further from the truth. This conflates having "nothing to hide" with being unaffected by surveillance. Everyone has personal information they wish to keep private, whether medical records, financial details, or personal communications. Privacy is not solely about hiding illegal activities.
Privacy is a form of power—the more others know about you, the more they can try to predict, influence, and interfere with your decisions and behavior. This undermines individual autonomy and democracy itself.
Pascal Montjovent
Your inspired essay serves as a wake-up call for our generation, but perhaps less so for the next ones. Having interacted with younger generations for the past 15 years, I've come to admire their sharp minds and adaptability.
I believe they are well-informed about the consequences of each interaction with their favorite apps.
Have we really seen the effectiveness of behavioral prediction generated by precise targeting of individuals? Are younger generations more avid consumers than their elders? I don't think so.
It's true that with the rise of totalitarian regimes worldwide, one might think governments could seize individual data and target operations against certain categories of people. But if that's really what we should fear, young people will know how to hack and "pollute" these databases.
Their tech-savviness and rebellious spirit are our best safeguards against dystopian scenarios. While vigilance is necessary, I trust in their ability to outsmart those who seek to control them.
The future is not written yet.

Last Week in AI #275 - Apple Intelligence, Luma AI's Dream Machine, Runway's Gen-3 Alpha, and more!

Apple Intelligence: every new AI feature coming to the iPhone and Mac, Luma AI’s Dream Machine expands access to generative AI video creation, Runway unveils new AI video model Gen-3 Alpha
Note: apologies for the newsletter being late this week; sickness delayed preparation of this post.
Top News
Apple Intelligence: every new AI feature coming to the iPhone and Mac
Last Week in AI ∙ 7 LIKES

Generative AI

The 2024 landscape across enterprise and vertical platforms.
Hey Readers, Welcome to New Economies, where we explore the latest tech trends and ecosystems. In this edition, we explore the progression of Generative AI over the last 18 months, uncovering the latest 270+ startups driving innovation across enterprise and vertical platforms as well as highlighting predictions of what we should expect in the near futur…
Ollie Forsyth ∙ 8 LIKES