Top 25 AI Articles on Substack

Latest AI Articles


Doing Stuff with AI: Opinionated Midyear Edition

AI systems have gotten more capable and easier to use
Every six months or so, I write a guide to doing stuff with AI. A lot has changed since the last guide, while a few important things have stayed the same. It is time for an update. This is usually a serious endeavor, but, heeding the advice of Allie Miller
Ethan Mollick ∙ 312 LIKES
Kevin James O’Brien
I appreciate your posts. And look forward to playing with these projects this summer.
This spring I had to pivot as a high school English teacher trying to pitch the value of poetry to students. I was seeing writing that I suspected had AI help, to say the least, so I asked my students to write with integrity as they experimented with ChatGPT and poetry - asking big questions as to the role of the poet in an AI world.
They had to credit AI where credit was due - indicating AI writing in bold font - as they wrote poems and reflections on…
Why write poetry?
Does poetry matter?
What’s the point if large language models can generate sonnets and sestinas in seconds?
They read various Ars Poeticas by poets and wrote their own. They researched and presented more than 90 poets and cross-checked with ChatGPT. This fact-checking is essential as AI churns out words, words, words - some true, yet some false. Discernment is an essential skill. They concluded that writers write with an authentic voice that reflects their lived experience - and context is everything: historical, biographical, political, and social.
Echoing Ross Gay, writing serves as an “evident artifact” to thinking, to struggling,
to investigating, to enduring,
to living - and to inspiring
by sharing with the world.
As educators, we will have to ask big questions as we rethink teaching and learning with this technology.
We must consider our students and their future as they develop their respective relationship with writing and reading.
Right now, more questions than answers.
And as Rilke writes:
“I want to beg you, as much as I can, dear sir, to be patient toward all that is unsolved in your heart and to try to love the questions themselves like locked rooms and like books that are written in a very foreign tongue. Do not now seek the answers, which cannot be given you because you would not be able to live them. And the point is, to live everything. Live the questions now. Perhaps you will then gradually, without noticing it, live along some distant day into the answer.”
“Writing is the evident artifact of some kind of change.” - Ross Gay
From slow stories podcast.
Daniel Nest
I especially love some of the "fun" use cases. A great way to dip your toe into working with AI while having fun in the process.

Import AI 375: GPT-2 five years later; decentralized training; new ways of thinking about consciousness and AI

…Are today's AGI obsessives trafficking more in fiction than in fact?...
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. SPECIAL EDITION! GPT2, Five Years On: …A cold eyed reckoning about that time in 2019 when wild-eyed technologists created a (then) powerful LLM and used it to make some ver…
Jack Clark ∙ 47 LIKES
Mikhail Samin
> I've found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment
Sane policies would be "like" those, but this doesn't represent any of the ideas well and doesn't provide any justification for them.
Frontier AI labs are locked in a race; locally, they have to continue regardless of risks; they publicly say that they should be regulated (while lobbying against any regulation in private).
As a lead investor of Anthropic puts it (https://twitter.com/liron/status/1656929936639430657), “I’ve not met anyone in AI labs who says the risk [from a large-scale AI experiment] is less than 1% of blowing up the planet”.
Pointing at complicated processes around nuclear safety to argue that we shouldn't give the governments the power to regulate this field seems kind of invalid in this context.
If the CEO and many employees of your company believe there's a 10-90% chance of your product or the product of your competitors killing everyone on the planet, it seems very reasonable for the governments to step in. It's much worse than developing a nuclear bomb in a lab in the center of a populated city.
Stopping frontier general AI training worldwide until we understand it to be safe is different from shutting down all AI development (including beneficial safe narrow AI systems) "for a period of time". Similarly, a sane idea with licenses wouldn't be about all AI applications; it'd be about a licensing mechanism specifically for technologies that the companies themselves believe might kill everyone.
Ideally, right now there should be a lot of effort focusing on helping the governments to have visibility into what's going on in AI, increasing their capability to develop threat models, and developing their capacity to have future regulation be effective (such as with compute governance measures like on-chip licensing mechanisms that'd allow controlling what GPUs can be used for if some uses are deemed existentially unsafe).
If all the scientists developing nuclear powerplants at a lab estimated that there's a 10-90% chance that everyone will die in the next decades (probably as a result of a powerplant developed), but wanted to race nonetheless because the closer you are to a working powerplant, the more gold it already generates, and others are also racing, we wouldn't find it convincing if a blog post from a lab's cofounder and policy chief argued that it's better for all the labs to self-govern and not have the governments have any capacity to regulate, impose licenses, or stop any developments.
Bernard
You mentioned the P(Doom) debate. I’m concerned that this debate may focus too much on the risk of extinction with AGI, without discussing the risk of extinction without AGI. For a proper risk assessment, that probability should also be estimated. I see the current p(Doom) as very high, assuming we make no changes to our current course. We are indeed making changes, but not fast enough. In this risk framing, AGI overall lowers the total risk, even if AGI itself carries a small extinction risk.
It’s a plausible story to me that we entered a potential extinction event a few hundred years ago when we started the Industrial Revolution. Our capability to affect the world has been expanding much faster than our ability to understand and control the consequences of our changes. If this divergence continues, we will crash. AI, and other new tools, give us the chance to make effective changes at the needed speed, and chart a safe course. The small AGI risk is worthwhile in the crisis we face.

Understanding the real threat generative AI poses to our jobs

There will be no robot jobs apocalypse, but there's still plenty to worry about. How *will* generative AI impact our jobs?
Hello, and welcome back to Blood in the Machine: The Newsletter. (As opposed to Blood in the Machine: The Book.) It’s a one-man publication that covers big tech, labor, and AI. It’s all free and public, but if you find this sort of independent tech journalism and criticism valuable, and you’re able, I’d be thrilled if you’d help back the project. Enough…
Brian Merchant ∙ 81 LIKES
J T
Small but important note -- even if you don't have a union, if you and your coworkers *collectively* take some form of action (e.g. a jointly signed letter to management expressing concern about poorly-implemented AI), that is still legally protected by labor law, so it would be illegal for your employer to engage in any kind of retaliation. It's called "concerted activity" if you wanna get technical about it, but the legal standard is basically "do something with at least one other person."
Jenni
> "Who stands to profit, after all, from the rise of job-stealing software that costs
> a monthly fee to license?"
As well as being about as reliable as a Yugo's transmission. And who has to fix the problems? PEOPLE! And just as people need time off for vacations and illnesses, software "takes time off" when it's down. These idiots who want to fire everyone and just use software don't understand that.
I know someone whose company went all out a few years ago with getting all sorts of time-saving, money-saving software. They laid off 25% of their workers. Now they have more problems than they can count, are far behind, and are spending much more money (on software) to achieve the same results. Last December for the first time ever they could not pay out bonuses because all their money went to fixing the software that was going to save them all that money. And this is "reputable" software, from companies like Salesforce, Oracle and Google. At conferences they talk to others in their industry who tell the same story, so it's not just them. When I asked my friend why they did this when it was clearly a losing move, she replied, "Because everyone else [i.e., their competitors] is doing it." Brilliant. Reminds me of Apple's infamous "Lemmings" commercial. It angered people who saw it, but Apple told the truth, and everyone hated them for it.

What's all the noise in the AI basement?

🔥 Will Nvidia be overtaken by the new AI players?
Image Credit: Josh Brown, the Compound. Hey Everyone, Like Josh Brown recently said, Nvidia is now worth more than JPMorgan, Berkshire Hathaway and Meta stacked on top of each other. Ask yourself, does this sound right to you? Today I want to introduce my audience to
Michael Spencer and Claus Aasholm ∙ 53 LIKES
Oguz Erkan
It's amazing what's happening in the semiconductor sphere.
I don't think TSMC will be replaced by any other company as the leading advanced chip manufacturer in the world. Samsung is the closest but their yields are 20% lower than TSMC.
Nvidia, on the other hand, will be a giant in a league of its own. With deployed data centers projected to double in the next decade, and new data centers taking up to 1 million GPUs, it'll likely experience a demand leap and could again double in market value in the next 5-6 years.
Richard
Now over 3 trillion… and likely to rise more.

can everyone kindly shut the fuck up about AI

The robots aren't coming, but the people who can't shut the fuck up about them are already here.
Please please please can everyone just take a moment to shut the fuck up about AI? It’s so stupid and it’s barely even started and already everyone can’t stop nutting about how cool it is BUT IT IS NOT COOL. "It's fascinating and it's going to change every single aspect of our lives forever!!!" Do you even hear yourself??? It’s a fucking chatbot just like Alexa, Siri, and the exception that proves the rule, the GOAT itself SmarterChild.
Alex Dobrenko` ∙ 245 LIKES
Seth Werkheiser
The real AI was the first overall pick in the NBA draft in 1996.
Sara Schroeder
I snort-laughed a couple times. Read this just as I’m having angst over more IG issues and watching visual artists move to Cara. *sigh*. I can’t keep up. Needed the levity and adore the use of Krab to illustrate your point. My son used to pronounce it “Kay-rab” - emphasizing the letter K, because of its obvious importance.

Introducing My First Substack Conversation Series on Artificial Intelligence

Featuring Kester Brewin and Numerous AI Experts
Hi Friends! I’m excited to launch my very own Substack called “Process This” (think blog that shows up in your inbox) where I’ll have more of an opportunity to connect with Homebrewed listeners around fascinating topics. To get started, I’m hosting a conversation series on
Tripp Fuller ∙ 23 LIKES
Candace Adams
Yay! Welcome to Substack, Tripp!

May 27

Import AI 374: China's military AI dataset; platonic AI; brainlike convnets

Plus, a poem about meeting aliens (well, AGI)
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. Berkeley researchers discover a suspiciously military-relevant Chinese dataset:
Jack Clark ∙ 18 LIKES

How the AI Solopreneur “AI-ified” his proven $450,000 course-building framework

Hey there, Digital Writers! Want to know the fastest path to go from 0 to 6-figure creator? Divorce your income from time, energy, and effort. And launch an online course. Digital courses are one of the best ways to distribute and monetize your knowledge at scale. Once a course is set up, it keeps making you money as long as you continue driving traffic to…
Nicolas Cole and Dickie Bush ∙ 22 LIKES

Four Singularities for Research

The rise of AI is creating both crisis and opportunity
As a business school professor, I am keenly aware of the research showing that business school professors are among the top 25 jobs (out of 1,016) whose tasks overlap most with AI. But overlap doesn’t necessarily mean replacement; it means disruption and change. I have written extensively about how a big part of my job as a professor - my role as an edu…
Ethan Mollick ∙ 371 LIKES
Dov Jacobson
It was humbling when I first used ChatGPT to review a peer review I had written. The article being reviewed presented the efficacy of a health device based on a randomized controlled trial sponsored by the device's manufacturer. Of course I was alert to bias, and discovered a few minor instances. But the LLM mildly mentioned a discrepancy between the control and treatment conditions that had been worded so slyly as to evade human detection. Pulling on the thread that the LLM exposed uncovered a deceptive practice that invalidated their conclusions, and (after they protested "This is the way it is always done!") a large body of previous sponsored research.
[ If you are curious: the researcher required subjects to "follow the manufacturer's instructions" for each device. In practice, treatment group subjects were told to comply with the ideal duration, frequency and manner of use specified in the printed instructions. But control group subjects were given a simpler competing device that offered no enclosed instructions and thus were given no performance requirements at all for participation in the research. ]
Ezra Brand
In my personal experience, this aspect is key: "[M]ore researchers can benefit because they don’t need to learn specialized skills to work with AI. This expands the set of research techniques available for many academics."
For the first time in my life, over the past year, I've been able to do serious text analysis on relatively large texts, with Python. (Specifically, on the Talmud, which is ~1.8 million words.)
The barrier to entry for doing meaningful coding is now far lower.
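The kind of corpus-scale text analysis described above can now be done with a short, self-contained script. As a minimal sketch (the function name and thresholds here are illustrative, not from the original post), a word-frequency pass over even a ~1.8-million-word text runs in seconds on one machine:

```python
from collections import Counter
import re

def top_words(text, n=10, min_len=4):
    """Return the n most frequent words of at least min_len letters.

    A toy sketch of corpus-scale frequency analysis: lowercase the
    text, extract alphabetic tokens, and tally them with a Counter.
    """
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if len(w) >= min_len)
    return counts.most_common(n)

sample = "the law of the land is the law"
print(top_words(sample, n=2, min_len=3))  # [('the', 3), ('law', 2)]
```

For a real corpus you would read the text from a file and likely filter stopwords, but the shape of the analysis stays this simple.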

Seed is broken🌱, AI Startups By Country🌍, The AI Index Report📈

Welcome to The VC Corner, your weekly dose of Venture Capital and Startups to keep you up and running! 🚀 The Creator MBA: Build a lean, profitable internet business in 2024 The Creator MBA delivers a complete blueprint for starting, building, and sustaining a profitable Internet business.
Ruben Dominguez Ibar ∙ 17 LIKES
Rafael Campos
How would dilution play out with 2 seed rounds? Would this mean founders could land after Series A with less than a 50% stake, and funds should accept that more easily?
Meng Li
Thank you for sharing the major news about Google's investment in Flipkart and DeepL's financing.

🤖 NVIDIA: Industrial Revolution

AI factories are reshaping the future of computing
Welcome to the Friday edition of How They Make Money. Over 100,000 subscribers turn to us for business and investment insights. In case you missed it: ♾ Meta: The Anti-Apple 💊 Pharma Titans Visualized 📊 Earnings Visuals (4/2024) 💰 Hedge Funds' Top Picks in Q1
App Economy Insights ∙ 70 LIKES
Aidan M. Newkirk
I’m curious to know how you gain your information 🤔

No. 40: New Drawings and Keeping an Eye Out for Ellipses

Plus some other announcements and AI related things to share.
Here are some of my latest daily “draw your world” journal pages. I have been filling up a new handmade sketchbook and enjoying the freedom of doing so without the pressures of sharing on social media. I am posting here and there, but with rumors spreading that Meta is using artists’ artwork to generate AI, and it being pretty much impossible to know if…
51 LIKES
Pierre Stanley Baptiste
This is mind-blowing... this shifts my perspective when looking at objects. Checking out your Skillshare class now... Thank you Samantha
Pamela Matthews
It’s probably helpful to lots of people but these posts about something *seemingly* simple like ellipses are so amazing to me as a not-total-novice-anymore but still very much in the early learning phase. Thank you, Sam!

AI Agents Are Really AI Tools

Troubling the Agent vs. Tool Dichotomy
Greetings, Amazing Educating AI Readers!!! Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I appreciate this vote of confidence. Your contributions allow me to dedicate more time to research, writing, and building Educating AI's network of contributors, resources, and materials.
Nick Potkalitsky ∙ 30 LIKES
Tom Daccord
Very instructive and helpful article. Thanks, Nick!
Meng Li
The role of AI is shifting from tools to agents, though they still require human supervision to ensure ethics and effectiveness. In education, AI should serve as a tool to assist teachers rather than act as independent agents.

The Rise of the Software Creator

In the age of AI, software creators, like content creators, will emerge as the industry’s non-professional creative class.
No, this isn’t the end of software, but it is the beginning of a new software era. And I like the media industry analogy, but it’s an evolution, not a death. [1] Legacy media was dominated by high-budget mass producers of film, television, radio, and print. It was technically complex, expensive, and centrally controlled by a few big players. The internet revolutionized this, lowering costs, increasing access, and giving rise to giant “streaming” and user-generated content platforms.
Anu ∙ 52 LIKES
Aki Taha
I really enjoyed this post.
Feels like software is a part of the shift from mass to niche then; that it’s becoming more personalized.
And it seems that it’s also part of a big, sweeping trend in which people/users are losing trust in a powerful center; because they are not getting their needs met by that center.
So content, and media and software, yes, but you see the same decentralization (or “fractionalization” or “unbundling”, if you like) happening in:
- crypto — from traditional to centralized finance
- at work
People opting out of or unbundling from traditional, centralized work, is spurring decentralization in these other realms; while those realms becoming more of an option into which to unbundle our work is in turn feeding the unbundling of work. Round and round it goes and I imagine the cycle accelerates as it becomes easier and more socially acceptable to do non-traditional, non-one-job-at-a-time work.
Toby Schachman
Great post! I like how you talk about “short form”, “long form”, and other dimensions. Would be cool to see a “map” of these emerging software creators. Looking forward to you revisiting this topic as it plays out!

AI Image Of Trump Only Time He Will Ever Be Happy

In reality, he is miserable and disliked.
In a blatant display of digital deception, an AI-generated image of former President Donald Trump smiling at a cookout surrounded by Black supporters has surfaced online. This doctored image is notable not only for its artificial nature but also because it may be the only time in his life that Trump will ever be happy.
God ∙ 132 LIKES
Jasmine Wolfe
The lead up to this election is going to be rife with AI generated disinformation😕
Rich M
I'd prefer to see him miserable for the rest of his days, few may they be.
But yeah, the pic is funny 😂

📓 Make an AI notebook

Wonder Tools ✍️ Introducing Google’s NotebookLM
Google’s NotebookLM is a new free service that lets you apply AI to your own notes and documents. You can use it to surface new ideas and find fresh connections in your thoughts and research. Read on for how I’m using it, what I like most about it, its limitations, and two interesting alternatives.
Jeremy Caplan ∙ 60 LIKES
(AI + Real Life) x Purpose
It's an interesting tool, but I've found that if you just move the same documents into a folder in your Drive, you can prompt Gemini and tell it to look at the documents in that folder and it seems to be more intelligent / less limited to the documents themselves.
Tom Parish
Very good summary of the tool. I've been on their Discord server and using NotebookLM since last fall. It's been a work in process for sure. But I think they are on to something important. There is a major upgrade coming that we've all been patiently waiting for.
But even if Google's NotebookLM project does not become widely used, I have a hunch we're going to see the same concept for tools like this soon. We'll have to wait until Apple's Dev event in June to see what they will bring forward.
So learning how to use AI-based notebooks will become an important skill all of us will want to learn regardless of which vendor(s) we end up using.

The Wild World of Edtech Certifications: Establishing Proof of Impact

And more on upcoming events, Khan Academy and Microsoft, 2U, Common Sense Media AI Research, and Pearson AI Research.
🚨 Follow us on LinkedIn to be the first to know about new events and content! 🚨 The Wild World of Edtech Certifications: Establishing Proof of Impact By Natalia I. Kucirkova and Pati Ruiz Natalia I. K…
Sarah Morin, Alex Sarlin, and Ben Kornell ∙ 7 LIKES

What I Read This Week...

Salesforce stock drops more than 20% after releasing Q2 earnings, investor sentiment is shifting on generative AI, and a new drug offers the possibility of tooth regeneration in humans
Watch All-In E181 Read our latest deep dive into semiconductors Caught My Eye… Salesforce’s share price dropped more than 20% after releasing its Q2 2024 earnings, despite earnings falling just 0.3% below Wall Street analysts' expectations. What's going on? Two factors appear to be responsible for this decline. First, a slowing economy poses a risk to reve…
Chamath Palihapitiya ∙ 75 LIKES
Andrew
When will the deep dive on the creator economy be published?
Kevin
Outside of investor sentiment, the actual utility of LLM's is also losing its shine. Trying to find some use of AI in equity research. Looking for the intersection of 3 Venn diagrams: The quality of the prompts / The quality of the LLM / The quality of the data. It's the last one that is lacking. I've tried uploading all annual reports and documents of a specific company to get a "company Chatbot" to talk to, but the results are mixed.

Mistral Codestral is the Newest AI Model in the Code Generation Race

Plus updates from Elon Musk's xAI , several major funding rounds and intriguing research publications.
Next Week in The Sequence: Mistral Codestral is the New Model for Code Generation Edge 401: We dive into reflection and refinement planning for agents. We review the famous Reflexion paper and the AgentVerse framework for multi-agent task planning. Edge 402:
Jesus Rodriguez ∙ 11 LIKES

The Case for Bio/Acc

Is Biological Acceleration Relevant Under Short AI Timelines?
One criticism I often see levied against bio/acc - biological acceleration - goes along the lines of, “If AI timelines are short, then what does it even matter?” We build AGI sometime in the late 2020s-early 2030s, it rapidly scales to ASI, and then everything ends - ascension, death, or simulation. Biology is “slow”. Inherently, because chemical reactions …
Anatoly Karlin ∙ 31 LIKES
sean pan
The case for PauseAI actually means that there might be time for bio/acc if we can slow it down. The other case is that, in the words of an AI researcher I recently talked to:
"If things keep moving as they have been, we will not finish in time. Then we will all die."
"I think this is bad."
Chris Bartlett
Great article. I can't really argue with any of it, though as with everything, I worry about the wrong people being in charge of it.

It's not artists who should fear AI

AI might be an extinction-level threat, but not in the way we think
As artists, we’ve been living in a state of perpetual fear and uncertainty since 2022. Sorry, I meant since 1422. Maybe earlier. But that constant sensation of being an endangered species intensified dramatically in 2022 with the arrival of generative AI.
Simon K Jones ∙ 139 LIKES
Johnathan Reid
Think you've set the correct acerbic tone for the many creatives amongst us. One of the issues to note which came up on a foresight exercise I participated in a while back is related to this scenario you flagged:
"The most useful AI tools will [be] assisting doctors and experts and researchers, or aiding those who have immense skill in specific areas..."
The problem here is that to reach the levels of expertise demanded of surgeons, lawyers, engineers etc requires years of training under the supervision of said experts. But if they begin to use AI assistants for reasons of cost, efficiency and availability, then there won't be any trainees coming up through the ranks to replace them. This means expertise degrading irrespective of long-term demand. It's not a great scenario.
Caz Hart
The joy also used to be in the research. Apparently, according to Google, we can make do with an AI synopsis, which might or might not bear any relationship to primary materials or reality. This is a concerning business decision, a dumb decision.

🔮 AI & creativity; exponential compute; peak GHGs, recycling concrete & marriage saves lives ++ #476

Hi, I’m Azeem Azhar. In this week’s edition, we explore AI’s capacity for divergent thinking and how this can help the scientific process. And in the rest of today’s issue: Need to know: Modest gains Daron Acemoglu’s latest research looks at the next decade of AI’s macroeconomic impact.
Azeem Azhar ∙ 37 LIKES
Charles Fadel
Thanks for bringing up Creativity, Azeem.
I'd like to be a bit more precise in language: Creativity is a number of different attributes, including but not limited to divergent thinking. My Center's extensive review of the learning sciences literature comes up with 5 subcompetencies for Creativity:
* Developing personal tastes, aesthetics, and style
* Generating and seeking new ideas
* Being comfortable with risks, uncertainty, and failure
* Connecting, reorganizing, and refining ideas into a cohesive whole
* Realizing ideas while recognizing constraints
In an AI world, incremental innovation is no longer sufficient - we agree, as AI can analogize/mimic and extrapolate, as humans do. The radical innovation side - imagination - is harder to do, but humans also need to wade through a lot of increments to come up with brilliance (Mozart etc. also did plenty of pedestrian work, with occasional flashes of brilliance).
These and the other 9 competencies are described in the book I shared with you as a pdf a few months ago: https://curriculumredesign.org/our-work/education-for-the-age-of-ai/ Happy to discuss when you turn your attention to the consequences for education.
Be well, Charles

Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History

The trillion dollar cluster...
Chatted with my friend Leopold Aschenbrenner about the trillion dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project.
Dwarkesh Patel ∙ 24 LIKES
Nathan Lambert
Honestly this feels like he (and other intelligence explosion people) has a huge blind spot. The exponential is *one way* AGI *could* happen, but it requires the log-linear graph to continue without interruption.
With this logic, we would’ve already had “the big earthquake” a few times. There’s nothing ensuring scaling keeps working; it’s one narrow path. Feels sort of like smart people have been brainwashed to believe this.
It’s a good thought experiment, but it’s certainly not proven reality.
Oliver
I didn't get the whole way through the interview, but I'm very skeptical of Leopold's views.
> Six months ago, 10 GW was the talk of the town. Now, people have moved on. 10 GW is happening. There’s The Information report on OpenAI and Microsoft planning a $100 billion cluster.
This sounds very miscalibrated for two reasons.
1) Datacenters and power plants are very complicated pieces of infrastructure. You need various kinds of state approval and geological surveys and civil engineering contractors and so on, which means you need a full-time operations team running for several years. At the scale we're talking about, you start needing to buy first-of-a-kind power plant hardware that has to first be custom engineered. Even the ~$100mm datacenters at my workplace require a full-time team and take years to build out. (Also re: the later point that you can buy up power-hungry aluminium smelters in structural decline, I agree, except by a sort of efficient-markets argument, why hasn't this already been done for previous datacenters? What changes now? I feel like there's a Chesterton's fence here.)
2) Reading a report from The Information about $100bn of capex and taking it at face value is very questionable. That's multiple times Microsoft's annual capex budget; if they do spend that much there will be signs of it that Wall St analysts will start seeing many months in advance.
> For the average knowledge worker, it’s a few hours of productivity a month. You have to be expecting pretty lame AI progress to not hit a few hours of productivity a month.
I think very few knowledge workers would pay $100/mo not just because it's a huge amount, but because of differentiated pricing: the marginal value of the $100 model isn't enough above the $10 model for most individuals to justify.
That said I think if these models get good enough we will see a lot of enterprise / site licenses for LLMs that could go up to this price, because an employer is willing to pay more for worker productivity than workers. But I wouldn't be surprised to see a lot of the more valuable contracts go to wrapper LLMs run by LexisNexis and Elsevier affiliates and the likes, because competition can commoditise LLMs leaving the producer surplus flowing to the IP owners.
But taking a step back, it feels weird to me to assume that you'd raise copilot prices to fund $100bn in capex. If you need $100bn that bad just save it up or sell some bonds or take a GPU-secured loan from a consortium of banks; there is no principled reason to risk losing the copilot market by raising prices too early.
> The question is, when does the CCP and when does the American national security establishment realize that superintelligence is going to be absolutely decisive for national power? This is where the intelligence explosion stuff comes in, which we should talk about later.
Neither establishment is asleep at the wheel in this particular case. Obama called "Superintelligence" by Bostrom one of his favourite books 10 years ago, and with the amount Americans have been publicly fearmongering about Chinese LLMs you can bet it's a common conversation topic in Beijing. Rather I think the apparent lack of action is just because nobody is quite sure what to do with this situation, as it's so hard to forecast. What concretely would you have politicians do? Disclaimer: I know very little about China, but I have studied Chinese history and live in Hong Kong.
> There are reports, I think Microsoft. We'll get into it.
The press release linked on the word "reports" discusses G42, which as far as I can tell is an "AI" consulting company running on Azure cloud compute. I could be wrong, though: the chair of G42 is famously the UAE's top spy, and I don't know what to make of that. But I worked at an LLM research lab in SF for a while, so I think my BS radar is reasonably well calibrated.
> My primary argument is that if you’re at the point where this thing has vastly superhuman capabilities — it can develop crazy bioweapons targeted to kill everyone but the Han Chinese, it can wipe out entire countries, it can build robo armies and drone swarms with mosquito-sized drones — the US national security state will be intimately involved.
What the actual #$%(&?
I realise these are just hypotheticals, but the fact that CCP ethnic bioweapons are a salient idea suggests to me that Leopold should read a book or two about Chinese history. Of course I can't prove that nobody in Beijing wants this, but it conflicts so sharply with my understanding of the PRC state that I can't help but call BS.

A New Assessment Design Framework for the AI Era: Reflections Part 1

An Introduction to "Stop Grading Essays, Start Grading Chats"©
Today, I'm excited to share a novel method of student evaluation that I implemented four separate times during the ‘23-24 school year. Despite its limited trials, this method has provided meaningful insights into how teachers can continue to develop critical thinking in their students in the age of AI by placing themselves within the “process” of learni…
Mike Kentz ∙ 10 LIKES
Amanda L Price
I am a designer finishing up an asynchronous online course that provides video-based instruction on using Copilot to guide students in identifying research questions and finding core topics and sources for a scaffolded final paper. We want students to submit their chat history, and I am looking for an easy way for the instructor (who is an adjunct not involved with the course design, so I don't want to place a heavy AI burden on them) to review the chats.
I am curious whether you could share the chat exemplars you provided to students for higher- and lower-quality chats. I'd love to use them with our students and invite them to evaluate them, then later evaluate their own chats prior to submitting.
Terry underwood
It all depends on how one hovers, eh :)? I want to respond thoughtfully to this. Wow, I love the pushback. I think I might use this convo as the basis for a post and get your name out to my subscribers. I've got not a lot (200), but they read regularly and are well placed in education. I'd love to see some subscribe to you.

VC Says "Chaos" Coming for Startups, Ads, and Online Business as Generative AI Eats Web

If the web is an infrastructure built on paying and optimizing for referred traffic, what happens when that's diminished?
As generative AI products ingest more of the web — via deals like OpenAI’s with Vox and The Atlantic this week — the impact could be felt well beyond news publishers. “Chaos” is en route for the broader online economy, VC Joe Marchese of Human Ventures texted me this week, with the technology poised to reshape a decades-old system of online referrals an…
Alex Kantrowitz ∙ 39 LIKES
Oh That’s Good Company
I took a stab at equalizing this conceptually from a copyright standpoint this week https://ohthatsgoodcompany.substack.com/p/solving-generative-ais-copyright-problem
M Le Baron
The "eat rocks" and "put glue on pizza to make the cheese stick" advice is not a bug or a fluke. It's baked into the architecture of LLMs. There is even a story at MSN.com about how AI makes things up and is horrible at search. One industry expert said search should be downplayed, while a computer scientist posited it could not be fixed: it is intrinsic to LLM architecture.
Depending on how AI vendors react to this, regular ad serving economics look pretty stable for now.