Top 25 AI Articles on Substack

Latest AI Articles


Import AI 375: GPT-2 five years later; decentralized training; new ways of thinking about consciousness and AI

…Are today's AGI obsessives trafficking more in fiction than in fact?...
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. SPECIAL EDITION! GPT2, Five Years On: …A cold eyed reckoning about that time in 2019 when wild-eyed technologists created a (then) powerful LLM and used it to make some ver…
Jack Clark ∙ 43 LIKES
Mikhail Samin
> I've found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment
Sane policies would be "like" those, but this doesn't represent any of the ideas well and doesn't provide any justification for them.
Frontier AI labs are locked in a race; locally, they have to continue regardless of risks; they publicly say that they should be regulated (while lobbying against any regulation in private).
As a lead investor of Anthropic puts it (https://twitter.com/liron/status/1656929936639430657), “I’ve not met anyone in AI labs who says the risk [from a large-scale AI experiment] is less than 1% of blowing up the planet”.
Pointing at complicated processes around nuclear safety to argue that we shouldn't give the governments the power to regulate this field seems kind of invalid in this context.
If the CEO and many employees of your company believe there's a 10-90% chance of your product or the product of your competitors killing everyone on the planet, it seems very reasonable for the governments to step in. It's much worse than developing a nuclear bomb in a lab in the center of a populated city.
Stopping frontier general AI training worldwide until we understand it to be safe is different from shutting down all AI development (including beneficial safe narrow AI systems) "for a period of time". Similarly, a sane idea with licenses wouldn't be about all AI applications; it'd be about a licensing mechanism specifically for technologies that the companies themselves believe might kill everyone.
Ideally, right now there should be a lot of effort focusing on helping the governments to have visibility into what's going on in AI, increasing their capability to develop threat models, and developing their capacity to have future regulation be effective (such as with compute governance measures like on-chip licensing mechanisms that'd allow controlling what GPUs can be used for if some uses are deemed existentially unsafe).
If all the scientists developing nuclear powerplants at a lab estimated that there's a 10-90% chance that everyone will die in the next decades (probably as a result of a powerplant being developed), but wanted to race nonetheless because the closer you are to a working powerplant, the more gold it already generates, and others are also racing, we wouldn't find it convincing if a blog post from a lab's cofounder and policy chief argued that it's better for all the labs to self-govern and not have the governments have any capacity to regulate, impose licenses, or stop any developments.
Bernard
You mentioned the p(Doom) debate. I’m concerned that this debate may focus too much on the risk of extinction with AGI, without discussing the risk of extinction without AGI. For a proper risk assessment, that probability should also be estimated. I see the current p(Doom) as very high, assuming we make no changes to our current course. We are indeed making changes, but not fast enough. In this risk framing, AGI overall lowers the total risk, even if AGI itself carries a small extinction risk.
It’s a plausible story to me that we entered a potential extinction event a few hundred years ago when we started the Industrial Revolution. Our capability to affect the world has been expanding much faster than our ability to understand and control the consequences of our changes. If this divergence continues, we will crash. AI, and other new tools, give us the chance to make effective changes at the needed speed, and chart a safe course. The small AGI risk is worthwhile in the crisis we face.

Understanding the real threat generative AI poses to our jobs

There will be no robot jobs apocalypse, but there's still plenty to worry about. How *will* generative AI impact our jobs?
Hello, and welcome back to Blood in the Machine: The Newsletter. (As opposed to Blood in the Machine: The Book.) It’s a one-man publication that covers big tech, labor, and AI. It’s all free and public, but if you find this sort of independent tech journalism and criticism valuable, and you’re able, I’d be thrilled if you’d help back the project. Enough…
Brian Merchant ∙ 78 LIKES
J T
Small but important note -- even if you don't have a union, if you and your coworkers *collectively* take some form of action (e.g. a jointly signed letter to management expressing concern about poorly-implemented AI), that is still legally protected by labor law, so it would be illegal for your employer to engage in any kind of retaliation. It's called "concerted activity" if you wanna get technical about it, but the legal standard is basically "do something with at least one other person."
Jenni
> "Who stands to profit, after all, from the rise of job-stealing software that costs
> a monthly fee to license?"
As well as being about as reliable as a Yugo's transmission. And who has to fix the problems? PEOPLE! And just as people need time off for vacations and illnesses, software "takes time off" when it's down. These idiots who want to fire everyone and just use software don't understand that.
I know someone whose company went all out a few years ago with getting all sorts of time-saving, money-saving software. They laid off 25% of their workers. Now they have more problems than they can count, are far behind, and are spending much more money (on software) to achieve the same results. Last December for the first time ever they could not pay out bonuses because all their money went to fixing the software that was going to save them all that money. And this is "reputable" software, from companies like Salesforce, Oracle and Google. At conferences they talk to others in their industry who tell the same story, so it's not just them. When I asked my friend why they did this when it was clearly a losing move, she replied, "Because everyone else [i.e., their competitors] is doing it." Brilliant. Reminds me of Apple's infamous "Lemmings" commercial. It angered people who saw it, but Apple told the truth, and everyone hated them for it.

Introducing My First Substack Conversation Series on Artificial Intelligence

Featuring Kester Brewin and Numerous AI Experts
Hi Friends! I’m excited to launch my very own Substack called “Process This” (think blog that shows up in your inbox) where I’ll have more of an opportunity to connect with Homebrewed listeners around fascinating topics. To get started, I’m hosting a conversation series on
Tripp Fuller ∙ 22 LIKES
Candace Adams
Yay! Welcome to Substack, Tripp!

May 27

Import AI 374: China's military AI dataset; platonic AI; brainlike convnets

Plus, a poem about meeting aliens (well, AGI)
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe. Berkeley researchers discover a suspiciously military-relevant Chinese dataset:
Jack Clark ∙ 18 LIKES

What's all the noise in the AI basement?

🔥 Will Nvidia be overtaken by the new AI players?
Image Credit: Josh Brown, the Compound. Hey Everyone, Like Josh Brown recently said, Nvidia is now worth more than JPMorgan, Berkshire Hathaway and Meta stacked on top of each other. Ask yourself, does this sound right to you? Today I want to introduce my audience to
Michael Spencer and Claus Aasholm ∙ 49 LIKES
Oguz Erkan
It's amazing what's happening in the semiconductor sphere.
I don't think TSMC will be replaced by any other company as the leading advanced chip manufacturer in the world. Samsung is the closest but their yields are 20% lower than TSMC.
Nvidia, on the other hand, will be a giant in a league of its own. With deployed data centers projected to double in the next decade, and new data centers taking up to 1 million GPUs, it'll likely experience a demand leap and could again double in market value in the next 5-6 years.
Richard
Now over 3 trillion… and likely to rise more.

Seed is broken🌱, AI Startups By Country🌍, The AI Index Report📈

Welcome to The VC Corner, your weekly dose of Venture Capital and Startups to keep you up and running! 🚀 The Creator MBA: Build a lean, profitable internet business in 2024 The Creator MBA delivers a complete blueprint for starting, building, and sustaining a profitable Internet business.
Ruben Dominguez Ibar ∙ 16 LIKES
Rafael Campos
How would dilution play out with 2 seed rounds? Would this mean founders could lend after series A with less than 50% stake and funds should accept that easier?
Meng Li
Thank you for sharing the major news about Google's investment in Flipkart and DeepL's financing.

🤖 NVIDIA: Industrial Revolution

AI factories are reshaping the future of computing
Welcome to the Friday edition of How They Make Money. Over 100,000 subscribers turn to us for business and investment insights. In case you missed it: ♾ Meta: The Anti-Apple 💊 Pharma Titans Visualized 📊 Earnings Visuals (4/2024) 💰 Hedge Funds' Top Picks in Q1
App Economy Insights ∙ 67 LIKES

AI Agents Are Really AI Tools

Troubling the Agent vs. Tool Dichotomy
Greetings, Amazing Educating AI Readers!!! Before I begin, I want to thank my readers who have decided to support my Substack via paid subscriptions. I appreciate this vote of confidence. Your contributions allow me to dedicate more time to research, writing, and building Educating AI's network of contributors, resources, and materials.
Nick Potkalitsky ∙ 29 LIKES
Meng Li
The role of AI is shifting from tools to agents, though they still require human supervision to ensure ethics and effectiveness. In education, AI should serve as a tool to assist teachers rather than act as independent agents.
Michael Woudenberg
This is a good case study for why I like to entrust AI to specific tasks / roles vs. just 'trusting' AI as an amorphous, autonomous, system. I wrote about that in more detail here:

The Rise of the Software Creator

In the age of AI, software creators, like content creators, will emerge as the industry’s non-professional creative class.
No, this isn’t the end of software, but it is the beginning of a new software era. And I like the media industry analogy, but it’s an evolution, not a death. [1] Legacy media was dominated by high-budget mass producers of film, television, radio, and print. It was technically complex, expensive, and centrally controlled by a few big players. The internet revolutionized this, lowering costs, increasing access, and giving rise to giant “streaming” and user-generated content platforms.
Anu ∙ 48 LIKES
Aki Taha
I really enjoyed this post.
Feels like software is a part of the shift from mass to niche then; that it’s becoming more personalized.
And it seems that it’s also part of a big, sweeping trend in which people/users are losing trust in a powerful center; because they are not getting their needs met by that center.
So content, and media and software, yes, but you see the same decentralization (or “fractionalization” or “unbundling”, if you like) happening in:
- crypto — away from traditional, centralized finance
- at work
People opting out of or unbundling from traditional, centralized work, is spurring decentralization in these other realms; while those realms becoming more of an option into which to unbundle our work is in turn feeding the unbundling of work. Round and round it goes and I imagine the cycle accelerates as it becomes easier and more socially acceptable to do non-traditional, non-one-job-at-a-time work.
Melissa
I currently maintain two private repos with my blockchain blogging software. Software creation has definitely evolved in the innovative blockchain and AI space in recent years, especially when it comes to participating in creator-focused hackathons. I've witnessed many weekend hackathon projects become full-on businesses, and others become abandoned. I think today's creators are more empowered, with so many incentives from organizations willing to pay for the space to create and see what happens next.

📓 Make an AI notebook

Wonder Tools ✍️ Introducing Google’s NotebookLM
Google’s NotebookLM is a new free service that lets you apply AI to your own notes and documents. You can use it to surface new ideas and find fresh connections in your thoughts and research. Read on for how I’m using it, what I like most about it, its limitations, and two interesting alternatives.
Jeremy Caplan ∙ 59 LIKES
(AI + Real Life) x Purpose
It's an interesting tool, but I've found that if you just move the same documents into a folder in your Drive, you can prompt Gemini and tell it to look at the documents in that folder and it seems to be more intelligent / less limited to the documents themselves.
Tom Parish
Very good summary of the tool. I've been on their Discord server and using NotebookLM since last fall. It's been a work in process for sure. But I think they are on to something important. There is a major upgrade coming that we've all been patiently waiting for.
But even if Google's NotebookLM project does not become widely used, I have a hunch we're going to see the same concept for tools like this soon. We'll have to wait until Apple's Dev event in June to see what they will bring forward.
So learning how to use AI-based notebooks will become an important skill all of us will want to learn regardless of which vendor(s) we end up using.

Mistral Codestral is the Newest AI Model in the Code Generation Race

Plus updates from Elon Musk's xAI, several major funding rounds, and intriguing research publications.
Next Week in The Sequence: Mistral Codestral is the New Model for Code Generation Edge 401: We dive into reflection and refinement planning for agents. We review the famous Reflexion paper and the AgentVerse framework for multi-agent task planning. Edge 402:
Jesus Rodriguez ∙ 11 LIKES

The Wild World of Edtech Certifications: Establishing Proof of Impact

And more on upcoming events, Khan Academy and Microsoft, 2U, Common Sense Media AI Research, and Pearson AI Research.
🚨 Follow us on LinkedIn to be the first to know about new events and content! 🚨 The Wild World of Edtech Certifications: Establishing Proof of Impact By Natalia I. Kucirkova and Pati Ruiz Natalia I. K…
Sarah Morin, Alex Sarlin, and Ben Kornell ∙ 7 LIKES

What I Read This Week...

Salesforce stock drops more than 20% after releasing Q2 earnings, investor sentiment is shifting on generative AI, and a new drug offers the possibility of tooth regeneration in humans
Watch All-In E181 Read our latest deep dive into semiconductors Caught My Eye… Salesforce’s share price dropped more than 20% after releasing its Q2 2024 earnings, despite earnings falling just 0.3% below Wall Street analysts' expectations. What's going on? Two factors appear to be responsible for this decline. First, a slowing economy poses a risk to reve…
Chamath Palihapitiya ∙ 71 LIKES
Andrew
When will the deep dive on the creator economy be published?
Kevin
Outside of investor sentiment, the actual utility of LLMs is also losing its shine. I've been trying to find some use of AI in equity research, looking for the intersection of three circles in a Venn diagram: the quality of the prompts, the quality of the LLM, and the quality of the data. It's the last one that is lacking. I've tried uploading all annual reports and documents of a specific company to get a "company chatbot" to talk to, but the results are mixed.

AI Image Of Trump Only Time He Will Ever Be Happy

In reality, he is miserable and disliked.
In a blatant display of digital deception, an AI-generated image of former President Donald Trump smiling at a cookout surrounded by Black supporters has surfaced online. This doctored image is notable not only for its artificial nature but also because it may be the only time in his life that Trump will ever be happy.
God ∙ 126 LIKES
Jasmine Wolfe
The lead up to this election is going to be rife with AI generated disinformation😕
Rich M
I'd prefer to see him miserable for the rest of his days, few may they be.
But yeah, the pic is funny 😂

It's not artists who should fear AI

AI might be an extinction-level threat, but not in the way we think
As artists, we’ve been living in a state of perpetual fear and uncertainty since 2022. Sorry, I meant since 1422. Maybe earlier. But that constant sensation of being an endangered species intensified dramatically in 2022 with the arrival of generative AI.
Simon K Jones ∙ 122 LIKES
Johnathan Reid
Think you've set the correct acerbic tone for the many creatives amongst us. One of the issues to note which came up on a foresight exercise I participated in a while back is related to this scenario you flagged:
"The most useful AI tools will [be] assisting doctors and experts and researchers, or aiding those who have immense skill in specific areas..."
The problem here is that to reach the levels of expertise demanded of surgeons, lawyers, engineers etc requires years of training under the supervision of said experts. But if they begin to use AI assistants for reasons of cost, efficiency and availability, then there won't be any trainees coming up through the ranks to replace them. This means expertise degrading irrespective of long-term demand. It's not a great scenario.
Caz Hart
The joy also used to be in the research. Apparently, according to Google, we can make do with an AI synopsis, which might or might not bear any relationship to primary materials or reality. This is a concerning business decision, a dumb decision.

🔮 AI & creativity; exponential compute; peak GHGs, recycling concrete & marriage saves lives ++ #476

Hi, I’m Azeem Azhar. In this week’s edition, we explore AI’s capacity for divergent thinking and how this can help the scientific process. And in the rest of today’s issue: Need to know: Modest gains Daron Acemoglu’s latest research looks at the next decade of AI’s macroeconomic impact.
Azeem Azhar ∙ 37 LIKES
Charles Fadel
Thanks for bringing up Creativity, Azeem.
I'd like to be a bit more precise in language: Creativity is a number of different attributes, including but not limited to divergent thinking. My Center's extensive review of the learning sciences literature comes up with 5 subcompetencies for Creativity:
* Developing personal tastes, aesthetics, and style
* Generating and seeking new ideas
* Being comfortable with risks, uncertainty, and failure
* Connecting, reorganizing, and refining ideas into a cohesive whole
* Realizing ideas while recognizing constraints
In an AI world, incremental innovation is no longer sufficient - we agree, as AI can analogize/mimic and extrapolate, as humans do. The radical innovation side - imagination - is harder to do, but humans also need to wade through a lot of increments to come up with brilliance (Mozart etc. also did plenty of pedestrian work, with occasional flashes of brilliance).
This and the 9 other competencies are described in the book I shared with you as a pdf a few months ago: https://curriculumredesign.org/our-work/education-for-the-age-of-ai/. Happy to discuss when you turn your attention to the consequences for education.
Be well, Charles

Time to Calm Down About AI and Politics

It turns out generative AI isn't going to be the Meetup or Facebook or Twitter of this election cycle.
Hello to all my new subscribers! If you’re here because you read the tome I put out last week about how race, class, and identity were playing out in my home congressional district, as incumbent Rep. Jamaal Bowman and challenger George Latimer clash in the wake of the Israel-Palestine rift among Democrats, be forewarned: The Connector is my forum for re…
Micah L. Sifry ∙ 8 LIKES
Meng Li
Although AI tools increase efficiency, genuine creativity and effective movements remain the keys to success.
spiky
>Related: I’m intrigued by Sourcebase.ai, an AI platform designed for journalists, researchers, and professionals who need to make sense of massive amounts of source material.
Oh, you doogie-woogies. Aren't you supposed to be the ones making sense of things? If you're just repeating the sense an AI made of it, with a little bit of personal noodling at the start and the end, what do we need you for?
This has already sort of happened, though - it's pretty easy to tell when an article is just things the author saw on twitter or reddit with nothing of substance added. So no, AI isn't going to make a huge difference - lazy journalists have been able to rehash the internet instead of thinking for twenty years already...and boy have they.
You know who'll come out on top of all this? The few people who stay original, or at least rehash things other people haven't seen yet. It's them that the AI & thereafter everyone else will be rehashing.
It's not just "ideas" we're talking about. They who, like Shakespeare, invent words, lead -- it is by their clever fingers that our stage is painted bright and gaily furnish'd.

VC Says "Chaos" Coming for Startups, Ads, and Online Business as Generative AI Eats Web

If the web is an infrastructure built on paying and optimizing for referred traffic, what happens when that's diminished?
As generative AI products ingest more of the web — via deals like OpenAI’s with Vox and The Atlantic this week — the impact could be felt well beyond news publishers. “Chaos” is en route for the broader online economy, VC Joe Marchese of Human Ventures texted me this week, with the technology poised to reshape a decades-old system of online referrals an…
Alex Kantrowitz ∙ 38 LIKES
Oh That’s Good Company
I took a stab at equalizing this conceptually from a copyright standpoint this week https://ohthatsgoodcompany.substack.com/p/solving-generative-ais-copyright-problem
M Le Baron
The "eat rocks" and "put glue on pizza to make the cheese stick" answers are not a bug or a fluke. They're baked into the architecture of LLMs. There is even a story at MSN.com about how AI makes things up and is horrible at search. One industry expert said search should be downplayed, while a computer science brain posited it could not be fixed: it is intrinsic to LLM architecture.
Depending on how AI vendors react to this, regular ad serving economics look pretty stable for now.

Leopold Aschenbrenner - China/US Super Intelligence Race, 2027 AGI, & The Return of History

The trillion dollar cluster...
Chatted with my friend Leopold Aschenbrenner about the trillion dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project.
Dwarkesh Patel ∙ 21 LIKES
Nathan Lambert
Honestly this feels like he (and other intelligence explosion people) has a huge blind spot. The exponential is *one way* AGI *could* happen but it requires the log linear graph to continue without interruption.
With this logic, we would've already had "the big earthquake" a few times. There's nothing assuring that scaling keeps working; it's one narrow path. It feels sort of like smart people have been brainwashed into believing this.
It’s a good thought experiment, but it’s certainly not proven reality.
Oliver
I didn't get the whole way through the interview, but I'm very skeptical of Leopold's views.
> Six months ago, 10 GW was the talk of the town. Now, people have moved on. 10 GW is happening. There’s The Information report on OpenAI and Microsoft planning a $100 billion cluster.
This sounds very miscalibrated for two reasons.
1) Datacenters and power plants are very complicated pieces of infrastructure. You need various kinds of state approval and geological surveys and civil engineering contractors and so on, which means you need a full-time operations team running for several years. At the scale we're talking about, you start needing to buy first-of-a-kind power plant hardware that has to first be custom engineered. Even the ~$100mm datacenters at my workplace require a full-time team and take years to build out. (Also re: the later point that you can buy up power-hungry aluminium smelters in structural decline, I agree, except by a sort of efficient-markets argument, why hasn't this already been done for previous datacenters? What changes now? I feel like there's a Chesterton's fence here.)
2) Reading a report from The Information about $100bn of capex and taking it at face value is very questionable. That's multiple times Microsoft's annual capex budget; if they do spend that much there will be signs of it that Wall St analysts will start seeing many months in advance.
> For the average knowledge worker, it’s a few hours of productivity a month. You have to be expecting pretty lame AI progress to not hit a few hours of productivity a month.
I think very few knowledge workers would pay $100/mo not just because it's a huge amount, but because of differentiated pricing: the marginal value of the $100 model isn't enough above the $10 model for most individuals to justify.
That said, I think if these models get good enough we will see a lot of enterprise / site licenses for LLMs that could go up to this price, because an employer is willing to pay more for worker productivity than workers are. But I wouldn't be surprised to see a lot of the more valuable contracts go to wrapper LLMs run by LexisNexis and Elsevier affiliates and the like, because competition can commoditise LLMs, leaving the producer surplus flowing to the IP owners.
But taking a step back, it feels weird to me to assume that you'd raise copilot prices to fund $100bn in capex. If you need $100bn that bad just save it up or sell some bonds or take a GPU-secured loan from a consortium of banks; there is no principled reason to risk losing the copilot market by raising prices too early.
> The question is, when does the CCP and when does the American national security establishment realize that superintelligence is going to be absolutely decisive for national power? This is where the intelligence explosion stuff comes in, which we should talk about later.
Neither establishment is asleep at the wheel in this particular case. Obama called "Superintelligence" by Bostrom one of his favourite books 10 years ago, and with the amount Americans have been publicly fearmongering about Chinese LLMs you can bet it's a common conversation topic in Beijing. Rather I think the apparent lack of action is just because nobody is quite sure what to do with this situation, as it's so hard to forecast. What concretely would you have politicians do? Disclaimer: I know very little about China, but I have studied Chinese history and live in Hong Kong.
> There are reports, I think Microsoft. We'll get into it.
The press release linked to on the word "reports" discusses G42, which as far as I can tell is using Azure cloud compute, and which as far as I can tell is an "AI" consulting company. I could be wrong though - the chair of G42 is famously the UAE's top spy, and I don't know what to make of that. But I worked for an LLM research lab in SF for a while, so I think my BS radar is reasonably well calibrated.
> My primary argument is that if you’re at the point where this thing has vastly superhuman capabilities — it can develop crazy bioweapons targeted to kill everyone but the Han Chinese, it can wipe out entire countries, it can build robo armies and drone swarms with mosquito-sized drones — the US national security state will be intimately involved.
What the actual #$%(&?
I realise these are just hypotheticals, but the fact that CCP ethnic bioweapons are a salient idea indicates to me that Leopold should read a book or two about Chinese history. Of course I can't prove that nobody in Beijing wants this, but it conflicts so sharply with my understanding of the PRC state that I can't help but call BS.

No. 40: New Drawings and Keeping an Eye Out for Ellipses

Plus some other announcements and AI related things to share.
Here are some of my latest daily “draw your world” journal pages. I have been filling up a new handmade sketchbook and enjoying the freedom of doing so without the pressures of sharing on social media. I am posting here and there, but with rumors spreading that Meta is using artists’ artwork to generate AI, and it being pretty much impossible to know if…
40 LIKES
Pamela Matthews
It’s probably helpful to lots of people but these posts about something *seemingly* simple like ellipses are so amazing to me as a not-total-novice-anymore but still very much in the early learning phase. Thank you, Sam!
Carolyn
Thanks for the tip about Cara.
I always enjoy your meet-ups…sometimes I can’t be on the whole time as it may be during work hours, but I do follow along on the replay. Thank you for taking the time to plan and host them! The addition of guest artists is interesting too.

The traffic impact of AI Overviews

An analysis of 1,675 keywords shows AIOs could reduce organic clicks
A warm welcome to 115 new Growth Memo readers who joined us since last week! Join the ranks of Amazon, Microsoft, Google and 12,500 other Growth Memo readers:
Kevin Indig ∙ 20 LIKES
Barry Adams
Great analysis dude. And perfectly timed as I’m presenting on this on Wednesday - I’ll definitely be citing you!
Salvador Lorca
I wrote about this article of yours, which I liked:

Google's AI-Generated Search Results Keep Citing The Onion

Plus other stories!
Hi all, Parker here. A Google search for the phrase “how many rocks should I eat each day” returned an AI-generated result citing “UC Berkeley geologists” who suggest people eat “at least one small rock a day.” It turns out that the actual source of this information was
Parker Molloy ∙ 141 LIKES
Sean Corfield
I'm so glad I switched from Google to Bing years ago...
Yes, Bing uses AI to provide summarized results as well, but at least it clearly annotates which sites/pages it drew parts of the summary from and provides a list of footnotes.
As for the misinformed American public... I despair! How are so many people -- a majority or near-majority on those issues -- so out of touch with reality? Is the news media doing such a poor job, or is it the politically motivated media just overwhelming any good news coming out of the mainstream?
Terry Cook
Yet no comments on the CIA protocols on usage of purple vs. green ink on document markups?

A New Assessment Design Framework for the AI Era: Reflections Part 1

An Introduction to "Stop Grading Essays, Start Grading Chats"©
Today, I'm excited to share a novel method of student evaluation that I implemented four separate times during the ‘23-24 school year. Despite its limited trials, this method has provided meaningful insights into how teachers can continue to develop critical thinking in their students in the age of AI by placing themselves within the “process” of learni…
Mike Kentz ∙ 9 LIKES
Amanda L Price
I am a designer finishing up an asynchronous online course that provides some video-based instruction on using Copilot to guide students in identifying research questions and finding core topics and sources for a scaffolded final paper. We want to have students submit their chat history, and I am looking for an easy way for the instructor (who is an adjunct and not involved with the course design, so I don't want to place a heavy AI burden on them) to review them.
I am curious if you could share the chat exemplars you provided to students for higher- and lower-quality chats. I'd love to use them with our students and invite them to evaluate them, and then later to evaluate their own chats prior to submitting.
Terry underwood
It all depends on how one hovers eh:)? I want to respond thoughtfully to this. Wow. I love the pushback. I think I might use this convo as a basis for a post and get your name out there to my subscribers. I’ve got not a lot (200) but a lot who read regularly and are well placed in education. I’d love to see some subscribe to you.

Generative AI Unicorn Capitulation

Adept and Humane are looking for buyers.
Next Week in The Sequence: Edge 399: Our series about autonomous agents continues with an overview of external aid planning. We dive into IBM’s Simplan method for planning in LLMs and review the Langroid framework for autonomous agents. Edge 340: A must read about AlphaFold 3 which expanded capabilities to predict many of the life’s m…
Jesus Rodriguez ∙ 20 LIKES

Inside Canva: Coaches not managers, giving away your Legos, and running profitably | Cameron Adams (co-founder and CPO)

Brought to you by: • WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs • Attio—The powerful, flexible CRM for fast-growing startups • Vanta—Automate compliance. Simplify security. — Cameron Adams is the co-founder and chief product officer of Canva. Canva is one of the world’s most valuable private software companies, used by 95% of Fortune 500 companies. Since its launch in 2013, Canva has grown to over 150 million monthly users in more than 190 countries, generating $2.3 billion in annual revenue. Prior to Canva, Cameron ran a design consultancy, worked at Google on Google Wave, and founded the email startup Fluent. He is also an author of five web design books and a regular speaker at global conferences. In our conversation, we discuss:
Lenny Rachitsky ∙ 57 LIKES