TLDR: LOL AI’s upending everything. Now what?
This morning, I woke up at 6:30am. I played with my son, made him and my wife breakfast, dropped him off at preschool and walked to work. Then I started using the massively powerful touchscreen computer in my pocket to play some music, book a private room for work meetings, confirm some appointments and use a search engine that indexes the world’s information to work on some client briefs. Then I used a generative AI tool to make a picture of a UFO flying over a farm in 60 seconds or so, and had another AI tool make me CliffsNotes-style summaries of some complicated technical documents my team is working on.
That’s pretty weird!
I’ve written about this before, but we’re in the middle of what I call the Great Weirdening. A whole bunch of quirky societal things are happening at the same time as some genuine light-speed-ahead technical innovations, and it’s resulting in… well, things getting weird. We might not have Mars colonies yet, or even just universal health care for Americans, but we have amazingly advanced artificial intelligence and biotechnology tools that are upending how we live and work. CRISPR and LLMs, after all, are on a continuum.
Here are my questions:
When the dust settles and the marketing hype subsides, what will tools like ChatGPT and Bard be really good at? What will they turn out to be truly bad at? What will the lasting work, educational and personal use of AI tools be like? (I’m thinking a mixture of the rise of personal computers and a lot of digital personal assistant use cases, but I’m all ears for other ideas.)
When will generative image tools like Midjourney or DALL-E become easier to use for the masses? When will prompt creation and editing become more accessible?
How many cultural producers with substantial economic capital (movie studios, streamers, television production houses, book publishers, to name a few) will switch to AI-centric content production models, even if doing so results in substantially worse products, for the sake of stockholder returns?
How many smaller media outlets will be able to substantially increase their ability to create video, image, audio and text content because AI tools let a five-person team do the work of a 50-person team? How will that benefit niche and indie creators?
What will the societal ramifications be of the epic bullshitification that will result from anyone being able to make a realistic video or photo of just about anything? If you can create a fool-the-public hoax, libel or slander clip without access to specialized tools, what will that mean for public trust and society at large?
That’s what I’m thinking about. How about you?