
Even though we're living through insane times, it's surprising how much of it was understandable if you had an econ brain and thought about the equilibrium. This holds true across both pessimistic (AI is a parlour trick) and optimistic (AI is a trickster demon) schools of thought, which is frustrating because it makes you seem an outsider to both. These are mine, not exhaustive and in no particular order:

- Companies act like companies, whether that's OpenAI or Anthropic. The incentives predict outcomes (e.g. acceleration) better than vibes or essays

- It'll be hard for labs outside the biggies to compete; Chinese open source will be competitive with the prior generation of models but less polished

- AI's effects, once it gets good enough, will be most pronounced in the labor market, but individual productivity will grow faster than TFP. The real world is incredibly complex and can't be easily wrestled into an automatable form

- Diffusion is hard!

- The steam engine needed us to retool the factories; the same principle applies here. You work around the parts that don't work

- i.e. the real world is the bottleneck on progress, whether chip manufacturing or energy

- So the bottlenecks will shift within those components as investment opportunities, from chips to memory etc., and economic growth will be impacted at the margin (which is still big, btw)

- The default feeling among the general populace is distrust of tech, this will continue

- Natsec is foremost, as governments understand the stakes. Governments are slow, but they aren't going to stand by and let another power develop this without some control

- AI safety is important, but safety against omnicide, safety against employment loss, and safety against bias are all different things

- The best safety is in shipping safe products to users. Market demand is a far stronger force than any alternative

- AGI is likely to look a lot more like a company, or an economy, or some weird collective entity built from the model(s), a harness, and more, not a singular entity

- So managing it will similarly need lessons from economics and org theory, not just CS

- Drawing a clean boundary around what an agent can't do is equivalent to setting a target for what it will, shortly, be able to do. This doesn't mean it can do everything; that's not how capabilities work

- Writing is hard for AI, as is personality

- The slow death of hallucinations will come mostly from trench warfare

- This isn't magic; it's training and data

- Because our AIs are incredibly good at learning patterns from data. This holds true for all types of data. AI is an anything-to-anything machine.

- While this holds, capability grows. Fast. Exponential resources for linear capability increases is the rule

- AI video hasn't really been a big hit, yet. Sora isn't a replacement for Meta.

There have been plenty of surprises along the way. For me: the coup, the sheer speed of AI getting better at coding even when you knew it would get better, the incredible success of RL, the nonemergence of new art, 4o love, how much multimodality would help reasoning, even the sheer fundraising prowess. But the broad strokes of the actors' actions and the corporate motions were fairly predictable if you thought of the models as amazing tools rather than birthing God. It's a win for fairly orthodox econ thought.

Feb 26 at 5:58 AM