The painter analogy for batch size tradeoffs clicked for me in a way the usual GPU utilisation charts don't. Your point about model labs backfilling idle capacity with training runs is exactly what I found when looking into who actually captures the money in inference. The companies that build and run models get to amortise hardware across workloads nobody else has. I dug into the broader economics here: medium.datadriveninvest…
Mar 5 at 10:10 PM