Have a feeling that Meta's next model will be more closed, less open, given how it is building its new AI team
A superpower of open source is leveraging a larger community of developers, beyond your immediate team, to help improve your tech and eventually accelerate past closed-source alternatives
Zuck has obviously determined that his core Meta team plus the Llama community is not good enough. This is largely because DeepSeek, Qwen, and others are releasing models that are both more capable and more openly licensed, pulling community gravity away from Llama
Now that Meta is opting for a centralized "superstar" strategy with $100M annual packages, away from the open-source community strategy, the logical next step is to close off future Llama models to recoup that investment. The P&L calculus is completely different when running an open-source lab versus a closed-source one
When all is said and done, the US may have 0 open-source AI labs!