Nearly every industry goes through the same cycle: bundling, then unbundling, then re-bundling what was unbundled, over and over again.
The modern data stack created an enormously fragmented network of tools to manage new data use cases. Now Databricks is looking to re-bundle it.
Databricks' philosophy for addressing modern data use cases is quite different from Snowflake's, which relies on an extensive partner network to do much of the heavy lifting. We saw that contrast in both companies' announcements this week (at conferences held on exactly the same days).
Which approach wins out is still not clear. But the blistering pace of development (and deployment) of language and diffusion models will tell us very quickly—particularly with MongoDB also getting into vector search as of last week.