Saw that DGX Spark vs Mac Mini M4 Pro benchmark plot making the rounds (via LMSYS, lmsys.org/blog/2025-10-…).
Thought I’d share a few notes as someone who actually uses a Mac Mini M4 Pro and has been tempted by the DGX Spark.
First of all, I really like the Mac Mini. It’s probably the best computer I’ve ever owned. For local inference with open-weight LLMs, it works great (the plot above captures that well). I regularly run the gpt-oss-20B model on it.
That said, I would not fine-tune even small LLMs on it since it gets very hot. The DGX Spark probably targets that type of sustained workload. (From those who have one, any thoughts on the noise and heat levels?)
The other big thing the DGX Spark gets you is CUDA support. If you use PyTorch, that's pretty much essential: the MPS backend on macOS is still unstable, and fine-tuning runs often fail to converge.
E.g., see github.com/rasbt/LLMs-f… and github.com/rasbt/LLMs-f…
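For what it's worth, the usual pattern for writing PyTorch code that runs on either box is a device-selection fallback (a minimal sketch; assumes a recent PyTorch build, and note that `PYTORCH_ENABLE_MPS_FALLBACK=1` only papers over missing ops, not the convergence issues above):

```python
import torch

# Pick the best available backend: CUDA (DGX Spark / any NVIDIA GPU),
# then MPS (Apple Silicon, e.g. the Mac Mini M4 Pro), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Any tensor/model created on `device` now targets that backend.
x = torch.randn(4, 8, device=device)
print(device, x.shape)
```

The same training script then works on both machines, which makes it easier to prototype on the Mac and move to CUDA for the real runs.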
I also like the Spark’s form factor (hey, it really appeals to the Mac Mini user in me).
But for the same money, I could probably buy about 4000 A100 cloud GPU hours, and I keep debating which would be the better investment.
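The back-of-the-envelope math behind that number (both figures are assumptions I'm plugging in, not quotes; cloud A100 rates vary a lot by provider):

```python
# Rough cost comparison: buy a DGX Spark vs. rent cloud A100 hours.
spark_price_usd = 4000        # assumed Spark price
a100_rate_usd_per_hr = 1.00   # assumed cloud A100 hourly rate

cloud_hours = spark_price_usd / a100_rate_usd_per_hr
cloud_days = cloud_hours / 24

print(f"~{cloud_hours:.0f} A100 hours (~{cloud_days:.0f} days of nonstop compute)")
```

At those assumed rates it's about 4,000 hours, i.e. over five months of continuous A100 time, which is why the decision isn't obvious.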
Sure, I could also build/get a multi-GPU desktop. I had a Lambda system with four GTX 1080 Ti cards back in 2018, but it was too loud and hot for my office. And if I have to move it to another room and SSH into it anyway, I might as well use cloud GPUs instead?