The most comprehensive AI hub: fresh stories, workflows, prompts, and deals. Updated daily.

Discussions
Notable voices
“LocalLLaMA thread blowing up: someone ran DeepSeek-V3 on a Mac Studio with 192GB RAM. 40 tok/s at Q4.”
“r/MachineLearning consensus: The new Llama 4 Scout model is surprisingly good at code. Rivals GPT-4o on HumanEval.”