Hugging Face Forms Dedicated PyTorch/MPS Team for Apple Silicon
Hugging Face forms a dedicated PyTorch/MPS team targeting 100× Apple Silicon perf gains — torch.sort and torch.multinomial are already MPS-native; flex attention is next.
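What "MPS-native" means in practice: these ops now execute on Apple's GPU instead of silently falling back to CPU. A minimal sketch, assuming PyTorch 2.x; it falls back to CPU when the MPS backend is unavailable so it also runs off Apple Silicon.

```python
import torch

# Use the MPS device on Apple Silicon; fall back to CPU elsewhere.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# torch.sort and torch.multinomial run natively on the chosen device,
# with no host round-trip for these ops on MPS.
x = torch.rand(1024, device=device)
sorted_x, indices = torch.sort(x)

probs = torch.tensor([0.1, 0.2, 0.7], device=device)
samples = torch.multinomial(probs, num_samples=5, replacement=True)

print(bool((sorted_x[1:] >= sorted_x[:-1]).all()))  # sorted output
```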
DeepSeek V4-Pro open-sourced with 1.6T parameters, a 1M-token context window, and a 10× KV cache reduction over V3.2, multiplying inference concurrency roughly 10× on the same hardware; it hit #1 on Hugging Face trending within 43 minutes.
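Back-of-envelope math for why a smaller KV cache translates into concurrency: the cache, not the weights, is usually what caps how many long-context sequences fit in memory. All model numbers below are illustrative assumptions, not DeepSeek V4-Pro's published configuration.

```python
# Back-of-envelope KV-cache sizing. The layer/head/dim numbers are
# illustrative assumptions, not DeepSeek V4-Pro's actual config.
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Two tensors (K and V) cached per layer, per token, in fp16/bf16.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

baseline = kv_cache_bytes(layers=64, kv_heads=16, head_dim=128, seq_len=128_000)
reduced = baseline // 10        # the claimed 10x KV cache reduction

budget = 80 * 1024**3           # e.g. 80 GB of accelerator memory for cache
print(budget // baseline, "concurrent sequences before")
print(budget // reduced, "concurrent sequences after")
```

With a fixed memory budget, sequences-in-flight scale inversely with per-sequence cache size, so a 10× smaller cache fits roughly 10× more concurrent requests.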
Hugging Face crosses 1.2 million hosted AI apps, making it likely the world's largest AI app store by application count.
Hugging Face Inference Providers undercuts OpenRouter with zero markup on 200+ models, emerging as the preferred open-model routing layer for cost-sensitive deployments.
Hugging Face's ml-intern autonomously runs the full ML research-to-training loop, lifting GPQA from 10% to 32% on a 1.7B model in under 10 hours and beating Codex on HealthBench by 60%.
ml-intern reads arXiv, cleans datasets, runs SFT/GRPO, diagnoses failures, and iterates — pushing GPQA from 10% to 32% in under 10 hours for roughly $1 of compute.
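The read/clean/train/diagnose/iterate loop described above can be sketched as a simple control loop. Every helper below is a hypothetical stub standing in for one of ml-intern's stages; its real interfaces are not public, and the score curve is a toy.

```python
# Hypothetical sketch of an autonomous research-to-training loop.
# Every function here is a stub; ml-intern's real components are not public.

def read_papers(topic):             # stand-in for arXiv retrieval
    return [f"notes on {topic}"]

def prepare_dataset(notes):         # stand-in for dataset cleaning
    return {"examples": len(notes) * 100}

def train(dataset, method):         # stand-in for an SFT or GRPO run
    return {"method": method, "examples": dataset["examples"]}

def evaluate(model, round_):        # toy curve: score improves per round
    return 0.10 + 0.11 * round_

def diagnose(score):                # stand-in for failure analysis
    return "add data" if score < 0.32 else "done"

score = 0.0
notes = read_papers("reasoning")
for round_, method in enumerate(("SFT", "GRPO"), start=1):
    dataset = prepare_dataset(notes)
    model = train(dataset, method=method)
    score = evaluate(model, round_)
    if diagnose(score) == "done":
        break
    notes += read_papers("failure modes")  # iterate on diagnosed gaps
print(f"final score: {score:.2f}")
```

The point of the sketch is the closed loop: evaluation feeds diagnosis, and diagnosis decides whether to gather more material or stop, rather than running a fixed one-shot pipeline.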