Better stance classification, diffusion speed, and robustness
Researchers tackle emotion in text, speed up diffusion models, and harden LLMs against adversarial attacks.
I'm seeing solid gains in math reasoning and RAG efficiency from this batch of papers.
Bespoke-Minicheck is now live in Ollama, giving us a practical way to catch hallucinations in local inference.
Hybrid inference lets you offload heavy models to the cloud while keeping your local tooling intact.
Agentic workflows and aggressive quantization are reshaping how we think about local inference.
We're seeing a shift from discrete attacks to understanding the continuous topology of model safety, plus a new way to distill reasoning models without losing accuracy.
New research tackles evolutionary AI control and energy-efficient, sustainable inference.
SPARTA exposes multi-hop QA flaws, while Duel-Evolve optimizes without ground-truth labels.
New benchmarks tackle RAG complexity, while research into attention scaling and modality collapse offers fresh insights for optimizing local AI infrastructure.
New benchmarks reveal that auditing tools struggle to detect hidden behaviors and synthetically trained models.