Mistral launches Voxtral TTS, completing its open-source speech pipeline
Mistral launched a text-to-speech model that clones a voice from 3 seconds of audio and runs on 3GB of RAM.
The models, the launches, the funding rounds, and the quiet policy decisions that actually shape how artificial intelligence gets built and deployed.
Meta built a model that predicts how your brain responds to what you see and hear. They also own the world’s largest advertising platform.
Lovable just launched AI-powered penetration testing for $100. A traditional pentest costs up to $50,000. That gap deserves some scrutiny.
RAM prices are up, AI models are getting larger, and your hardware is struggling to keep up. ComfyUI just shipped an update that helps with all three problems at once.
NVIDIA found a way to make AI video understanding up to 19x faster by teaching models to ignore the parts of a video that do not matter.
Firecrawl launched a new endpoint that lets AI agents interact with a live browser session after scraping — logins, infinite scrolls, and dynamic content included.
Seedance 2.0 launched to big hype, but the initial user reactions are far from positive.
GLM-5 arrived as one of the most compelling budget alternatives to frontier coding models. Users on Reddit are now reporting that its quality is quietly degrading.
xAI launched a $10/month SuperGrok Lite tier. The catch: it puts capabilities that used to be free behind a paywall.
Google just found a way to compress AI memory by 6x with zero quality loss — meaning longer conversations, faster responses, and cheaper inference on the same hardware.