NVIDIA is rolling out a major update to DLSS with a new Transformer Model that promises both smarter visuals and leaner memory usage.
Ditching the older CNN-based approach, NVIDIA's DLSS 4 now integrates Vision Transformers, advanced AI networks designed to analyze entire image frames in parallel and enhance pixel generation across multiple frames.
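To get an intuition for that difference, here is a minimal, purely conceptual NumPy sketch (not NVIDIA's implementation, and every shape and variable name here is an illustrative assumption): a small convolution only mixes information within a local window, while a single self-attention step lets every pixel of the frame influence every other pixel in parallel.

```python
# Conceptual sketch only, NOT DLSS code: local convolution vs. global self-attention.
import numpy as np

H, W, C = 8, 8, 4                       # tiny illustrative "frame": 8x8 pixels, 4 channels
frame = np.random.rand(H, W, C)

# CNN-style local mixing: each output pixel only "sees" its 3x3 neighborhood.
def conv3x3(x, kernel):
    out = np.zeros((H, W))
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    for i in range(H):
        for j in range(W):
            window = pad[i:i + 3, j:j + 3, :]           # local receptive field
            out[i, j] = np.tensordot(window, kernel, axes=3)
    return out

# Transformer-style global mixing: every pixel attends to all pixels at once.
def self_attention(x):
    tokens = x.reshape(H * W, C)                        # one token per pixel
    q, k, v = tokens, tokens, tokens                    # identity projections, for brevity
    scores = q @ k.T / np.sqrt(C)                       # (HW, HW) all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the whole frame
    return (weights @ v).reshape(H, W, C)

local_out = conv3x3(frame, np.random.rand(3, 3, C))
global_out = self_attention(frame)
print(local_out.shape, global_out.shape)                # (8, 8) vs (8, 8, 4)
```

The real DLSS networks are far larger and run as optimized tensor-core kernels, but the contrast in how far information can travel within one layer is the point of the sketch.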
So, what’s the real-world impact? According to NVIDIA’s latest SDK 310.3.0 and its updated Programming Guide, the new DLSS Transformer model cuts VRAM usage by around 20% across all resolutions. That’s especially great news for gamers stuck with 8GB GPUs or less, who’ve struggled with recent memory-hungry titles.
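As a rough back-of-envelope, here is what a ~20% cut could look like in absolute terms. The baseline figures below are hypothetical placeholders chosen for illustration; the actual per-resolution numbers are listed in NVIDIA's DLSS Programming Guide.

```python
# Back-of-envelope estimate of a ~20% smaller DLSS memory footprint.
# ASSUMED_BASELINE_MB values are HYPOTHETICAL, used only to show the scale of savings.
ASSUMED_BASELINE_MB = {"1080p": 150, "1440p": 250, "4K": 400}
REDUCTION = 0.20   # ~20% reduction, per the SDK 310.3.0 notes cited above

for res, mb in ASSUMED_BASELINE_MB.items():
    saved = mb * REDUCTION
    print(f"{res}: ~{mb - saved:.0f} MB used by DLSS, ~{saved:.0f} MB freed for the game")
```

On an 8GB card where every few hundred megabytes matters, that reclaimed headroom can be the difference between a texture pool that fits and one that spills into system memory.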
These Vision Transformers aren’t just lighter; they’re smarter. The new model doubles the parameter count and uses 4x the rendering compute, allowing DLSS to produce sharper, more stable images and better ray-reconstructed scenes. With this level of tech, DLSS becomes more than just an upscaling tool; it’s practically an AI co-pilot for your GPU.
While the performance gains are real, some gamers remain skeptical. A chunk of the community believes this is NVIDIA’s way of justifying smaller VRAM capacities in increasingly expensive GPUs. But despite the cynicism, few deny that DLSS 4’s Transformer model has real potential to improve the gaming experience, especially for those not rocking top-tier hardware.
Though the DLSS Transformer model is out of beta, full deployment across supported titles is expected soon. For now, mid-range gamers can look forward to better visuals, faster frames, and slightly less heat from the endless VRAM wars.