

AMD opened CES 2026 with a statement about the current and future use cases of AI, told through AI itself. Before CEO and chair Lisa Su took the stage, attendees watched an AI-produced film that set the tone for the keynote and for the year ahead. It welcomed the audience into a moment in human history where AI is quickly changing how we work, learn, move, and connect, and where the idea of what’s 'impossible' is being rewritten.
To bring the idea to life, AMD partnered with Tool, a content and AI studio, working closely with AMD’s in-house creative and brand teams along with a large, global team of artists, technologists, and storytellers. Their collaboration reflected the film’s core idea: advancing AI for all is a shared human effort.
The film follows the first-person perspective of the viewer using AI across healthcare, transportation, personal computing, urban planning, communications, and education, showing how AI is already part of everyday progress and how it will advance humanity for the good of all.
The story itself was conceived by creatives, with AI used as the primary production tool. That choice was intentional. It allowed the team to push the creative, move fast, and show what AI can enable when it’s accessible and in the hands of storytellers. It also let the team showcase future-state AI use cases that don’t yet exist and would have been difficult to shoot with live-action production. The result wasn’t just a film about AI, but a film made possible by it.
While the visuals were generated using an AI-first pipeline built on a large suite of AI tools, traditional editorial, compositing, colour, motion graphics, music, and finishing were tightly integrated from the very beginning. This hybrid approach let the team maintain consistency and control while raising production value and delivering on the brand story.
The film was designed for the keynote’s immersive seven-screen environment, with each screen operating at very high resolution and in a non-standard aspect ratio. The central screen was closer to a 32:9 format - a canvas that current AI image and video tools are not designed to support, either compositionally or technically.
Beyond resolution, the multi-screen configuration introduced both technical and storytelling challenges. Content needed to remain coherent across screens while avoiding visual overload, distraction, or narrative fragmentation.
To solve for this, Tool developed a series of creative and technical approaches, including a custom outpainting pipeline to extend AI-generated imagery into extreme aspect ratios, and VFX-driven integration of the outpainted content to expand peripheral vision while preserving the core narrative on the centre screen.
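Tool’s actual pipeline has not been published, but the geometry of the problem is straightforward to illustrate. The sketch below (a hypothetical helper, not Tool’s code) shows how a standard 16:9 frame might be centred on a 32:9 canvas, yielding the left and right strips an outpainting model would then need to fill:

```python
def outpaint_regions(src_w, src_h, target_ratio=32 / 9):
    """Centre a source frame on a wider canvas and return the canvas
    size plus the side strips left for an outpainting model to fill.

    Returns ((canvas_w, canvas_h), [(left, top, right, bottom), ...]).
    """
    canvas_w = round(src_h * target_ratio)
    if canvas_w <= src_w:
        # Source already covers the target ratio; nothing to outpaint.
        return (src_w, src_h), []
    x0 = (canvas_w - src_w) // 2  # left edge of the centred frame
    strips = [
        (0, 0, x0, src_h),                    # strip to the left
        (x0 + src_w, 0, canvas_w, src_h),     # strip to the right
    ]
    return (canvas_w, src_h), strips

# A 1920x1080 (16:9) frame extended to a 32:9 canvas
size, strips = outpaint_regions(1920, 1080)
print(size, strips)
```

For a 1080-pixel-tall frame, a 32:9 canvas is 3840 pixels wide, so each side strip is 960 pixels - roughly half the original frame on each side, which is why generic image tools struggle at these ratios and a dedicated outpainting pass was needed.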
Throughout development, every multi-screen decision balanced what was technically possible, what was most pleasing to experience at scale, and how each screen enhanced or shaped the audience’s emotional and narrative understanding.
Tool took an AI-led approach to creative production, with traditional methods such as motion graphics and compositing woven strategically into the workflow to deliver creative precision and storytelling.
For this video, AI-generated imagery carried the emotional weight of the film, while design and motion graphics served as a framing and communication layer. Elements appeared briefly to explain or reinforce how AI is being used in each domain, such as transportation, then receded to keep the narrative moving.
The look was purposefully familiar to the technology world, stripped back to its essential components. Clean typography, restrained motion, and a limited visual palette ensured the design felt credible and modern without competing with the imagery.
Speed was a core consideration. The story unfolds quickly, and the design needed to communicate concepts just as fast - prioritising immediate legibility and clear hierarchy.
In advance of CES, it was important that Tool could preview the video at large-screen scale. Tool partnered with Pier59 Studios to use their state-of-the-art Mega Wall as a large-format validation environment. This allowed the team to preview and refine AI-generated keynote graphics at true presentation scale, assess whether motion and transitions felt appropriate for a live audience, and avoid visual intensity that could cause viewer fatigue or motion sickness. Working at this scale also provided critical insight into the overall impact of content on a wall of this size, enabled accurate colour evaluation, and helped surface potential compression artifacts early. By using Pier59’s facility for large-format AI previs, Tool reduced downstream risk, accelerated creative decision-making, and delivered greater confidence in how the content would perform in the live CES environment.