How Jack Morton Built a Living City with Generative AI

15/10/2025
Jack Morton UK technology director Sebastien Jouhans looks back on bringing the AI city to life, from early designs to its overwhelmingly positive debut, as part of LBB's Problem Solved series

Sebastien Jouhans is a technology director with over 20 years of experience creating digital experiences and platform solutions for global brands including Microsoft, EY, Nike, and HP. At Jack Morton, he bridges creativity and innovation to craft immersive, connected, and data-driven brand experiences.

His deep passion and expertise in spatial and physical computing, the Internet of Things (IoT), and machine learning enable him to design applications, products, and services that blur the lines between people and technology.

Sebastien thrives at the intersection of creativity and emerging technology, collaborating with multidisciplinary teams to inject cutting-edge thinking at the heart of every idea. From spatial computing to AI-powered interactions, his work helps brands stay relevant in a rapidly evolving digital landscape.

Sebastien spoke with LBB to give a behind-the-scenes look at how Jack Morton designed and built ‘The AI Living City’ through six months of intense, complex, cross-disciplinary collaboration.


What We Made

To bring a leading computer technology company’s next-generation AI capabilities to life, the Jack X team at Jack Morton designed and built ‘The AI Living City’: an interactive, multi-sensory brand experience that visualises the transformative power of AI in a way that is human, intuitive, and immersive.

Unveiled at AWS re:Invent, this modular experience allowed users to create AI-generated buildings using movement, voice, and object recognition, each added to a dynamic cityscape displayed on a massive living wall.

Every interaction offered a personal, tangible connection to the brand’s generative AI tools, showcasing the breadth of its capabilities across cloud, edge, and personal computing in a way that was playful, visually rich, and emotionally resonant.

‘The AI Living City’ wasn’t just a one-off installation; it was designed to travel across the event calendar for one of the leading computer technology companies, with reconfigurable terminals and updatable software that could adapt as hardware and firmware evolved.

In essence, it was a scalable and story-driven framework for AI storytelling.


The Problem

Our client, a leading computer hardware giant, found itself in a race to assert relevance in the fast-evolving generative AI landscape, one increasingly defined by nimble software players and Nvidia’s dominance. They challenged us to develop a hands-on experience that would make their AI offerings comprehensible, credible, and compelling for both expert developers and casual attendees.

The brief was ambitious: demonstrate how the brand’s AI works across three layers – cloud, edge, and laptop – and do it using emerging, often unfinalised hardware.

With much of the hardware still in development, we had to prototype on legacy machines and adapt continuously as updated firmware and components rolled in.

On top of that, we needed to deliver a unified UX that made three very different technical stacks feel part of a coherent journey.


Ideation

In early sessions, we quickly realised that their product teams were inclined to showcase the backend: code, graphs, APIs, alt-tab flows. But we knew that wouldn’t translate well for users, especially in a live event space.

So, we invited the client into our process with a series of in-person workshops, using sketches, physical mock-ups, and storyboards to show how we could abstract complexity into intuitive, interactive moments.

At the same time, our R&D team had already been exploring ComfyUI, image generation models, LLMs, and model fine-tuning: tools we believed could be harnessed to create something both magical and meaningful. This project became a canvas to apply months of exploration in a single, cohesive experience.

But we also had some surprises. Unlike their competitor’s familiar ecosystem, our client’s development stack required learning new libraries and APIs, often without mature community support.


Prototype and Design

Experience and application design were tightly interwoven. Each of the three terminals featured a dual-screen setup: one for user interaction and the other for displaying outputs and information. This ensured that users always understood where to focus their attention, especially during inputs and responses.

Across all terminals, user-generated content was also displayed prominently on the Hero Display, a large-scale visualisation of ‘The AI Living City’. As attendees completed their experience, their isometric AI-generated assets replaced pre-generated tiles in this evolving cityscape.

This real-time integration offered immediate gratification, turning each participant into a visible contributor to the living digital ecosystem, and helped visually reinforce the cumulative, collaborative nature of generative AI.
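
As an illustration only, here is a minimal sketch of how a terminal might hand a finished asset to the Hero Display; the endpoint, tile IDs, and payload schema are assumptions rather than details from the project.

```python
import random

import requests  # assumes a simple internal HTTP API in front of the Hero Display

# Placeholder IDs for the pre-generated tiles that attendee assets replace.
PREGEN_TILE_IDS = [f"pregen-{i:02d}" for i in range(64)]

def publish_asset(image_url: str, terminal: str) -> None:
    """Replace a randomly chosen pre-generated tile with an attendee's asset."""
    payload = {
        "tile_id": random.choice(PREGEN_TILE_IDS),
        "asset_url": image_url,
        "source_terminal": terminal,
    }
    # Hypothetical endpoint; the real display service and schema are not described in the article.
    requests.post("http://hero-display.local/api/tiles/replace", json=payload, timeout=5)

publish_asset("https://assets.example/attendee-123.png", terminal="home")
```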


Terminal 1: Home (Laptop / AIPC)

At this station, powered by a Core Ultra AIPC, users were invited to strike a pose, which was captured and used to generate a unique AI image. The experience cleverly demonstrated the use of three integrated accelerators:

  • The NPU (AI Boost) handled pose detection
  • The CPU performed background segmentation
  • The GPU processed the stable diffusion generation.

This terminal showcased how powerful on-device AI computing has become, placing generative capability directly in the user’s hands. Once generated, the custom image was immediately displayed to the user and then added to the Hero Display, transforming the city in real time.
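
For illustration, here is a minimal sketch of that accelerator split, assuming an OpenVINO-style runtime (the article does not name the on-device framework); the model files and shapes are placeholders, not the production pipeline.

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Mirror the split described above: pose detection on the NPU (AI Boost),
# background segmentation on the CPU, and the diffusion stage on the integrated GPU.
pose_model = core.compile_model("models/pose_detection.xml", device_name="NPU")
segmentation_model = core.compile_model("models/background_segmentation.xml", device_name="CPU")
sd_unet = core.compile_model("models/sd_unet.xml", device_name="GPU")  # one stage of the diffusion pipeline

def analyse_frame(frame: np.ndarray):
    """Run the capture stage for one camera frame (shapes are illustrative)."""
    batch = frame[np.newaxis, ...].astype(np.float32)
    keypoints = pose_model(batch)[0]             # runs on the NPU
    person_mask = segmentation_model(batch)[0]   # runs on the CPU
    return keypoints, person_mask                # both condition the GPU diffusion stage
```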


Terminal 2: Cloud (Gaudi 2 Processor)

In this experience, attendees used their voice to answer questions on style, theme, and mascot (e.g. “curvy and organic,” “pottery,” “owl”). The conversational flow was handled by running an open source LLM (Llama) on the AIPC, leveraging the NPU. The attendee’s selections were then sent to the cloud, where Gaudi 2 processors powered the image generation using stable diffusion.

Speech-to-text functionality ran locally on the AIPC to capture the user’s spoken inputs using the onboard NPU, before sending the prompt to the cloud. The resulting image, once received, was displayed back to the user and added to the Hero Display, taking its place in ‘The AI Living City’ among others.
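
A hedged sketch of how that round trip could be wired is below; the cloud endpoint, payload schema, and transcribe() helper are assumptions, and only the division of labour (speech handling on the NPU, generation on Gaudi 2) comes from the description above.

```python
import requests

CLOUD_ENDPOINT = "https://gaudi-cluster.example/generate"  # placeholder URL

def transcribe(audio_path: str) -> str:
    """Placeholder for the local speech-to-text step that ran on the AIPC's NPU."""
    raise NotImplementedError("swap in the on-device speech-to-text model here")

def generate_building(style_clip: str, theme_clip: str, mascot_clip: str) -> bytes:
    """Transcribe the attendee's spoken answers locally, then request the image from the cloud."""
    style, theme, mascot = (transcribe(clip) for clip in (style_clip, theme_clip, mascot_clip))
    prompt = (
        f"isometric city building, {style} architecture, "
        f"inspired by {theme}, featuring a {mascot} mascot"
    )
    # Gaudi 2 instances in the cloud run the stable diffusion generation.
    response = requests.post(CLOUD_ENDPOINT, json={"prompt": prompt}, timeout=60)
    response.raise_for_status()
    return response.content
```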


Terminal 3: Edge (Arc and Object Detection)

This station focused on object recognition using YOLO, which had been retrained to identify custom 3D-printed tokens representing various industries (e.g. 5G, security, mechanical defects). These object categories were used to create prompts for image generation.

Processing was performed on an edge device running Arc Graphics, which handled stable diffusion locally. Once generated, the asset was uploaded and visually embedded into the Hero Display, allowing users to see their input reflected in the broader city landscape.
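
As a rough illustration, the token-to-prompt step might look like the following, assuming the Ultralytics YOLO API; the weights file, class names, and prompt wording are placeholders rather than the project's actual configuration.

```python
from ultralytics import YOLO

# Weights retrained on the custom 3D-printed industry tokens (path is a placeholder).
model = YOLO("weights/industry_tokens.pt")

# Map detected token classes to image-generation prompts (wording is illustrative).
PROMPTS = {
    "5g": "isometric smart-city telecom tower with 5G antennas",
    "security": "isometric secure data centre building with cameras",
    "mechanical_defect": "isometric factory building with quality-inspection robots",
}

def token_to_prompt(frame) -> str | None:
    """Detect the highest-confidence token in a camera frame and return its prompt."""
    result = model(frame)[0]
    if len(result.boxes) == 0:
        return None
    best = max(result.boxes, key=lambda box: float(box.conf))
    class_name = result.names[int(best.cls)]
    return PROMPTS.get(class_name)
```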

To pull this off, we brought together a dream team of specialists:

  • Python developers fluent in LLM architectures and model fine-tuning
  • Front-end engineers who wrote modular UI templates
  • DevOps experts who built and deployed environments using Developer Cloud and Docker
  • UX and UI designers who understood the complexity of making a user journey simple, attractive and compelling.

It was six months of intense, complex, cross-disciplinary collaboration.


Live

Then came AWS re:Invent, and as expected, new hardware and firmware landed just before deployment. We scrambled. Some machines weren’t running final spec, and we had to patch on the fly, side-by-side with the engineers, to stabilise the entire system before the first attendees walked in.

We used Grafana dashboards to monitor performance and log errors, giving us real-time diagnostics and helping our client understand user behaviour in-depth.

The entire city, with every click, pose, and object, became a data-rich canvas and a feedback loop for further refinement.
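
For context, a common way to feed Grafana dashboards like these is to expose Prometheus metrics from each terminal; the sketch below assumes that setup, and the metric names are illustrative rather than taken from the project.

```python
import time

from prometheus_client import Counter, Histogram, start_http_server

GENERATIONS = Counter("city_generations_total", "Completed AI generations", ["terminal"])
GENERATION_SECONDS = Histogram("city_generation_seconds", "End-to-end generation latency", ["terminal"])
ERRORS = Counter("city_errors_total", "Failed interactions", ["terminal"])

def run_interaction(terminal: str, generate) -> None:
    """Wrap one attendee interaction with timing and error counters."""
    start = time.perf_counter()
    try:
        generate()
        GENERATIONS.labels(terminal=terminal).inc()
    except Exception:
        ERRORS.labels(terminal=terminal).inc()
        raise
    finally:
        GENERATION_SECONDS.labels(terminal=terminal).observe(time.perf_counter() - start)

start_http_server(9100)  # Prometheus scrapes this port; Grafana visualises the series
```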


Looking back

This wasn’t just a showpiece. It was a culmination of deep R&D, hands-on prototyping, and a willingness to build around a moving target. It proved we could bridge multiple technical ecosystems, make abstract tools feel intuitive, and deliver it all in a high-pressure, high-stakes environment.

The feedback from our client was overwhelmingly positive, not just because it worked, but because it told a coherent story about their AI landscape in a way that put power into people’s hands.

It’s rare that a project blends so many disciplines (UX, creative coding, physical computing, infrastructure, and storytelling) into one experience. And it was even rarer to do it under the conditions we faced.

But that’s what made it thrilling.
