How to Build AI-Driven Dynamic Interfaces with Flutter's GenUI SDK

A practical guide to Flutter's GenUI SDK — how it works, when to use it, and how to ship adaptive AI-powered UI from a single codebase.

Carlton Aikins · 7 min read

Most mobile apps ship a fixed UI. Every screen, every layout, every interaction path is defined at build time. The user gets the same experience whether they're a power user on day 300 or a first-timer who just tapped the icon.

Flutter's GenUI SDK changes that equation. It's an orchestration layer that sits between your existing Flutter widgets and an AI agent, letting the agent compose and adapt your UI in real time based on context, user behavior, and conversational input. Instead of rendering static screens, your app renders what the agent decides is most useful right now — built from your own widget catalog, following your own design system.

The SDK is currently in alpha on pub.dev, but it's already production-capable for specific use cases. This guide covers how it works, when it makes sense, and how to start building with it.

What GenUI actually does

At its core, GenUI is a coordination protocol. It manages three parties: the user, your Flutter widget catalog, and an AI agent (typically Gemini, though other LLMs are supported). The flow works like this:

The user interacts with your app — tapping buttons, filling forms, asking questions in a chat interface. Those interactions are captured and forwarded to the AI agent as structured context. The agent decides what UI to render next, expressed as a JSON-based composition of widgets from your catalog. Flutter renders that composition. When the user interacts with the new UI, state changes flow back to the agent, and the loop continues.
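That loop can be sketched in plain Dart. Everything here is illustrative — the agent is stubbed out, and the field names (`widget`, `props`) and the `FlightList` widget are hypothetical, not the SDK's actual wire format:

```dart
// Minimal sketch of the GenUI interaction loop with a stubbed agent.
// The JSON field names and widget names are assumptions for illustration.
import 'dart:convert';

/// Stand-in for the agent round-trip: takes structured interaction
/// context, returns a JSON composition referencing catalog widgets.
String fakeAgent(Map<String, dynamic> context) {
  // A real agent reasons over the context; this stub echoes back a
  // composition using a hypothetical "FlightList" catalog widget.
  return jsonEncode({
    'widget': 'FlightList',
    'props': {'destination': context['query']},
  });
}

void main() {
  // 1. A user interaction is captured as structured context.
  final context = {'event': 'chat_message', 'query': 'Tokyo'};

  // 2. The agent decides what UI to render next, expressed as JSON.
  final composition = jsonDecode(fakeAgent(context)) as Map<String, dynamic>;

  // 3. Flutter renders the composition from the widget catalog, and
  //    subsequent interactions flow back into step 1.
  print(composition['widget']);               // FlightList
  print(composition['props']['destination']); // Tokyo
}
```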

This is fundamentally different from server-driven UI, where a backend returns a layout spec. GenUI is agent-driven UI — the agent has goals, context, and the ability to reason about what the user needs, not just what the server wants to show.

The result is a high-bandwidth interaction loop. Instead of the user typing a question and getting a text response, they get a fully interactive, app-native interface tailored to their request. A travel app doesn't respond to "find me flights to Tokyo next week" with a list of text results — it renders a filterable flight comparison widget with your app's styling, connected to your booking flow.

When GenUI makes sense (and when it doesn't)

GenUI is not a replacement for standard UI development. It's a specialized tool for a specific class of problem: interfaces where the right layout depends on runtime context that can't be fully anticipated at build time.

Strong use cases include conversational commerce experiences where the UI adapts based on what the user is looking for, onboarding flows that reshape themselves based on user responses, dashboard-style apps where different users need different data views, and support or FAQ interfaces where the agent surfaces relevant interactive widgets instead of walls of text.

Weak use cases include apps with simple, well-defined navigation (a calculator doesn't need GenUI), anything where latency is critical (the agent round-trip adds 200-500ms), and apps where the UI must be pixel-identical across sessions for regulatory or branding reasons.

The honest assessment: if your app's screens are well-defined and your users follow predictable paths, GenUI adds complexity without much payoff. If your app's value proposition involves personalization, discovery, or conversation, it's worth exploring.

Setting up GenUI in a Flutter project#

Getting started requires three things: the GenUI package, an AI backend, and a widget catalog that the agent can compose from.

First, add the dependency. The package is available on pub.dev as genui. Add it to your pubspec.yaml alongside the Firebase AI package if you're using Gemini as your agent backend.
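A pubspec entry along these lines should work — the version constraints below are placeholders, so check pub.dev for the current alpha releases:

```yaml
dependencies:
  flutter:
    sdk: flutter
  genui: ^0.1.0        # alpha — pin to the latest published version
  firebase_ai: ^2.0.0  # only needed if Gemini via Firebase is your backend
```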

Second, define your widget catalog. This is the critical step that most tutorials gloss over. Your catalog is the set of widgets the agent is allowed to use when composing UI. Each widget needs a schema definition — what props it accepts, what data it displays, what interactions it supports. The agent can only work with what you give it.

Think of the catalog as an API contract between your design system and the AI. A well-defined catalog with clear, descriptive schemas produces dramatically better results than a loosely-typed grab bag of widgets. Name your widgets clearly, constrain your prop types tightly, and document what each widget is for. The agent reads these schemas to decide what to render — garbage in, garbage out.
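As a sketch of what that contract might look like, here is one hypothetical catalog entry expressed as a JSON-serializable schema. The shape (`name`, `description`, `props`, `events`) is an assumption for illustration, not the SDK's actual schema API — the point is the discipline: a descriptive name, a documented purpose, and tightly constrained prop types:

```dart
// Hypothetical catalog entry for a "FlightCard" widget. The schema
// shape is illustrative; the real GenUI schema API may differ.
import 'dart:convert';

final flightCardSchema = {
  'name': 'FlightCard',
  'description': 'Shows one flight option with airline, price, and stops. '
      'Use when the user is comparing flight choices.',
  'props': {
    // Tight types and enums give the agent less room to go wrong.
    'airline': {'type': 'string'},
    'priceUsd': {'type': 'number', 'minimum': 0},
    'stops': {'type': 'integer', 'enum': [0, 1, 2]},
  },
  'events': ['onSelect'], // interactions the widget reports back
};

void main() {
  // The agent consumes schemas as text, so they must serialize cleanly.
  print(jsonEncode(flightCardSchema));
}
```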

Third, wire up the orchestration. GenUI provides a GenUIController that manages the conversation state, agent communication, and widget rendering. You embed this controller in your widget tree and it handles the rest — forwarding user interactions to the agent, receiving composition instructions, and rendering the result.
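The embedding might look roughly like this. Treat every API detail below as an assumption — the constructor arguments and the `GenUISurface` widget are guesses at the alpha surface, so consult the package documentation for the real names:

```dart
import 'package:flutter/material.dart';
// Hypothetical import path — the alpha package layout may differ.
import 'package:genui/genui.dart';

class AgentScreen extends StatefulWidget {
  const AgentScreen({super.key});

  @override
  State<AgentScreen> createState() => _AgentScreenState();
}

class _AgentScreenState extends State<AgentScreen> {
  // Assumed constructor shape: the controller takes your catalog and
  // agent backend configuration (not the documented signature).
  late final GenUIController controller;

  @override
  void initState() {
    super.initState();
    controller = GenUIController(/* catalog and agent config */);
  }

  @override
  Widget build(BuildContext context) {
    // Assumed surface widget that renders the controller's current
    // composition and forwards user interactions back to the agent.
    return GenUISurface(controller: controller);
  }
}
```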

How the JSON composition format works

The agent doesn't generate Dart code. It produces a JSON structure that maps to your widget catalog. Each node in the JSON tree references a widget by name, passes props, and optionally nests children.

This is an important design decision. By constraining the agent to your pre-built widgets rather than letting it generate arbitrary code, GenUI ensures that every rendered UI element is something you've tested, styled, and approved. The agent is a composer, not a coder. It arranges your building blocks — it doesn't create new ones.
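The composer-not-coder constraint is easy to see in code. The composition below uses assumed field names (`widget`, `props`, `children`) and hypothetical catalog widgets, and the walker rejects anything outside the approved catalog:

```dart
// Illustrative composition format — the field names are assumptions,
// not the SDK's documented spec.
import 'dart:convert';

const compositionJson = '''
{
  "widget": "Column",
  "children": [
    {"widget": "SearchHeader", "props": {"title": "Flights to Tokyo"}},
    {"widget": "FlightCard",   "props": {"airline": "ANA", "priceUsd": 980}}
  ]
}
''';

/// Walks the composition tree, collecting widget names and rejecting
/// anything not in the catalog — the agent can only arrange approved
/// building blocks, never invent new ones.
List<String> collectWidgets(Map<String, dynamic> node, Set<String> catalog) {
  final name = node['widget'] as String;
  if (!catalog.contains(name)) {
    throw ArgumentError('Unknown widget: $name');
  }
  final children = (node['children'] as List<dynamic>? ?? const [])
      .cast<Map<String, dynamic>>();
  return [name, for (final c in children) ...collectWidgets(c, catalog)];
}

void main() {
  final tree = jsonDecode(compositionJson) as Map<String, dynamic>;
  final catalog = {'Column', 'SearchHeader', 'FlightCard'};
  print(collectWidgets(tree, catalog)); // [Column, SearchHeader, FlightCard]
}
```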

The JSON format also enables streaming. As the agent generates its composition, Flutter can begin rendering partial results, showing the UI as it's being "thought through" rather than waiting for the full response. This significantly improves perceived latency for complex compositions.

State changes flow back through the same JSON protocol. When a user taps a button or submits a form, the interaction is serialized and sent to the agent as context for the next composition decision. The agent sees what the user did and decides what to show next — a confirmation screen, an error state, a follow-up question, or an entirely different layout.
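An interaction event flowing back might be serialized like this — again, the field names and helper below are hypothetical, sketching the idea rather than the SDK's wire format:

```dart
// Hypothetical shape of an interaction event sent back to the agent.
import 'dart:convert';

Map<String, dynamic> buildTapEvent(
    String widgetId, Map<String, dynamic> payload) {
  return {
    'type': 'tap',
    'widgetId': widgetId,
    'payload': payload,
  };
}

void main() {
  final event = buildTapEvent('flight-card-2', {'airline': 'ANA'});
  // Serialized and appended to the conversation context, so the agent
  // sees what the user did when making the next composition decision.
  print(jsonEncode(event));
}
```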

Performance considerations

The elephant in the room with any AI-driven UI system is latency. Every new screen composition requires an agent round-trip, and even fast LLMs add meaningful delay compared to locally-rendered Flutter widgets.

In practice, the latency budget breaks down like this: 100-300ms for the agent to process context and generate a composition, plus 50-100ms for JSON parsing and widget tree construction, plus Flutter's normal render pipeline. Total time from interaction to pixels is typically 200-500ms for simple compositions, 500-1000ms for complex multi-widget layouts.

That's fine for conversational interfaces where users expect a brief "thinking" moment. It's not fine for instant-feedback interactions like scrolling a list or dragging a slider. The solution is hybrid architecture: use GenUI for the high-level layout decisions and agent-driven flows, but keep standard Flutter widgets for everything that needs instant response.

Caching helps too. GenUI supports composition caching — if the agent produces the same layout for similar inputs, the cached composition can be rendered instantly without a round-trip. In practice, a well-designed catalog with consistent user flows will hit cache frequently after the first few interactions.
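The caching idea can be sketched as a lookup keyed on a canonical form of the agent's input — this is a hypothetical illustration of the concept, not the SDK's actual cache:

```dart
// Sketch of composition caching: equivalent contexts share one key,
// so repeat requests skip the expensive agent round-trip.
import 'dart:convert';

class CompositionCache {
  final _cache = <String, String>{};
  int hits = 0, misses = 0;

  /// Returns a cached composition for an equivalent context, or calls
  /// the agent and stores the result.
  String lookup(Map<String, dynamic> context, String Function() callAgent) {
    // Canonicalize by sorting keys so {'a':1,'b':2} and {'b':2,'a':1}
    // produce the same cache key.
    final sorted = Map.fromEntries(
        context.entries.toList()..sort((a, b) => a.key.compareTo(b.key)));
    final key = jsonEncode(sorted);
    final cached = _cache[key];
    if (cached != null) {
      hits++;
      return cached;
    }
    misses++;
    return _cache[key] = callAgent();
  }
}

void main() {
  final cache = CompositionCache();
  String agent() => '{"widget":"FlightList"}';
  cache.lookup({'q': 'Tokyo', 'tab': 'flights'}, agent);
  cache.lookup({'tab': 'flights', 'q': 'Tokyo'}, agent); // same key: cached
  print('hits=${cache.hits} misses=${cache.misses}'); // hits=1 misses=1
}
```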

Where this connects to the broader AI-in-mobile trend

GenUI represents one side of a larger shift: AI agents are moving from being features inside apps to being the architecture that drives apps. On the development side, tools like Xcode 26.3's agentic coding let agents write and test your code. On the runtime side, GenUI lets agents compose and adapt your UI. On the distribution side, platforms like Stora use agents to handle screenshots, store listings, and compliance — the entire pipeline from code to live store listing.

The common thread is that mobile developers in 2026 are increasingly working with agents, not just building for users. Your widget catalog is now an API surface for AI. Your test suite is now an agent's feedback loop. Your store listing metadata is now something an agent can optimize and submit.

For Flutter developers specifically, GenUI is worth watching even if you don't adopt it immediately. The pattern of agent-composable UI — widgets with clear schemas, structured state, and machine-readable interactions — makes your code better regardless of whether an AI is driving it. It's the same principle behind good component architecture: clear contracts, explicit props, composable building blocks.

Getting started today#

If you want to experiment with GenUI, the fastest path is the Flutter team's official get-started guide on docs.flutter.dev. It walks through setting up a basic agent-driven chat interface with a small widget catalog and Gemini as the backend.

From there, the learning curve is less about the SDK and more about catalog design. The developers getting the best results with GenUI are the ones who spend the most time refining their widget schemas — making props descriptive, constraining types, and writing clear documentation that helps the agent understand when to use each widget.

The SDK is alpha, the ecosystem is young, and the best patterns are still being discovered. But the core idea — letting AI agents compose UI from your building blocks instead of generating code from scratch — is solid, and it's the direction Flutter is betting on for the next generation of adaptive mobile experiences.
