generative analytics · ai-native canvas · 2024–present

Designing the foundation of an AI-native analytics canvas at Fusedash — six product surfaces, two personas, one generative system.

I'm the founding designer at Fusedash — a generative analytics platform where dashboards, charts, maps, and reports build themselves from your data. I own the design foundation across six product surfaces, two personas sharing one system, and the UX of non-deterministic AI workflows — confidence signaling, human-in-the-loop handoffs, progressive disclosure of automation. This is a current 0→1 engagement, not a closed case — what follows is the foundation as it stands and the methodology shipping it.

— who

Fusedash · generative analytics platform · AI dashboards from CSV, REST, or any MCP-compatible model

— what

Generative UX · Dual-Persona Architecture · Data-Viz Doctrine · Agentic Patterns · Design Foundation · Component System

— result

6 surfaces designed · 2 personas on one foundation · MCP-based · live and iterating

— scope

full-time Senior Product Designer · Oct 2024 – Present · founding designer · Vienna, VA (remote)

Generative architecture — data sources flow through the user's chosen AI model into six product surfaces.

— outcomes

6

product surfaces · dashboards, storytelling, charts, maps, chat, real-time

2

personas · operator power and end-customer simplicity on one foundation

0→1

founding designer · the design system, the patterns, the doctrine

problem

dashboards that build themselves break every assumption a dashboard tool makes.

Fusedash is a generative analytics platform — upload a CSV, connect a REST API, or link any MCP-compatible model and the canvas generates KPI dashboards, AI charts, choropleth maps and heatmaps, plain-language data chat, and real-time monitoring views. The product is built on the Model Context Protocol — customers bring their own AI model, no lock-in, no per-vendor dependency. Pricing runs on token packs for AI-powered actions: chart generation, summaries, conversational queries.
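The ingestion paths and token-metered actions above can be sketched as types. This is a minimal illustration, not Fusedash's actual API — the names and token costs are invented:

```typescript
// Three ingestion paths, modeled as a discriminated union.
type DataSource =
  | { kind: "csv"; file: string }
  | { kind: "rest"; endpoint: string }
  | { kind: "mcp"; server: string }; // any MCP-compatible model

// AI-powered actions metered by token packs.
type AiAction = "chart_generation" | "summary" | "conversational_query";

// Hypothetical costs — the real pricing is not public in this doc.
const TOKEN_COST: Record<AiAction, number> = {
  chart_generation: 5,
  summary: 2,
  conversational_query: 1,
};

// Deduct an action's cost from a token balance, refusing overdraw.
function charge(balance: number, action: AiAction): number {
  const cost = TOKEN_COST[action];
  if (balance < cost) throw new Error("insufficient tokens");
  return balance - cost;
}
```

The union type makes the "bring your own model" premise explicit: every surface consumes a `DataSource` without caring which of the three paths produced it.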

That product premise breaks the templates dashboard tools have leaned on for two decades. Fixed dashboards assume a designer or analyst pre-builds the view; Fusedash assumes the system builds it. Deterministic UI assumes one input produces one output; generative UI produces ranges of outputs that need to be ranked, edited, accepted, or rejected. The design problem isn't "lay out a dashboard" — it's "design the contract between a human, a non-deterministic agent, and a dataset where every interaction may produce something the user has never seen before."

users

two personas, one foundation — operator power and end-customer simplicity.

By use case, Fusedash serves e-commerce and retail, financial services, SaaS, agencies running client reporting, and operations and logistics teams. By role, the split is cleaner: business leaders who want answers, and analysts and BI teams who want control over metric logic and reusable views.

That split is the design challenge. BI teams need to define metrics, govern data sources, and own the canvas; business leaders need a chat that returns a chart and a one-line takeaway. Same dataset. Two presentation contracts. Most analytics products pick one persona and treat the other as an afterthought — Fusedash can't, because the operator is the one provisioning the canvas the end customer consumes. The design foundation has to make both fluent on the same primitives.

role

Senior Product Designer · founding and sole designer.

Full-time engagement, October 2024 – Present. I own the full product design process for complex data and analytics software — concept, wireframes, high-fidelity Figma, developer handoff. I built the design foundation from zero, designed every consuming surface, partnered with engineering and product on implementation, run user research and usability testing to validate, and contribute to the design guidelines, standards, and shared component system the team builds against.

Most of my time goes into the UX of non-deterministic systems — agent interfaces, human-in-the-loop handoffs, confidence signaling, progressive disclosure of automation. The chat surface and the AI chart generator are where this work concentrates.

process

building the foundation while shipping the surfaces.

phase one — surface inventory and dual-persona mapping

Mapped the six surfaces against the two personas — what operators do that end customers don't, what end customers do that operators shouldn't, and the primitives they share. The shared layer is large: tokens, charts, tables, filters, empty states. The divergent layer is what governs whether each persona feels addressed — operator panels for metric definition, model routing, and source governance; end-customer surfaces stripped to question, answer, and one-tap follow-up.
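The shared-versus-divergent layering above can be sketched as composition rather than forking. A minimal sketch with invented names — the point is that both personas resolve against the same primitive layer:

```typescript
// One shared primitive layer, consumed by both personas.
const shared = ["tokens", "charts", "tables", "filters", "emptyStates"];

// Divergent layers: operator control surfaces vs end-customer simplicity.
const operatorLayer = ["metricDefinition", "modelRouting", "sourceGovernance"];
const endCustomerLayer = ["question", "answer", "oneTapFollowUp"];

type Persona = "operator" | "endCustomer";

// Compose a persona's surface set without duplicating the foundation.
function surfacesFor(persona: Persona): string[] {
  const divergent = persona === "operator" ? operatorLayer : endCustomerLayer;
  return [...shared, ...divergent];
}
```

Because the divergent layers only ever extend `shared`, a change to a primitive (say, chart tokens) propagates to both personas for free — the system never forks.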

phase two — generative architecture and agentic patterns

Designed the canvas around the generative loop — prompt → model → candidate output → human edit → commit. Confidence signals on every AI-generated artifact. Edit affordances at every step (chart type, encoding, filter, copy). Provenance shown without becoming clutter — which model produced this, on which data, when, with what tokens. The chat surface is where this concentrates: a plain-language Q&A over data that has to feel like a conversation but commit results that look like a chart a BI team would sign off on.
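The generative loop described above — prompt → model → candidate output → human edit → commit — is at bottom a small state machine with provenance attached to every artifact. A sketch under invented names, not Fusedash's implementation:

```typescript
// Provenance shown on every AI-generated artifact:
// which model, on which data, when, with what tokens.
interface Provenance {
  model: string;
  dataset: string;
  at: string;
  tokens: number;
}

// A candidate output carries a confidence signal alongside its spec.
interface Candidate {
  chartSpec: string;
  confidence: number; // 0–1, surfaced in the UI
  provenance: Provenance;
}

type LoopState = "prompted" | "generated" | "editing" | "committed" | "rejected";

// Legal transitions of the generative loop.
const TRANSITIONS: Record<LoopState, LoopState[]> = {
  prompted: ["generated"],
  generated: ["editing", "committed", "rejected"],
  editing: ["editing", "committed", "rejected"], // chart type, encoding, filter, copy
  committed: [],                                  // terminal: signed-off artifact
  rejected: ["prompted"],                         // re-prompt for a new candidate
};

function step(from: LoopState, to: LoopState): LoopState {
  if (!TRANSITIONS[from].includes(to)) {
    throw new Error(`invalid transition: ${from} -> ${to}`);
  }
  return to;
}
```

The terminal `committed` state is the design contract: whatever the conversation looked like, what lands on the canvas is an artifact a BI team could audit via its provenance.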

phase three — data-viz doctrine

Stephen Few–grounded chart-type decisions baked into the AI chart generator's defaults. The model can produce any chart, but the system biases toward the right chart — bars for categorical comparisons, lines for time series, no pie charts past three slices, no dual-axis without explicit user override. The doctrine is codified as a custom skill in my toolchain and applied to Fusedash's output set — the same skill I apply to wheat and GDP datasets in parallel work.
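The doctrine's defaults reduce to a small decision function. The rules here are exactly the ones named above; the encoding is an illustrative sketch, not the skill's real code:

```typescript
interface ChartRequest {
  dataShape: "categorical" | "timeSeries";
  sliceCount?: number;    // for requested pie charts
  wantsPie?: boolean;
  wantsDualAxis?: boolean;
  userOverride?: boolean; // explicit opt-out of the doctrine
}

// Bias toward the right chart; the model proposes, the doctrine disposes.
function defaultChart(req: ChartRequest): string {
  // no dual-axis without explicit user override
  if (req.wantsDualAxis && !req.userOverride) return "single-axis line";
  // no pie charts past three slices — fall back to bars
  if (req.wantsPie) return (req.sliceCount ?? 0) <= 3 ? "pie" : "bar";
  // bars for categorical comparisons, lines for time series
  return req.dataShape === "timeSeries" ? "line" : "bar";
}
```

Because the doctrine lives in the defaults rather than in a blocklist, the user can still override — the system biases, it doesn't forbid.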

phase four — design foundation

Tokens, components, and patterns built to support the dual-persona split. Component primitives that compose differently for operator and end customer without forking the system. AI-native production toolchain — Claude Code, Figma MCP, custom skills — compressing the path from discovery to handoff so a single designer can keep pace with engineering on a six-surface product.

evidence
Generative architecture — data sources flow through the user's chosen AI model into six surfaces. MCP-based, no lock-in.
Dual-persona map — operator power versus end-customer simplicity, where they share primitives and where they diverge.
Design foundation — tokens and components for data-heavy workflows across both personas.
Stephen Few–grounded chart-type decision matrix applied to the AI chart generator's defaults.
Agentic handoff in chat — confidence signaling, HITL, progressive disclosure of automation.
Dashboard surface — high-fidelity rendered view of the system in production use.
status

in-flight · current engagement · live and iterating.

This is not a closed case study with shipped business outcomes. The product is live, the team is shipping, and the foundation is iterating against real usage. The case is the methodology and the foundation as it stands — generative analytics framing, dual-persona architecture, agentic UX patterns, data-viz doctrine. The receipts are the surfaces. The proof is forthcoming.

— bottom line

6 product surfaces designed for an AI-native analytics canvas · Dual-persona foundation built for BI operators and business leaders without forking the system · Agentic UX patterns applied across the canvas — confidence signaling, human-in-the-loop handoffs, progressive disclosure of automation.