
From 500 Dashboards to Self-Service: AI-Driven BI Migration and Autonomous Analytics

How an enterprise replaced a bottlenecked BI platform with AI-migrated dashboards and self-service analytics agents — in under five months


Tony Zeljkovic

2026-04-06

At a Glance
  • Industry: Enterprise
  • Duration: ~4–5 months
  • Stack: Looker (source), Streamlit on Snowflake (target), Snowflake Cortex, Claude Code, MCP, headless browser automation
  • Compliance: HIPAA/HITECH, SOC2
  • Key Results: 30% tech cost reduction | 50% headcount cost reduction | 5x data capacity | Turnaround from weeks to minutes

Executive Summary

An enterprise data platform team was drowning. Over 500 Looker dashboards served a growing business, but the queue to build or extend a dashboard had stretched to a week on average — with the worst requests taking three months. The data team couldn't scale fast enough, and the Looker contract renewal was approaching with a price tag leadership wanted to eliminate.

Narona Data led a four-to-five-month engagement that migrated the entire BI platform to Streamlit on Snowflake using AI-driven automation, then transformed the accumulated metadata into a self-service analytics layer powered by 12 domain-specific Cortex agents. The result: a platform where business users answer their own questions in minutes instead of waiting weeks.

The engagement cut the client's technology budget by 30%, reduced headcount costs by 50%, and increased the volume of data asks the organization could handle by 5x — while meeting HIPAA/HITECH and SOC2 compliance requirements throughout.

Situation

A fast-growing enterprise maintained a large data platform with over 500 dashboards, supported by a sizable team of data analysts and data engineers. The platform ran on Looker backed by Snowflake.

The data team was competent but overwhelmed. Business units were expanding faster than the team could service new requests, and the tooling itself was part of the problem — the team found Looker's interface click-heavy and time-consuming for the volume of work they were handling.

Leadership saw an opportunity: the Looker contract was coming up for renewal, and they wanted to move to a model that scaled with the business rather than with headcount. They wanted to leverage AI to empower a broader set of people to build dashboards and data products.

Complication

Three pressures converged:

Request backlog was unsustainable. The median turnaround to build or extend a dashboard was about one week. The p95 had reached three months — simply because too many requests were competing for too few analyst hours as the business expanded.

The BI contract was expensive. Renewing Looker at scale represented a significant line item that leadership wanted to eliminate in favor of an in-house solution.

Compliance couldn't be compromised. Any replacement platform needed to meet HIPAA/HITECH and SOC2 standards. This ruled out shortcuts and constrained the migration tooling itself.

Scope

  • Migrate 500+ Looker dashboards to Streamlit on Snowflake with feature parity
  • Build the migration using AI-driven automation, not manual recreation
  • Meet HIPAA/HITECH and SOC2 compliance throughout
  • Phase user migration to prevent service disruption
  • Transform accumulated BI metadata into a self-service analytics layer
  • Deliver measurable cost reduction and capacity improvement

Solutions

Phase 1: Metadata Inventory and Translation Layer

The engagement started with a comprehensive extraction of the entire Looker instance. Narona Data collected metadata on every dashboard, look, board, usage statistic, scheduled export, calculated field, filter, button, drop-down, and configuration setting. In parallel, the team mapped which Snowflake data models were being queried by which assets.

The result was a complete replica of the Looker platform in metadata form — the foundation for everything that followed.

With the inventory in hand, the team spent roughly a week designing the translation layer: a mapping from every Looker component type to an equivalent custom Streamlit component. This mapping defined what the AI pipeline would target during migration.
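A translation layer like the one described above can be sketched as a lookup from Looker component types to custom Streamlit component targets. The type names and component paths below are illustrative, not the engagement's actual mapping.

```python
# Hypothetical translation-layer mapping: each Looker component type
# resolves to a custom Streamlit component the pipeline will target.
LOOKER_TO_STREAMLIT = {
    "looker_line": "components.line_chart",
    "looker_bar": "components.bar_chart",
    "single_value": "components.kpi_tile",
    "looker_grid": "components.data_table",
    "filter_dropdown": "components.select_filter",
}

def resolve_target(looker_type: str) -> str:
    """Map a Looker component type to its Streamlit equivalent,
    falling back to a generic container for unmapped types."""
    return LOOKER_TO_STREAMLIT.get(looker_type, "components.generic_panel")
```

An explicit fallback matters in practice: it guarantees every asset in the inventory has a defined migration target, even for rare component types discovered late.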

Phase 2: Component Library and Test Suite

Narona Data built the standardized Streamlit components specified by the translation layer, accompanied by an extensive test library. Each component was verified for correctness and performance before being used as a migration target. This gave the AI pipeline reliable, well-tested building blocks to compose into dashboards.
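One way to make components reliable targets for an AI pipeline is to give each one an explicit contract that can be checked before migration. This is a minimal sketch of that idea; the spec fields and names are assumptions, not the engagement's actual test library.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentSpec:
    """Contract for one standardized Streamlit component."""
    name: str
    required_props: list = field(default_factory=list)

def validate_props(spec: ComponentSpec, props: dict) -> list:
    """Return the required props missing from a dashboard's component
    configuration; an empty list means the component passes."""
    return [p for p in spec.required_props if p not in props]

# Example contract for a hypothetical data-table component.
table_spec = ComponentSpec("data_table", required_props=["columns", "query_id"])
```

Checking contracts up front lets the pipeline fail fast on a malformed dashboard definition instead of producing a broken Streamlit app.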

Phase 3: AI-Driven Migration Pipeline

This was the core of the engagement. Narona Data built LLM-driven pipelines using Claude Code to automatically translate Looker dashboards into Streamlit applications.

The team started with small batches of roughly five dashboards to establish baseline performance and iterate on the process. The initial zero-shot rate — meaning the migrated dashboard achieved perfect feature parity on the first attempt, with all visuals, data, and filters exactly correct — was approximately 10%.

Over several weeks, the team systematically improved this rate through a series of interventions:

Session architecture. The pipeline was restructured to use full Claude Code sessions rather than subtasks, which at the time provided roughly five times the context window.

Context encoding. Experimentation with input token volumes revealed performance degradation at high context utilization — consistent with Anthropic's research on designing harnesses for long-running LLM applications. The team developed compact encoding representations to keep input context within the performance envelope.
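A compact encoding can be as simple as collapsing verbose dashboard JSON into a terse line-oriented form before it enters the LLM context. The field names below are assumptions about the metadata schema, used only to show the shape of the technique.

```python
def compact_encode(dashboard: dict) -> str:
    """Collapse verbose dashboard metadata into terse, pipe-delimited
    lines to reduce input-token volume for the migration session."""
    lines = [f"dash:{dashboard['id']}|{dashboard['title']}"]
    for tile in dashboard.get("tiles", []):
        filters = ",".join(tile.get("filters", []))
        lines.append(f"tile:{tile['type']}|{tile['query_id']}|{filters}")
    return "\n".join(lines)

# Illustrative example: one dashboard with a single filtered tile.
encoded = compact_encode({
    "id": "d1",
    "title": "Revenue",
    "tiles": [{"type": "bar", "query_id": "q9", "filters": ["region"]}],
})
```

The same information survives, but in a fraction of the tokens a raw JSON export would consume.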

Pipeline modularity. The migration process evolved from roughly 3 steps to 12 parallelized steps, each running in its own session. This reduced per-session complexity and improved reliability.

Throughput management. Even with the modular architecture, performance decreased when migrating 100 dashboards concurrently versus 50 with an identical process. The team introduced rate limiting to extend migration duration while maintaining quality — slower but autonomous.

These optimizations brought the zero-shot rate from 10% to 65%.
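The throughput-management pattern above can be sketched with a concurrency cap: run the whole batch autonomously, but never more than N migration sessions at once. Here `migrate_one` is a stand-in for a full Claude Code session, not the actual pipeline code.

```python
import asyncio

async def migrate_one(dashboard_id: str) -> str:
    """Placeholder for one long-running migration session."""
    await asyncio.sleep(0.01)
    return f"{dashboard_id}:done"

async def migrate_batch(ids, max_concurrent: int = 50):
    """Migrate all dashboards, holding concurrency at max_concurrent
    so quality does not degrade the way an unbounded batch would."""
    sem = asyncio.Semaphore(max_concurrent)

    async def run(dash_id):
        async with sem:  # at most max_concurrent sessions in flight
            return await migrate_one(dash_id)

    return await asyncio.gather(*(run(d) for d in ids))

results = asyncio.run(migrate_batch([f"dash_{i}" for i in range(10)], max_concurrent=3))
```

The trade-off is exactly the one described above: the batch takes longer wall-clock time, but runs unattended at a concurrency level where quality holds.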

Multi-Modal Verification

The migration pipeline used two layers of verification:

SQL verification via MCP. Narona Data set up and managed an MCP server on Snowflake, enabling developers and business users to query Snowflake directly through their preferred LLM framework — Claude Code, Cursor, or others. This allowed verification of SQL queries extracted from Looker against the actual data warehouse.
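At the core of SQL verification is a parity check: the Looker-extracted query and the migrated query should return the same result set. A minimal sketch of that comparison, with Snowflake execution stubbed out, might look like this; rounding tolerance and row format are assumptions.

```python
def results_match(rows_a, rows_b) -> bool:
    """Compare two query result sets as multisets of rows, tolerating
    row-order differences and tiny floating-point drift (rounded to
    nine decimal places)."""
    if len(rows_a) != len(rows_b):
        return False

    def norm(rows):
        return sorted(
            tuple(round(v, 9) if isinstance(v, float) else v for v in row)
            for row in rows
        )

    return norm(rows_a) == norm(rows_b)
```

Order-insensitive comparison matters because an equivalent query against the same warehouse may legitimately return rows in a different order unless both sides specify `ORDER BY`.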

Visual verification via headless browser. Automated browser sessions navigated each Looker dashboard, stepped through every filter and filter option, and compared the rendered output against the Streamlit equivalent. This caught visual and interaction discrepancies that SQL-level checks alone would miss.

Within two to three weeks of active migration, all 500+ assets were transferred to Streamlit.

Phase 4: Phased User Migration

With the technical migration complete, Narona Data managed a phased rollout across business teams over several weeks. Each team received training on the new system, followed by stepwise deactivation from Looker. This prevented a thundering-herd problem on the contract end date and ensured genuine adoption rather than just deployment.

Phase 5: Self-Service Analytics Agents

The metadata inventory created during migration turned out to be far more valuable than a one-time migration artifact.

Every Looker dashboard and its underlying queries represented a verified question-answer pair: a real business question asked by a stakeholder, answered by a validated SQL query written by an analyst. Narona Data extracted approximately 1,000–2,000 of these verified queries and used them to build a semantic layer for 12 domain-driven self-service analytics agents.
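Extracting those question-answer pairs from the inventory can be sketched as a simple traversal: each tile pairs a stakeholder-facing title (the question) with a validated SQL query (the answer). The field names are assumptions about the inventory schema.

```python
def extract_qa_pairs(inventory):
    """Pull verified question-answer pairs from a dashboard metadata
    inventory: tile title as the question, its validated SQL as the
    answer, tagged with the dashboard's business domain."""
    pairs = []
    for dash in inventory:
        for tile in dash.get("tiles", []):
            if tile.get("sql") and tile.get("title"):
                pairs.append({
                    "question": tile["title"],
                    "sql": tile["sql"],
                    "domain": dash.get("domain", "general"),
                })
    return pairs

# Illustrative inventory entry with one verified tile.
pairs = extract_qa_pairs([{
    "title": "Sales Overview",
    "domain": "finance",
    "tiles": [{
        "title": "Monthly revenue by region",
        "sql": "SELECT region, SUM(amount) FROM sales GROUP BY region",
    }],
}])
```

The domain tag is what later allows the pairs to be routed to the right one of the 12 domain-specific agents.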

These agents were hosted on the Snowflake Intelligence platform as Cortex agents. Business users could now ask data questions directly and receive answers grounded in the organization's actual query history — without waiting for an analyst.
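Snowflake's Cortex Analyst semantic models support a verified-queries section, which is a natural home for pairs like these. A hedged fragment of what such an entry can look like follows; the names, question, and SQL are invented for illustration.

```yaml
verified_queries:
  - name: monthly_revenue_by_region
    question: "What was monthly revenue by region?"
    sql: "SELECT region, SUM(amount) FROM sales GROUP BY region"
```

Because each entry was already validated by an analyst in Looker, the agents answer from queries the organization has trusted before, rather than from free-form SQL generation.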

Continuous Improvement Loop

Narona Data built an evaluation system that tracked inquiry volume, session frequency, and session duration across the self-service agents. Using conversations from experienced business users, the system assessed whether the semantic layer captured the granularity needed for each domain.

Where gaps were identified, an automated solution generated daily pull requests to extend the semantic layer — ensuring the knowledge base evolved continuously with stakeholder usage rather than requiring manual maintenance cycles.
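The gap-detection step in that loop can be sketched as a comparison between the terms business users actually ask about and the terms the semantic layer defines. The tokenization below is deliberately naive and the stopword list is an assumption, but it shows the shape of what feeds the daily pull requests.

```python
import re

def find_semantic_gaps(conversations, semantic_terms):
    """Return terms users asked about that the semantic layer does not
    yet define; these become candidates for the automated daily PR."""
    stopwords = {"how", "many", "by", "the", "a", "of", "in", "for", "what", "is"}
    asked = set()
    for text in conversations:
        asked.update(re.findall(r"[a-z_]+", text.lower()))
    return sorted(asked - set(semantic_terms) - stopwords)

# Illustrative run: one user question against a semantic layer that
# already knows "region" but not "churned_members".
gaps = find_semantic_gaps(
    ["How many churned_members by region?"],
    {"region"},
)
```

In production this kind of signal would be aggregated across many sessions before a term is proposed, so that one-off phrasings do not pollute the semantic layer.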

Phase 6: Derived Data Products

The comprehensive metadata inventory and semantic layer enabled additional products beyond self-service analytics, including improved PHI catalogs for compliance and data governance. What started as a migration artifact became a foundational asset for the data organization.

Results

| Metric | Before | After | Confidence |
| --- | --- | --- | --- |
| Dashboard turnaround (median) | ~1 week | ~30–60 minutes | High |
| Dashboard turnaround (p95) | ~3 months | Same day | High |
| Technology cost | Baseline | 30% reduction | High |
| Headcount cost | Baseline | 50% reduction | High |
| Data asks handled (self-service + team) | Baseline | 5x increase | High |
| Dashboards migrated | 500+ manually maintained | 500+ autonomously migrated | High |
| AI migration zero-shot rate | 10% | 65% | High |
| Self-service agent domains | 0 | 12 Cortex agents | High |

Technology cost reduction reflects BI license elimination and infrastructure consolidation. Headcount cost reduction reflects the shift to self-service, reducing the need for analyst-mediated requests. The 5x capacity increase is measured across both self-service and data team output combined.

Closing Remarks

This engagement started as a BI migration and ended as a platform transformation. The AI-driven migration pipeline proved that large-scale dashboard migrations don't need to be manual, multi-quarter projects — but the real leverage came from treating the accumulated metadata as a training corpus for self-service agents rather than discarding it after migration.

The semantic layer, evaluation loop, and automated PRs created a system that gets smarter with use. The data team shifted from answering repetitive questions to building higher-value products — and the organization handles 5x more data asks than before with lower costs.

Ready to Talk?

Facing a similar BI migration, self-service analytics challenge, or AI-driven automation opportunity? Narona Data offers a free consultation to help you cut through complexity and build a path forward.

Get in touch →