
Analytics8

HQ
Chicago, Illinois, USA
Total Offices: 3
200 Total Employees
Year Founded: 2002

Analytics8 Innovation & Technology Culture

Updated on January 28, 2026

Analytics8 Employee Perspectives

What’s your rule for fast, safe releases — and what KPI proves it works?

I always compare an agent to an intern when talking to clients about using AI in production. It can take a lot of work off your plate and even surprise you with how well it handles something complex, but you must validate everything before approving its output.

I use Visual Studio Code to review changes made by the agent directly inside edited files. The agent’s updates appear inline, so I can keep, change or discard each one. It’s like reviewing a pull request with each prompt. Validating one piece of work at a time keeps accuracy high.

Model Context Protocol (MCP) tools add a layer of safety by giving agents well-defined tools for running specific actions, which cuts down on unpredictable behavior. Sometimes, however, the agent skips the tool, and I need to adjust the prompt to ensure it uses the tool to complete the requested action.
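The guardrail described above can be sketched as a simple validation gate: accept an agent's proposed action only if it routes through a registered tool, and re-prompt otherwise. This is a hedged, illustrative sketch in plain Python, not the MCP API itself; the tool names and the action dictionary shape are assumptions for illustration.

```python
# Illustrative sketch: reject agent actions that bypass the tool layer.
# Tool names and the action format are made up, not a real MCP interface.

REGISTERED_TOOLS = {"run_sql", "fetch_metadata"}

def validate_action(action: dict) -> tuple[bool, str]:
    """Accept an agent action only if it uses a registered tool."""
    tool = action.get("tool")
    if tool is None:
        # Agent produced free-form output instead of a tool call.
        return False, "Re-prompt: agent bypassed the tool layer."
    if tool not in REGISTERED_TOOLS:
        return False, f"Re-prompt: unknown tool '{tool}'."
    return True, "ok"

ok, msg = validate_action({"tool": "run_sql", "args": {"query": "SELECT 1"}})
print(ok, msg)
```

In practice the "re-prompt" branch would feed the rejection message back to the agent, which is the adjustment step described above.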

What proves it works is time to value. For large tasks, it’s far quicker for engineers to review migrated SQL that the agent creates and documents than to build it manually. Additionally, since adding MCP tools to our workflow, I’ve seen a drop in rework and re-prompting. We’re able to maintain accelerated AI speed without sacrificing quality.


What standard or metric defines “quality” in your stack?

Quality in my stack means the agent understands my environment and builds accurate work without forcing me into repeated re-prompts or rework. Quality also means multiple engineers get consistent results when they ask the agent for help. Hallucinations and invented logic don’t meet the standard, so I actively reduce them.


Name one AI/automation that shipped recently and its impact on your team.

During a recent onsite hackathon focused on clearing our toughest backlog items, we built and deployed an MCP-powered agent inside Visual Studio Code. It automates one of the most time-consuming parts of a modernization project: analyzing legacy transformation logic and generating initial dbt models. 

The agent connects to Snowflake through managed MCP servers and to Collibra metadata through a custom MCP server we built. It pulls metadata from Collibra, interprets legacy logic, and then uses dbt and Snowflake servers to recreate that logic, declare sources, build staging layers, draft first-pass intermediate models, create tests, and generate documentation.
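The "declare sources, build staging layers" step can be illustrated with a minimal generator: given column metadata (as an agent might pull from a catalog like Collibra), emit a first-pass dbt staging model and a schema.yml entry with basic tests. This is a hedged sketch; the source, table, and column names are invented for illustration, and a real agent would derive them from legacy logic.

```python
# Illustrative sketch: generate a first-pass dbt staging model and a
# schema.yml entry from column metadata. Names are hypothetical.

def staging_model(source: str, table: str, columns: list[str]) -> str:
    """Emit a simple staging model that selects columns from a declared source."""
    select = ",\n    ".join(columns)
    return (
        f"with src as (\n"
        f"    select * from {{{{ source('{source}', '{table}') }}}}\n"
        f")\n"
        f"select\n    {select}\nfrom src\n"
    )

def schema_yml(table: str, columns: list[str]) -> str:
    """Emit a schema.yml entry with a not_null test per column."""
    lines = ["version: 2", "models:", f"  - name: stg_{table}", "    columns:"]
    for c in columns:
        lines += [f"      - name: {c}", "        tests:", "          - not_null"]
    return "\n".join(lines)

print(staging_model("legacy", "orders", ["order_id", "amount"]))
```

A first pass like this still needs engineer review, which matches the validate-everything workflow described earlier.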

The process used to take days of manual untangling. Now, with a few sharp prompts, the agent delivers solid first passes in minutes. In the hackathon alone, we completed what would normally be a two-week sprint in just a couple of days. The automation removed most of the repetitive translation work, produced surprisingly strong early versions of complex models, accelerated our client’s modernization timeline, and allowed them to focus on higher-value warehouse design and governance decisions instead of reviewing logic by hand.

Chris Domain, Managing Consultant

What is the unique story that you feel your company has with AI? If you were writing about it, what would the title of your blog be?

The blog title would be “Clean Data In, Real Results Out: Our Ground-Level Advantage in AI.” Our story with AI starts where we’ve always been strong — data quality. Everyone talks about model performance, but we focus on the part that’s often overlooked: clean, consistent and well-defined data. That’s what makes AI actually work.

We’ve helped clients across industries wrangle messy, siloed data into usable formats — whether it’s structured, semi-structured or unstructured — and those same skills are critical for AI success. The principles we’ve used for decades still apply; if the data isn’t clear and governed, the insights won’t be either. We’re not chasing shiny tools — we’re applying real data expertise to real AI use cases.


What are you most excited about in the field of AI right now?

I’m excited by how quickly AI is becoming usable. Until recently, if you wanted to build something with generative AI, you needed full-scale development — custom UIs, infrastructure and engineering support. Now, that’s changing. The frameworks are maturing and the vendors we already partner with are embedding AI into their tools in a way that lets us build fast, useful solutions for clients. Whether it’s applying RAG techniques to unstructured data or using semantic models to add context to structured data, we can now turn proprietary information into actionable insight — without reinventing the wheel. That shift is a game-changer.
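The "retrieve, then augment the prompt" shape of a RAG workflow mentioned above can be sketched in a few lines. This is a deliberately simplified, hedged example: a production system would use embeddings and a vector store, while this version scores documents by plain term overlap just to show the flow; the sample documents are invented.

```python
# Illustrative sketch of RAG retrieval: pick the document that best
# matches the query, then prepend it as context for the model prompt.
# Term-overlap scoring stands in for real embedding similarity.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the most query terms in common."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def augment(query: str, docs: list[str]) -> str:
    """Build a prompt that grounds the question in retrieved context."""
    context = retrieve(query, docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The warehouse uses dbt staging models on Snowflake.",
]
print(augment("what is the refund policy", docs))
```

Swapping the scoring function for embedding similarity is the main change needed to move from this sketch toward the proprietary-data use cases described above.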


AI is a constantly evolving field. Very few people coming into these roles have years of experience to pull from. Explain what continuous learning looks like on your team. How do you learn from one another and collaborate?

We treat AI like any evolving toolset — we stay grounded in core data principles while exploring what’s new. Everyone on the team keeps an eye on the latest model releases and vendor updates, but our real value comes from translating that into client impact. We test tools in private preview, share lessons through internal whitepapers and video walkthroughs, and collaborate to figure out where new frameworks can plug into existing analytics work. It’s not about chasing trends — it’s about asking, “What’s worth trying, and how do we apply it responsibly to real data problems?”