
How I used AI agents to optimize product operations

May 5, 2026
Conversational analytics leveraging Fivetran, Jira, and Claude saves analytical effort and boosts productivity.

As Chief Product Officer, I have long wanted to find ways to optimize our valuable engineering and product resources. I needed a way to access, analyze, and act on project management data at scale. But even though the answer had to be somewhere in the data our team generates from daily activities, I couldn’t easily find it. 

Like a growing number of organizations around the world, we use Jira for project management. Usually, we answer resource allocation questions by having our product analysts pore over our Jira data, clean it, model it, and build dashboards and reports. However, our product analytics bandwidth is, like that of most organizations, also severely constrained, leading to project turnaround times measured in weeks, sometimes months.

Conversational analytics, enabled by generative and agentic AI, has been gaining traction as an alternative to traditional analytics. Instead of building dashboards and reports using BI platforms and SQL, or waiting on analysts to do so, you can make queries in natural language and receive detailed, actionable insights. This shift is enabled by LLMs that translate natural language into instructions, agentic AI that can operate software, and MCP servers that can turn natural language into valid API calls to programmatically control software.

[CTA_MODULE]

I tried querying the application through its MCP server

Jira offers a Remote MCP server, and in principle you can send analytical queries straight to it. That was my first attempt: querying Jira through its MCP server by way of Claude Desktop. I immediately ran into data volume limits and could not join the data with additional context, such as team structure. As a result, I could not assemble any kind of comprehensive answer.

MCP servers for applications like Jira are fundamentally designed for programmatically controlling business operations, not for extracting data for analytics. Most application APIs will throttle you with usage limits. You also can't apply SQL operations like filters, aggregations, and joins, or otherwise transform the data, and you certainly can't combine it with data from other sources.
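To make the contrast concrete, here is a minimal sketch using an in-memory SQLite database and an invented, simplified Jira-like schema. It shows the kind of join-and-aggregate query a warehouse supports but an application MCP server does not; every table and column name here is hypothetical.

```python
import sqlite3

# Hypothetical, simplified Jira-like schema. An application MCP server
# can't run joins or aggregations like the query below; a warehouse can.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue (key TEXT, team_id INTEGER, category TEXT, effort_days REAL);
CREATE TABLE team (team_id INTEGER, team_name TEXT);
INSERT INTO issue VALUES
  ('OPS-1', 1, 'incident', 3.0),
  ('OPS-2', 1, 'incident', 5.0),
  ('FEAT-9', 2, 'feature', 8.0);
INSERT INTO team VALUES (1, 'Platform'), (2, 'Growth');
""")

# Join issue data with team context, then aggregate -- exactly the
# operations that are unavailable when hitting the application API directly.
rows = conn.execute("""
    SELECT t.team_name, COUNT(*) AS incidents, SUM(i.effort_days) AS effort
    FROM issue i JOIN team t ON i.team_id = t.team_id
    WHERE i.category = 'incident'
    GROUP BY t.team_name
""").fetchall()
print(rows)  # [('Platform', 2, 8.0)]
```

The same query against BigQuery simply swaps the connection, which is why landing the data in a warehouse first pays off.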

I had more luck querying the data lakehouse's MCP server

I found a better way to leverage conversational analytics for project management and productivity data using Fivetran as well as other off-the-shelf tools.

This time, I first integrated Jira data into BigQuery using Fivetran. Our connectors move the data — both structured and unstructured — that populates the data lake or lakehouse, along with the context agents need to run document searches and analytical queries. We do that with low latency and minimal impact, thus avoiding the issues of hitting an application MCP directly.

Then, I queried BigQuery through its MCP server. Modern data lakes and data lakehouses like BigQuery are essential for analytics as well as every other data use case. Centralized data repositories realize the value of data by matching and combining records across sources, creating a comprehensive, unified, and accurate view of customers and operations. Moreover, the interoperability of open table formats ensures that teams can swap query engines as needed for reporting, predictive modeling, generative AI, or agentic AI.

In hindsight, this should have been my first instinct. 

After this solution occurred to me, all it took was connecting some off-the-shelf tools, writing an Agent Skills file for Claude, and running the query. Through Claude’s main desktop interface, you invoke the markdown as you would any Agent Skill, then type in your prompt. Claude sends instructions to the MCP server for BigQuery, which translates the natural language into valid API calls and SQL to query the appropriate data models.
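As a toy sketch of that flow: below, a hypothetical template lookup stands in for the natural-language-to-SQL translation that Claude and the BigQuery MCP server actually perform, and an in-memory SQLite table stands in for BigQuery. The question string, table, and figures are all invented for illustration.

```python
import sqlite3

# Stand-in for the translation step: in production, Claude plus the
# BigQuery MCP server turn the natural-language prompt into SQL.
SKILL_TEMPLATES = {
    "incident effort by month": """
        SELECT month, SUM(effort_days)
        FROM issue
        WHERE category = 'incident'
        GROUP BY month
    """,
}

# Stand-in for the warehouse, loaded with made-up tickets.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue (key TEXT, category TEXT, month TEXT, effort_days REAL);
INSERT INTO issue VALUES
  ('OPS-1', 'incident', '2026-03', 4.0),
  ('OPS-2', 'incident', '2026-03', 2.5),
  ('FEAT-3', 'feature',  '2026-03', 9.0);
""")

def answer(question: str):
    sql = SKILL_TEMPLATES[question]  # the LLM does this part in reality
    return conn.execute(sql).fetchall()

result = answer("incident effort by month")
print(result)  # [('2026-03', 6.5)]
```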

Because the Agent Skills file contains detailed instructions for the entire flow above, including how to query BigQuery, the user's experience is extremely simple:

  1. From the Claude desktop app:
    1. Invoke the Skill.md file.
    2. Type in your query and hit "enter."
  2. The interface spends some time "thinking."
  3. You receive summary results in the interface, along with a detailed report as a Markdown file.

Even a short, simple query like “How much did engineering spend on incidents in March?” returns a detailed, downloadable report listing total incidents closed, total effort days, average effort per incident, and much more. Overall, our report classified about 28k tickets and 28k engineering days across 51 teams into incidents, bugs, debt, feature improvements, and projects. Two major findings from the report include:

  1. We learned that ~45% of company R&D capacity went to incidents, bugs, and debt, and identified teams with exceptionally high incident loads. By identifying and labeling incidents and their impacts, we were able to prioritize projects to accelerate incident resolution.
  2. One of our R&D teams had been almost completely consumed by infrastructure work the previous quarter, spending only 14% of its time on product work. We are now revisiting and resetting the scope of the team's work.
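As a rough illustration, the per-category metrics such a report lists (total incidents closed, total effort days, average effort per incident) reduce to simple aggregations over the classified tickets. The tickets below are made up:

```python
# Made-up classified tickets; in the real report these come from the
# warehouse after the Skill's classification rules are applied.
tickets = [
    {"key": "OPS-1", "category": "incident", "effort_days": 3.0},
    {"key": "OPS-2", "category": "incident", "effort_days": 1.0},
    {"key": "BUG-7", "category": "bug",      "effort_days": 2.0},
]

incidents = [t for t in tickets if t["category"] == "incident"]
total_closed = len(incidents)
total_effort = sum(t["effort_days"] for t in incidents)
avg_effort = total_effort / total_closed

print(total_closed, total_effort, avg_effort)  # 2 4.0 2.0
```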

This impactful report would have taken our engineering analytics team multiple sprints to build manually. Instead, the only real work required was writing the Skill markdown file, which augmented the prompt by stipulating basic parameters and assumptions for the project, such as:

  • General business context, such as team structure, typical project cadence, etc.
  • How to access the data, including projects, schemas, and tables
  • The underlying data model, in general
  • Which tables to join, aggregate, and otherwise transform to produce which metrics
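For illustration, here is a minimal sketch of what such a Skill markdown file might contain. Every project, dataset, table, and metric definition below is invented, not our actual configuration:

```markdown
---
name: jira-effort-report
description: Answer resource-allocation questions from the Jira data in BigQuery.
---

# Business context
- Teams and their managers are listed in `analytics.team_directory`.
- Sprints run two weeks; effort is tracked in worklogs, not story points.

# Data access
- Query the BigQuery project `acme-analytics`, dataset `jira`.
- Issues live in `jira.issue`; worklogs live in `jira.worklog`.

# Metrics
- Effort days per issue = SUM(worklog.time_spent_seconds) / 28800.
- Join `issue.team_id` to `team_directory.team_id` before aggregating by team.
- Classify issues into incidents, bugs, debt, improvements, and projects
  using the `issue.issue_type` and `issue.labels` fields.
```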

A technical answer for a business question

I started off posing a business question that would have taken weeks to answer by hand, and ended up with a technical solution that a single person could execute in hours or, at most, days, answering that critical business question along the way.

Aside from the learnings about our company’s product team, there are two major takeaways from this experience:

  • The first is that generative and agentic AI, with the abilities to interpret natural language and perform computer tool use, respectively, enable conversational analytics. This allows you to sidestep the aggravation involved in manually building dashboards and reports.
  • The second is that, despite the tempting prospect of using MCP servers to talk directly to applications, data integration for analytics remains a critical step. You should talk to analytical environments for analytical insights, and talk to applications to programmatically control them. There are no shortcuts to good analytics; you need to build an open, interoperable data infrastructure to ensure reliability and manage costs.

When we used to say “democratize data,” there was a heavy implication that everyone in an organization had to learn and use SQL to fully take advantage of data. Now, it requires little more than the ability to ask the right questions.

[CTA_MODULE]
