Product thinking for data teams

To get the most out of data centralization and democratization, treat data assets like products
May 12, 2023

Data centralization, the foundational use case for data movement, enables your organization to turn raw data into assets used by analysts and decision makers.

These assets include:

  1. Data models – tables and schemas that represent important attributes of real-world phenomena and are easily used to produce metrics
  2. Visualizations – images that summarize the output of data models
  3. Dashboards – collections of visualizations
  4. Reports – writings meant to support or guide decisions
  5. Operationalized internal data – embedded dashboards and metrics for decision support 
  6. Data services and APIs – data made available as streams for third parties to consume
  7. AI/ML predictive models (algorithm + data set) – fed into reports or dashboards, or directly integrated into products
  8. Customer-facing digital products – applications that incorporate data such as Google Maps, Netflix recommendations, etc.

More advanced uses of data require treating data assets as products. Data democratization requires the sustained production and management of data assets for internal consumers. Although beyond the scope of this discussion, longer-term, you may also find yourself building data solutions for external customers.

As a data professional, it’s important to adopt the mentality that you are creating products for end users of data assets to help them solve practical problems. 

This mentality is called product thinking.

What is product thinking?

Product thinking is a problem-solving method that uses first principles to understand problems and identify solutions. The fundamental question posed and answered by product thinking is “What problem does this product solve?”

The most critical element of product thinking is to place the user first, along with the jobs they’re trying to accomplish. The second most important element is to rapidly and iteratively produce and refine products in response to new insights and changing needs.

The basic process looks like this:

Identify

  • Understanding users
  • Gathering requirements

Design

  • Defining scope
  • Managing expectations

Develop

  • Rapid prototyping
  • Productionizing

Launch

  • Marketing and rolling out the product
  • Training users via office hours and internal communications
  • Driving adoption, including through self-service whenever possible

Assess

  • Evaluating against expectations and KPIs

How to identify problems

The simplest way to identify a problem or job to be done is to ask your users. A more difficult (but often more revealing) way is to carefully observe how your users go about solving their problems, i.e. an ethnographic approach. It’s sometimes tricky to determine the precise nature of the problem your user wants to solve. Sometimes, your user isn’t even able to easily articulate it. The importance of this initial step is hard to overstate – if you don’t properly understand what problem you’re solving, almost nothing else matters.

For internal data products, your users include both leaders and individual contributors across your organization. Because business functions across an organization are highly specialized, you should consider organizing your data teams using a hub-and-spoke model. This consists of a centralized analytics team that reports directly to senior leadership and produces less-differentiated data products, as well as satellite teams that are embedded with specific functional groups within your organization. This enables your analysts to develop familiarity and expertise in specific functional areas of the business and, above all, to better understand the end users of their data products.

How to design a solution

You should determine the full progression of steps required to complete your user’s “job.” You should also set firm boundaries about what the job, and its solution, aren’t.

For the purposes of analytics, you’ll need a deep understanding of the data sources used. The specific domain knowledge required for every business function makes yet another strong argument for a hub-and-spoke approach to organizing your data team.

How to develop the solution

Build an MVP iteratively. A “minimum viable product” is a prototype that does everything required to solve a specific and discrete problem. A successful MVP will validate your understanding of that problem.

Over multiple iterations, you will add features to improve and enrich the experience for your users. In the case of analytics, there are always more metrics to collect, richer visualizations and other potential improvements.

How to launch the solution

Launching internal data products consists of making an internal announcement and familiarizing your users with the proper use of the new assets. This may require detailed documentation and hands-on instruction.

How to assess your impact

Although most of your data products are unlikely to directly generate revenue, you can assess the success of your internal products by user satisfaction, through metrics like the net promoter score (NPS) and adoption rates. 
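
As a sketch of how these satisfaction metrics are computed: NPS buckets 0–10 survey responses into promoters (9–10) and detractors (0–6), and the score is the percentage-point gap between them. The function names below are illustrative, not from any particular library:

```python
def nps(scores):
    """Net promoter score from 0-10 survey responses:
    % promoters (9-10) minus % detractors (0-6). Passives (7-8) dilute both."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def adoption_rate(active_users, eligible_users):
    """Share of eligible users who actually use the data product."""
    return active_users / eligible_users
```

For example, five responses of 10, 9, 8, 7 and 3 yield two promoters and one detractor, for an NPS of 20.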

Suggested starting place

Low-effort, high-impact projects are good candidates for MVPs.

Early examples common to many companies include dashboards and reports covering:

Revenue

  • Annual recurring revenue (ARR)
  • Net revenue retention (NRR)
  • Unit economics: customer acquisition cost, sales efficiency
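
These revenue metrics follow simple formulas. For instance, net revenue retention is the revenue retained from existing customers over a period, including expansion and net of contraction and churn, divided by starting ARR; new business is excluded. A minimal sketch with illustrative argument names:

```python
def net_revenue_retention(starting_arr, expansion, contraction, churned):
    """NRR over a period, as a fraction of starting ARR.
    Values above 1.0 mean the existing customer base grew on its own."""
    return (starting_arr + expansion - contraction - churned) / starting_arr
```

A book of $1M ARR that expands by $150K while losing $30K to downgrades and $70K to churn retains 105 percent of its revenue.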

Sales and Marketing

  • Customer growth and churn rate
  • Month-over-month revenue growth
  • Marketing qualified lead (MQL) and conversion metrics
  • Other funnel metrics
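
Churn and growth rates are likewise simple ratios over a period; a sketch (function names are illustrative):

```python
def churn_rate(customers_start, customers_lost):
    """Fraction of customers at period start who were lost during the period."""
    return customers_lost / customers_start

def mom_growth(prev_month_revenue, this_month_revenue):
    """Month-over-month revenue growth as a fraction of the prior month."""
    return (this_month_revenue - prev_month_revenue) / prev_month_revenue
```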

Product

  • Daily, weekly and monthly active users
  • Customer journey
  • Feature usage
  • Net promoter score
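
Active-user counts reduce to counting distinct users per period in a product event log. A stdlib-only sketch, assuming events arrive as (user_id, day) pairs; weekly and monthly counts work the same way with a coarser bucket:

```python
from collections import defaultdict

def daily_active_users(events):
    """events: iterable of (user_id, day) pairs from a product event log.
    Returns the number of distinct active users per day."""
    users_by_day = defaultdict(set)
    for user_id, day in events:
        users_by_day[day].add(user_id)  # sets deduplicate repeat visits
    return {day: len(users) for day, users in sorted(users_by_day.items())}
```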

Then, develop a roadmap laying out all the elements you need to support the MVP as well as further initiatives. For instance, suppose you have three data sources that must be combined to produce your sales and marketing funnel.

Assuming you have an automated data pipeline:

  1. Sync all three sources
  2. Explore and document the data from each source
  3. Create data models for reports and dashboards
  4. Route data into operational systems like your CRM as needed
  5. Build predictive models
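
The data modeling step amounts to joining the three sources on a shared key and deriving funnel metrics from the result. A plain-Python sketch, in which the source names, fields and sample rows are all hypothetical:

```python
# Hypothetical rows from three synced sources, keyed by lead email:
# a marketing tool (leads), a CRM (opportunities) and billing (invoices).
leads = [{"email": "a@x.com", "source": "ads"},
         {"email": "b@x.com", "source": "organic"},
         {"email": "c@x.com", "source": "ads"}]
opps = {"a@x.com": "demo", "c@x.com": "closed_won"}
invoices = {"c@x.com": 1200}

# Left-join the three sources so drop-off at each stage stays visible.
funnel = [{**lead,
           "stage": opps.get(lead["email"]),
           "revenue": invoices.get(lead["email"])}
          for lead in leads]

# Lead -> paying customer conversion rate, ready for a dashboard.
conversion = sum(row["revenue"] is not None for row in funnel) / len(funnel)
```

In practice this modeling would live in your warehouse as SQL, but the shape of the join is the same: preserve the top of the funnel and annotate each lead with how far it got.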

Although internal-facing data products lack the cachet and glamor of customer-facing products, they are extremely important for ensuring smooth operations in your organization. As a data professional, you should spare no effort in applying product thinking to deliver data assets. And who knows – eventually, you may find yourself producing data assets that are integrated into customer-facing products.

[CTA_MODULE]
