Why context is key to building reliable data pipelines

The data catalog is a critical step in the movement toward becoming a data-driven business. Here are 4 table-stakes questions to ask yourself to deliver reliable, trusted data.
March 9, 2023

Data pipelines drive data success

Everyone wants the data they need, but few understand how that data actually gets to them. In this sense, data pipelines are the unsung heroes of the data stack.

Pipelines connect the dots or, in this case, the data systems. And data engineers make it all work. Their work goes well beyond data movement and data modeling: data quality, security and privacy, and scaling operations all need to be handled to deliver reliable, trusted data.

The data integration iceberg from Charles Wang, Product Evangelist at Fivetran


Our team at Alation knows this well. The key to our success is our own use of Alation, which provides visibility across all these systems and simplifies the workflows of everyone, including our data engineering team.

Good data pipelines are key to delivering fast and compliant data access. At their best, they lead to:

  • Faster answers to business questions
  • Better input to create proactive AI models
  • Self-service to let everyone explore data

What could possibly go wrong?

With all that value to be gained, it can be tempting to jump right in, create lots of data pipelines, and start answering questions as fast as you can. But just because we can does not mean that we should. Leaders would be wise to take a lesson from the Navy SEALs who, long ago, figured out that ‘slow is smooth, smooth is fast.’ This motto came from asking: How can teams work together, quickly and smoothly, in chaotic conditions? Chris Fussell, of the McChrystal Group, shares that taking a moment to coordinate how a big team exits a helicopter ensures the team completes the maneuver as quickly as possible.

The same is true for data pipelines: Knowing the right questions to ask – and taking the time to coordinate and converse with your team before acting – is key to truly going fast.

Take a moment and work with your team to ask:

  • Do we have the right datasets?
  • Are there any issues with the data?
  • Who can we team up with to speed up data movement?
  • Who will benefit from the data and how?

Answering these questions creates the ‘smooth’ part of the equation and lets you and your team go fast – with precision and reliability.

Data intelligence delivers answers

But let’s back up. Where do you get the information to host these kinds of informed conversations in the first place? Data intelligence in a data catalog like Alation delivers contextual answers to table-stakes questions about data: Where does it sit? Where did it come from? Who uses it the most, and how?

Answering these questions up-front enables all users to have action-oriented conversations with their colleagues about pipelines.

1. Which is the right dataset?

What data is actively in use, and which columns are the most popular? This helps your team pick the right data to move. And with data lineage, you can see where the data has already been – from its sources in the operational systems to its destinations in analytics and visualization tools. And with the new Metadata API from Fivetran, you can even see how that data was moved.
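To make "seeing how the data was moved" a bit more concrete, here is a minimal Python sketch of pulling table-level mappings from the Fivetran Metadata API. The endpoint path, response fields, connector ID, and credentials below are illustrative assumptions, not details from this post; check the Fivetran API reference for the exact shapes in your account.

```python
# Minimal sketch: list how a connector mapped source tables to destination
# tables using the Fivetran Metadata API.
# Assumptions (hypothetical, for illustration only): the endpoint path,
# the "name_in_source"/"name_in_destination" fields, the connector id,
# and the credentials. Verify against the Fivetran API docs.
import requests

API_KEY = "your-api-key"            # hypothetical credential
API_SECRET = "your-api-secret"      # hypothetical credential
CONNECTOR_ID = "example_connector"  # hypothetical connector id

resp = requests.get(
    f"https://api.fivetran.com/v1/metadata/connectors/{CONNECTOR_ID}/tables",
    auth=(API_KEY, API_SECRET),  # Fivetran uses basic auth with key/secret
    timeout=30,
)
resp.raise_for_status()

# Print each source table alongside the destination table it landed in,
# assuming the response wraps results in data.items as in other Fivetran
# REST API responses.
for table in resp.json().get("data", {}).get("items", []):
    print(f'{table.get("name_in_source")} -> {table.get("name_in_destination")}')
```

Even a quick report like this helps the conversation: the team can see at a glance which tables are actually flowing before deciding what to move next.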

2. Are there any issues with the data?

With active data governance, everyone can see exactly what data they should use and how to use it. The data catalog is the single place to see trust flags and business definitions, as well as which policies are in effect. And with data lineage, you can trace the impact of any data disruptions and pick the best possible data for your use case.

Data lineage in the catalog gives users critical context.
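For readers who like to see the mechanics, here is a small Python sketch of what lineage-based impact analysis boils down to: walk the "feeds into" edges downstream from a disrupted asset. The table names and the hand-built graph are hypothetical; in practice, a catalog like Alation assembles and visualizes this lineage for you rather than you maintaining it by hand.

```python
# Minimal sketch of impact analysis over a lineage graph.
# The lineage dictionary and asset names are hypothetical examples.
from collections import deque

# Hypothetical lineage: raw source tables feed staging models,
# which feed analytics models and dashboards.
lineage = {
    "raw.orders": ["staging.orders_clean"],
    "staging.orders_clean": ["analytics.daily_revenue", "analytics.churn_model"],
    "analytics.daily_revenue": ["dashboard.exec_kpis"],
}

def downstream_impact(disrupted: str) -> set[str]:
    """Breadth-first walk over lineage edges to collect every affected asset."""
    affected, queue = set(), deque([disrupted])
    while queue:
        node = queue.popleft()
        for child in lineage.get(node, []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

# If raw.orders stops syncing, everything downstream of it is suspect.
print(downstream_impact("raw.orders"))
```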

3. Who can I collaborate with?

All that information on the data means one thing: You are not alone in wanting to work with it. The data catalog not only shows you how the data has been used, but also who uses it most. This is not just the data owners or data stewards. With the modern data stack and self-service platforms, everyone can get access to data. And that means a host of new opportunities for collaboration. You might find data engineers who have developed pipelines you can reuse, or data scientists whose model-building pipelines can save you time and make your work more accurate. Starting from work that has already been done is one sure way to go fast.

4. Who benefits from the data and how?

Once you get that data moving, you still need to make sure everyone can access it and knows how to use it. The data catalog is the one-stop shop for everyone across the organization to access trusted, relevant data, including the data you just identified, validated, moved, and made available. Documentation, tags, and the business glossary ensure that people can understand that data, while trust flags and policies ensure they know how to use it compliantly. This, too, speeds up data access.

Moving all that data is only as valuable as its actual use by everyone in the organization. This last mile is where the data catalog really complements the data engineer’s work.

Data intelligence is the secret ingredient in creating a smooth pipeline process and truly going fast. With a data catalog, data engineers are empowered to deeply understand and leverage the right data to build more powerful pipelines.

Data engineers should use data intelligence to make critical, informed decisions to:

  • Find the right data
  • Know how it can be used
  • Collaborate to speed pipeline development
  • Ensure everyone can access the data

Taking the time to host that conversation, with help from the data catalog, before you dive into building pipelines will ensure their ultimate success.

Hear from Alation at this year's Modern Data Stack Conference on April 4-5, 2023 at their “Look before you leap” session. Register with the discount code MDSCON-ALATIONBLOG to get 20 percent off by March 31, 2023!
