Five key attributes of a highly efficient data pipeline

Not all ELT solutions are created equal. Here are the capabilities your tool needs to efficiently move data.
July 17, 2023

How effectively data pipelines move data depends on how they’re engineered. Less-efficient pipelines increase your data stack costs and require more work from your data engineers, which makes pipeline efficiency worth understanding if you want to save your data team time and control infrastructure costs.

Specifically, look for a data pipeline that:

  1. Normalizes data and provides well-designed schemas
  2. Functions idempotently, automatically recovering from sync failures
  3. Syncs data incrementally
  4. Allows you to granularly de-select columns and tables
  5. Enables programmatic management

NB: In this post, we’re focusing on data pipelines in operation as they extract data from a source and land it in a destination. Two additional factors influence overall pipeline efficiency: pipeline maintenance and data transformation capabilities. Consider choosing a data movement provider that automatically adapts to source changes and offers features that accelerate data transformation.

[CTA_MODULE]

1. Data normalization

Semi-structured data from a SaaS source like Marketo isn’t immediately usable for SQL-based data analysis. So what’s the best way to normalize it into a standard format and create a useful schema for data engineers and analysts?

A well-designed normalized schema provides nearly everything someone needs to know to work with a data set:

  • The tables represent business objects
  • The columns represent attributes of those objects
  • Well-named tables and columns are self-describing
  • The primary and foreign key constraints reveal the relationships between the business objects

A representative self-describing Fivetran schema for Marketo illustrates these properties.

Fivetran schemas are as close to third normal form (3NF) as possible. For SaaS API connectors, we’ve designed the schemas to represent the underlying relational data model in that source, which means data teams can automate extract-load into a normalized schema, freeing up a significant amount of data engineering time for higher-value activities.
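
To make this concrete, here’s a minimal Python sketch of the kind of normalization involved: one nested SaaS API record split into parent and child tables linked by keys. The table and field names below are invented for illustration and don’t reflect Fivetran’s actual Marketo schema.

```python
# Minimal sketch of normalizing a nested SaaS API record into relational rows.
# Table and field names are illustrative, not Fivetran's actual Marketo schema.

raw_record = {
    "id": 1001,
    "email": "jane@example.com",
    "company": {"id": 77, "name": "Acme Corp"},
    "program_memberships": [
        {"program_id": 5, "status": "engaged"},
        {"program_id": 9, "status": "invited"},
    ],
}

def normalize(record):
    """Split one nested record into rows for three normalized tables:
    lead (parent), company (referenced by FK) and program_membership (child)."""
    company = {"id": record["company"]["id"], "name": record["company"]["name"]}
    lead = {
        "id": record["id"],                 # primary key
        "email": record["email"],
        "company_id": company["id"],        # foreign key -> company.id
    }
    memberships = [
        {
            "lead_id": record["id"],        # foreign key -> lead.id
            "program_id": m["program_id"],  # composite key with lead_id
            "status": m["status"],
        }
        for m in record["program_memberships"]
    ]
    return {"lead": [lead], "company": [company], "program_membership": memberships}

print(normalize(raw_record))
```

Each resulting table maps to a business object, each column to an attribute, and the key relationships make the data model legible to anyone writing SQL against it.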

The result is a Fivetran-enabled workflow with automated access to data and no waiting on data engineering.

Normalized schemas also create confidence that data is accurate, reducing back and forth between teams and redundant pipeline management.

Impact of normalization on warehouse costs

If you’re using warehouse-native data integration tools like AWS Glue or Azure Data Factory, you’ll have to normalize the landed data yourself, a process that eats up compute bandwidth and increases costs.

If you’re using a third-party data movement tool that offers normalization, it’s important to know whether the provider performs the normalization within its own systems or within your data warehouse or destination. If it’s the latter, it can get expensive.

At Fivetran, we normalize your data within our own VPC, so you’ll never have to worry about data ingestion processes devouring your warehousing compute bill. Here’s what it looked like when a current customer simultaneously used Fivetran and two other data integration providers to replicate the same data into the same warehouse; the other providers used the customer’s warehouse instance to normalize the data. (Figures are for monthly usage.)

Data movement provider    Warehouse credits used    Percentage of compute cost
Provider 1                143.69                    61%
Provider 2                69.33                     30%
Fivetran                  6.74                      3%

2. Idempotence

In the context of data movement, “idempotence” means that re-running the same operation produces the same result: a sync that fails and is retried will never create duplicate data in the destination.

For the sake of efficiency, data warehouses load records in batches, so sync progress is recorded by the batch. When a sync is interrupted, it’s often impossible to pinpoint the individual record that was being processed at the time of failure; when the sync resumes, the pipeline starts at the beginning of the most recent batch, and some data is processed again.

With an idempotent process, by contrast, every unique record is properly identified and no duplication occurs. Fivetran engineers idempotence into every data pipeline, so failures don’t produce duplicate data.
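
As a rough illustration of the principle (not Fivetran’s internal implementation), an idempotent load can be sketched as an upsert keyed on each record’s primary key, so replaying a partially processed batch after a failure leaves the destination unchanged:

```python
# Sketch of an idempotent batch load: writes are keyed on the record's primary
# key, so re-processing the same batch after a failed sync cannot create
# duplicates. Illustrative only; not Fivetran's internal implementation.

destination = {}  # stand-in for a warehouse table, keyed by primary key

def load_batch(batch):
    for record in batch:
        destination[record["id"]] = record  # upsert: insert or overwrite in place

batch = [
    {"id": 1, "email": "jane@example.com"},
    {"id": 2, "email": "ravi@example.com"},
]

load_batch(batch)             # first attempt fails partway in a real pipeline...
load_batch(batch)             # ...so the batch is replayed from the beginning
assert len(destination) == 2  # still two rows: replays are harmless
```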

3. Incremental syncs

Another key feature of an efficient data pipeline is the ability to update incrementally. While full syncs are necessary to capture all your data initially, they’re inappropriate for routine updates: they often take a long time, can bog down both the data source and the destination, and consume excessive network bandwidth.

A full sync typically moves one to two orders of magnitude more data than an incremental sync, with a corresponding increase in data warehouse costs.

Make sure your data movement provider syncs data incrementally — and does so in a way that solves the common problems associated with incremental syncs. Otherwise, the inefficiencies caused by those problems will be passed along to your data team.
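
A minimal sketch of the idea: persist a cursor (for example, the latest updated_at value seen), request only records past it, and advance the cursor only after a successful load. The fetch_changed_since call below is a hypothetical source API, not an actual connector method.

```python
# Sketch of cursor-based incremental sync. `fetch_changed_since` is a
# hypothetical source API call, not a real connector method.
import json
from pathlib import Path

STATE_FILE = Path("sync_state.json")

def load_cursor():
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())["updated_at"]
    return "1970-01-01T00:00:00Z"  # first run falls back to a full sync

def save_cursor(cursor):
    STATE_FILE.write_text(json.dumps({"updated_at": cursor}))

def incremental_sync(fetch_changed_since, load_batch):
    cursor = load_cursor()
    rows = fetch_changed_since(cursor)  # only new or updated records
    if rows:
        load_batch(rows)
        # advance the cursor only after a successful load
        save_cursor(max(r["updated_at"] for r in rows))
```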

4. Data selection

Your team likely isn’t interested in analyzing the data in every single table an ELT provider syncs by default into your destination — and if those tables are large, they will significantly increase your data integration and warehousing costs. We commonly see customers looking to avoid syncing logging tables, notification tables and external data feeds that have little or no historical or analytical value to the end user. 

Efficient ELT platforms will allow you to granularly de-select tables and columns that are unnecessary for analysis. Fivetran makes it easy for your team to do this.
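
Conceptually, de-selection is a filter applied before data ever reaches the destination. The sketch below, with made-up table and column names, drops a logging table and a free-text column from the load:

```python
# Sketch of table/column de-selection applied before load.
# Table and column names are made up for illustration.

EXCLUDED_TABLES = {"activity_log"}
EXCLUDED_COLUMNS = {"lead": {"free_text_notes"}}

def apply_selection(extracted):
    """extracted: dict mapping table name -> list of row dicts."""
    selected = {}
    for table, rows in extracted.items():
        if table in EXCLUDED_TABLES:
            continue  # skip tables with no analytical value
        dropped = EXCLUDED_COLUMNS.get(table, set())
        selected[table] = [
            {col: val for col, val in row.items() if col not in dropped}
            for row in rows
        ]
    return selected
```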

5. Programmatic pipeline management

The ability to manage data pipelines via an ELT provider’s API, not just its user interface (UI), can substantially reduce your team’s workload and is essential for anyone building data solutions at scale. The Fivetran API offers an efficient way to perform bulk actions, communicate with Fivetran from a different application, and automate human-led processes using codified logic. Data teams that can programmatically design workflows will have far more time for business-critical tasks and advanced data analysis.

For example, we’ve helped customers use the Fivetran API to create connectors to hundreds of databases — and in one instance over 1,000 — saving them many hours of work in the process.
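
In practice, bulk connector creation reduces to a loop over a list of database configurations. The sketch below calls the Fivetran REST API with Python’s requests library; the endpoint path and payload fields are simplified, so consult the API documentation for the exact request schema and authentication details.

```python
# Sketch of bulk connector creation via the Fivetran REST API.
# Endpoint path and payload fields are simplified; consult the Fivetran API
# docs for the exact request schema, and supply your own API key and secret.
import requests

API_KEY = "your-api-key"
API_SECRET = "your-api-secret"
GROUP_ID = "your-destination-group-id"

databases = [
    {"host": "db1.internal.example.com", "database": "orders"},
    {"host": "db2.internal.example.com", "database": "billing"},
    # ...hundreds more, loaded from your own inventory system
]

for db in databases:
    response = requests.post(
        "https://api.fivetran.com/v1/connectors",
        auth=(API_KEY, API_SECRET),
        json={
            "service": "postgres",  # illustrative source type
            "group_id": GROUP_ID,
            "config": {"host": db["host"], "database": db["database"]},
        },
        timeout=30,
    )
    response.raise_for_status()
    print(f"Created connector for {db['database']}")
```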

One example of an API use case we support: data consultancy Untitled Firm uses the Fivetran API to connect to its customers’ data and create powerful analytics applications.

Choose the most efficient pipeline for your team and business

Selecting an ELT tool or platform that provides efficiency in all the above areas will have a powerful cumulative effect on the performance and efficiency of your data team and stack. It will also have a nontrivial quality-of-life impact — data teams are just happier when they can focus on interesting, high-value work and forget about manual engineering chores or suddenly soaring warehouse costs.

Interested in learning more about how to optimize your data stack and make your team more efficient? Check out our recent ebook, How to choose the most cost-effective data pipeline for your business.

[CTA_MODULE]
