Replacing iPaaS workflows with warehouse-centric data pipelines

Welcome to the data movement movement. Use Fivetran and Hightouch to replace your existing iPaaS workflows and build warehouse-centric data pipelines.
April 18, 2023

In 1999, Salesforce created the first SaaS (Software-as-a-Service) business, giving birth to an ever-increasing trillion-dollar industry. Over the last 20+ years, the SaaS ecosystem has exploded.

According to the latest MarTech map, there are 10,383 vendors in the SaaS space, and this number continues to compound yearly. Even more mind-boggling is that the average company has over 100 SaaS applications. For Enterprise-level organizations, this number can quickly reach the thousands. Thus, every organization (regardless of size) ends up with the same two problems: too many data silos and no ability to keep data in sync between systems.

For years, organizations have relied on a “spaghetti” approach of custom pipelines and point-to-point middleware tools to create a 360-degree view of the customer. However, companies are now leaning on the flexibility of the modern data stack, with the cloud data warehouse acting as the nexus point to build and automate their data pipelines.

This blog post will provide real-life scenarios highlighting how organizations can seamlessly replace Integration Platform as a Service (iPaaS) workflows with modernized warehouse-centric pipelines.

The rise of point-to-point integrations

iPaaS tools arose as the first native solution to address this problem of moving data back and forth between systems, helping companies manage their many data silos. These platforms quickly gained popularity because they offered a low-code interface where users could easily connect to various APIs without writing complex code.

Major platforms like Mulesoft, Dell Boomi, Workato, and Zapier were born out of this era because they introduced what’s known as trigger-based workflows. For the first time, organizations could build both simple and complex workflows with a point-and-click UI.

iPaaS tools powered by point-to-point integrations are popular because they solve simple problems quickly, such as sending real-time notifications to internal teams for every new “Closed-Won” deal in Salesforce or automatically creating a new invoice in Netsuite whenever a deal closes.

Point-to-point integrations use what’s known as an event-based ingestion model. They’re programmed to perform actions based on triggers (e.g., when a field updates in one system, that same field is updated in another system). The main advantage of point-to-point integrations is that they allow business teams to solve use cases quickly on an ad-hoc basis.

Why point-to-point integrations fail

Although point-to-point integrations are helpful for “hacking”-type use cases, companies that implement these brittle workflows at scale to power mission-critical data pipelines inevitably end up hurting rather than helping themselves. There are a few reasons for this.

1. Triggers: Since iPaaS workflows are based on an event-driven architecture, workflows can quickly become highly complex. A trigger has to be implemented whenever a field needs to be updated in another system, which means data can’t be sent between systems until that trigger’s criteria have been met. In addition, all of the transformation logic has to be housed within the individual workflow components, creating additional complexities and dependencies.

2. Custom Code: Below is an example of a relatively simple Workato workflow syncing Snowflake rows to Salesforce Marketing Cloud. In step two, the user is forced to write custom SQL. For users who expected to define components visually rather than in code, this is not a welcome surprise.

3. Data Changes: Data in SaaS tools changes constantly as individual users update fields daily, so it’s only a matter of time before workflows break. Maintaining this at scale is a nightmare because workflows must be updated one at a time, and building a new workflow means starting over from scratch.

4. Dependencies and Errors: Since each workflow step depends on the last, identifying, debugging, and fixing errors can be very challenging, and doing so at scale quickly becomes unmanageable.

5. Failures: In many organizations, a single person is responsible for a given subset of workflows, so if that employee leaves, the organization is left picking up the pieces. If a workflow breaks, companies are stuck searching for a needle in a haystack, only to find the needle and still not know what to do with it.

Warehouse-centric pipelines

Middleware platforms are essentially Swiss army knives. They bundle many tools that can address many problems, but each of those tools is limited. Swiss army knives are great for quick fixes, but in most cases, a purpose-built tool is better suited for the job.

Point-to-point integrations were standardized prior to the existence of the modern data stack. Before cloud data warehouses, companies didn’t have a single source of truth to maintain one consistent view of the customer.

However, when the modern data stack came along, it solved four key challenges.

  1. Data platforms like Snowflake, BigQuery, and Databricks helped companies consolidate their customer data into a single source of truth.
  2. Fivetran introduced fully managed ELT connectors to automate data extraction and ingestion from any data source.
  3. dbt made it easier than ever to orchestrate and run SQL-based transformation jobs directly in the data warehouse (a minimal sketch of such a model follows this list).
  4. Hightouch solved the last-mile problem, allowing companies to activate the data in the warehouse and sync it to the operational tools of business teams.
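
To make steps 2 and 3 concrete, here is a minimal sketch of what a dbt-style SQL model in the warehouse might look like, consolidating Fivetran-loaded Stripe and Hubspot data into a single customer view. The schema, table, and column names are hypothetical and chosen for illustration, not taken from any actual connector.

```sql
-- models/customer_revenue.sql (hypothetical dbt model)
-- Joins Fivetran-loaded billing and CRM data into one customer-level view.

with subscriptions as (
    select
        customer_id,
        plan_name,
        monthly_amount as mrr,      -- illustrative column names
        status
    from stripe.subscriptions       -- schema assumed to be loaded by Fivetran
    where status = 'active'
),

companies as (
    select
        company_id,
        billing_customer_id,
        company_name
    from hubspot.companies          -- schema assumed to be loaded by Fivetran
)

select
    c.company_id,
    c.company_name,
    s.plan_name,
    s.mrr
from companies c
join subscriptions s
    on s.customer_id = c.billing_customer_id
```

Because the model lives in the warehouse, it can be tested, versioned, and reused by every downstream sync rather than re-implemented inside each workflow.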

The warehouse-centric approach to data integration establishes a declarative model where organizations can simply define their data and point it toward an end system for Data Activation.

This declarative model establishes a hub-and-spoke approach where everything flows in and out of the data warehouse, ensuring all operational systems stay in sync with one another and are powered by a single source of truth.

Replacing iPaaS workflows

For many organizations, ripping and replacing existing point-to-point workflows in favor of a warehouse-centric approach can feel overwhelming, so it’s usually best to start with simple pipelines. Here are two example scenarios:

Example 1

You’re a B2B business deploying a PLG (product-led growth) motion. You have a workflow set up between your CRM and your billing tool. Every time a customer signs up for your product or service, this workflow triggers the creation of a new subscription in Stripe.

During this process, your users submit unique personal information (first name, last name, email, payment method, etc.). This event automatically creates a new record in Hubspot, thereby giving your sales and revenue teams a new account to track.

Six months later, that user updates their email address and payment information. Since this workflow only fires on the original signup event, the change never propagates, and Hubspot now houses an outdated view of the customer. Here is an example of a similar workflow in Tray.io.

The same failure mode applies to any point-to-point integration. Anytime core business definitions are managed across various “spokes,” it’s only a matter of time before the data gets out of sync. This problem is exacerbated by the fact that SaaS tools are updated manually by users every day.

Example 2

Imagine a customer increases their subscription from your “Starter” tier ($250 per month) to your “Pro” tier ($600 per month). All of your subscription data is captured in Stripe, but your customer records are maintained in your CRM (e.g., Salesforce).

You have a point-to-point integration set up to ensure your “Customer_ARR” field is updated in Salesforce every time a new subscription is created in Stripe. Unfortunately, you realize that Stripe doesn’t support ARR, so you have to add a calculation to your workflow that multiplies MRR by 12.

If your definition of ARR changes, this entire workflow breaks, and you end up with two completely different definitions. (Note: the intricacies of Stripe, in particular, are far more complicated than this example lays out.)

While it is possible to configure these types of transformations in an iPaaS tool, these same metrics most likely already exist in your data warehouse, so redefining them in a workflow makes it very easy to create mismatched definitions between your warehouse and your downstream SaaS tools.
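
Compare that with defining the metric once in the warehouse. Here is a minimal sketch, assuming a subscriptions table already loaded by Fivetran (the table and column names are hypothetical):

```sql
-- The single place where ARR is defined; every downstream tool inherits it.
select
    customer_id,
    sum(monthly_amount)      as mrr,
    sum(monthly_amount) * 12 as arr   -- if the ARR definition changes, edit it here only
from analytics.subscriptions
where status = 'active'
group by customer_id
```

Downstream syncs read this model, so a change to the ARR logic is made once rather than in every workflow that touches the metric.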

Here is an example of a similar workflow in Workato.

Modernizing workflows

Rebuilding these same pipelines around the warehouse is much simpler. Using Fivetran, you can automatically extract data from your sources (e.g., Stripe or Hubspot) and load it into your warehouse.

With the appropriate data models in place (e.g., ARR, MRR, and LTV), you can use Hightouch to pipe that data back out to your operational systems. Hightouch is entirely declarative, which means you simply define your data and map it to the appropriate fields in the end destination.
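
For example, the query behind a Hightouch model can be as small as the sketch below (the model and column names are hypothetical); each column is then mapped to a destination field, such as the Customer_ARR field from the earlier scenario, in the sync configuration rather than in workflow steps.

```sql
-- Hypothetical model query for a sync to Salesforce.
-- Columns are mapped to Salesforce fields (e.g., arr -> Customer_ARR)
-- in the sync configuration; no trigger logic or custom workflow code required.
select
    salesforce_account_id,
    email,
    plan_name,
    arr
from analytics.customer_revenue   -- the same model can feed many destinations
```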

You don’t have to build and maintain any workflows, and all of your pipelines can run automatically, so every definition in your operational systems will match the one in your data warehouse.

Whereas iPaaS pipelines are “one-to-one,” warehouse-based pipelines can be “one-to-many.” You can reuse the same models to sync data to your operational tools, and you don’t have to start from scratch every time you want to build a new data pipeline.

With Fivetran and Hightouch, instead of managing dependencies, if/then statements, trigger clauses, and transformation jobs, you simply define what data you want to send to each end system.

“Finding ways to streamline the flow of data is critical to our growth. Using Fivetran to get information into our data warehouse and activating that data with Hightouch enables the data team to deliver even more value to our organization and customers.”
- Josiah Brann, Director of Engineering at Discovery Education

Final thoughts

This shift from iPaaS to warehouse-centric pipelines is not just a fad. Employees in many companies are leaving because of the extra hours and manual work required to keep point-to-point integrations running.

There’s no reason to maintain a 25-step Workato workflow when you can simply declare your end state using Fivetran and Hightouch. ELT and Reverse ETL are the future, and companies in every industry are simplifying their architecture to take advantage of this new approach because it ensures that every business team has access to the same core business metrics in the tools they use every day.

