Empowering analysts with new data models for ad reporting

Our six new models are aggregated on the most common granularities for ad-related metrics, so marketing analysts can hit the ground running.
September 2, 2022

On average, marketers use 10 digital touchpoints to communicate with their target audience. Each touchpoint generates robust data. And while you can analyze each touchpoint in its own silo, the real insights are gleaned when you compare and contrast performance across them, uncovering opportunities to optimize your budget and maximize your ROAS.

But that's easier said than done. It's hard to wrangle data from each platform and tie it all together. This traditionally requires manual processes to cleanse, standardize and harmonize data from multiple sources — not to mention the responsibility of ensuring data integrity and quality along the way, or being locked into vendors with obtuse GUIs or black-box logic.

At Fivetran, we wanted to build a different solution, one that accelerates marketers’ time to insight while also being accessible. That's why we built our Ad Reporting package — a data model built on SQL, a skill marketing analysts already have, and extended with the best practices of dbt.

Hundreds of marketing analysts found value in our original Ad Reporting package — but we felt like we could do more to help empower our customers with the right marketing data. After collecting feedback from customers and the dbt community, we’re ready to release a major update — in fact, we felt this update was so big it warranted a v1.0.0 release. It’s our very first v1.0.0 release ever!

The new models

We know that analysts want to view their data at various levels of granularity. Based on that feedback, we decided to introduce new models aggregated on the most common granularities for reporting ad-related metrics. This ensures that no matter the reporting need, your team can hit the ground running.

In addition to our existing URL-level reporting, we’ve added the following models:

  • Account report
  • Campaign report
  • Ad group report
  • Ad report
  • Keyword report
  • Search report

Each report provides the number of clicks (or platform-equivalent metric), spend and impressions aggregated to the specified reporting level by day and platform.
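
Once the package has run, any of these reports can be queried directly for cross-platform comparisons. As a quick illustration, the query below totals spend, clicks and impressions by platform from the campaign-level rollup; the model and column names are assumptions for illustration, so check the package documentation for the exact names.

```
-- Illustrative only: compare totals by platform from the campaign-level rollup.
-- The ref'd model name and column names are assumed, not the package's exact identifiers.
select
    platform,
    sum(spend)       as total_spend,
    sum(clicks)      as total_clicks,
    sum(impressions) as total_impressions
from {{ ref('ad_reporting__campaign_report') }}
where date_day >= '2022-08-01'
group by 1
order by total_spend desc
```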

Where a platform provides a report at a given grain, the corresponding platform model is now built directly from it; otherwise, the model falls back to the next available grain and aggregates up from there. We have found, for example, that using a platform-provided campaign report for campaign-level modeling is more accurate than deriving a campaign report from an ad report, as some campaigns may not have corresponding ads, or vice versa.
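
For a sense of what that fallback looks like, here is a minimal sketch of rolling an ad-grain report up to campaign grain. The model and column names are hypothetical, not the package's actual identifiers.

```
-- Illustrative sketch only: aggregate an ad-grain report up to campaign grain
-- when a platform does not supply a native campaign report.
-- Model and column names are hypothetical.
select
    date_day,
    campaign_id,
    sum(clicks)      as clicks,
    sum(impressions) as impressions,
    sum(spend)       as spend
from {{ ref('stg_apple_search_ads__ad_report') }}
group by 1, 2
```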

Models by platform and new platform package releases

Ad Reporting 1.0.0 also brings with it a brand new package release: Apple Search Ads. We have been waiting to release this package alongside the Ad Reporting release and are very excited to welcome this platform into the Ad Reporting family.

You will also find that we updated the data models for all ad sources aggregated in this rollup. So no matter whether you're analyzing them in aggregate or individually, you will have access to new fields and more accurate modeling. For all reports in Ad Reporting 1.0.0, we have assumed the general hierarchy of Account > Campaign > Ad Group > Ad, and all upstream models have been structured with this hierarchy in mind.

The matrix below captures the packages and all available final model offerings by platform, as well as new additions and updates to existing platform packages. The respective platform-specific terminology is indicated below each checkmark.

As you can see, every single platform has been carefully reviewed and given quite the upgrade!

Please note that models previously named “ad adapter” have been renamed to “url report” for all packages and now add filtering logic to only capture records with non-null URL values. 

Passthrough metrics 

We know that your business might have specific ad reporting needs. While we aim to cover the majority of use cases, we also want to give you the ability to extend our models to meet your requirements. Using our newly released passthrough logic, you can add columns that are not already built into the package and tailor the reports to your needs.

Say you want to add “conversions” to your Apple Search Ads Ad Report, give the field a new name and perform a transformation on that field. You may declare your passthrough metric variables in your project’s dbt_project.yml like so:

```
vars:
  apple_search_ads__ad_passthrough_metrics:
    - name: "conversions"
      alias: "better_conversions_name"
      transform_sql: "cast(conversions_alias as int64)"
```

Now you can refer to this field as “better_conversions_name” and make sure that it’s an integer for your downstream models!
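
A downstream model could then select the renamed metric alongside the standard columns. The ref'd model name below is an assumption for illustration and may differ from the package's actual model name.

```
-- Hypothetical downstream model: the ref'd model name is assumed for illustration.
select
    date_day,
    ad_id,
    spend,
    better_conversions_name
from {{ ref('apple_search_ads__ad_report') }}
```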

Please refer to the package’s README.md for more information on passthrough metrics.

Table identifiers

Not all tables are named equally. We’ve also updated our data models so you can identify source tables based on your own naming conventions. For example, you might prepend the platform name to a table, resulting in 'apple_search_ads_ad_history' instead of the default 'ad_history' referenced in the package.

We've now made the reference names flexible, so you can specify your unique tables.

You can simply declare the following variable in your dbt_project.yml:

```
vars:
  apple_search_ads_ad_history_identifier: apple_search_ads_ad_history
```

Please refer to the package’s README.md for more information on table identifiers.

Better and more robust testing

We’ve added consistent testing across all packages that roll up to Ad Reporting, so you can have confidence that your data is exactly what you expect it to be for your analyses. You can now find grain testing for all staging models in the source packages, final models in the modeling packages and final models in Ad Reporting. We’ve also added freshness testing to all platform packages, so you will know right away if your data is stale. This does, however, increase the number of tests performed in every package; if you wish to disable or override any testing we have incorporated, check out the examples below.

Here’s an example of how to disable all tests for a package:

```
tests:
  apple_search_ads:
    +enabled: false
  apple_search_ads_source:
    +enabled: false
```

And here’s an example of how to disable specific tests:

```
tests:
  apple_search_ads:
    dbt_utils_unique_combination_of_columns_apple_search_ads__ad_group_report_organization_id__campaign_id__ad_group_id__date_day:
      +enabled: false
  apple_search_ads_source:
    dbt_utils_unique_combination_of_columns_stg_apple_search_ads__ad_group_history_ad_group_id__modified_at:
      +enabled: false
```
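
If you would rather keep a test running but have failures reported as warnings instead of errors, dbt's standard test configurations can also be set from dbt_project.yml. Here's a sketch, using the same package names as above:

```
tests:
  apple_search_ads:
    +severity: warn  # report test failures as warnings rather than errors
  apple_search_ads_source:
    +severity: warn
```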

If you wish to override the default freshness in our packages, you can refer to the dbt documentation for instructions on how to do so.

For more information regarding testing configurations, please refer to dbt’s testing documentation.

Future improvements

Are there other features that you feel the community would benefit from? Please feel free to reach out to us here. We would love to hear from you!
