Takeaways on AI from our recent webinar with Microsoft

Discover how organizations can harness the full potential of AI through a modern data stack, as we dive into insights from our recent joint webinar with Microsoft.
June 12, 2023

Have you started leveraging artificial intelligence (AI)? You’re not alone. In our recent joint webinar with Microsoft, Bob Muglia, former technology CEO and Microsoft executive, described AI as “the single largest change in computer science in my lifetime.” 

As AI tools continue to flood the market at a feverish pace, changing the way we work in the process, it’s important to understand how this innovation is possible. In the words of Lindsey Allen, General Manager of Azure Databricks and Applied AI and Data Partnerships at Microsoft, at its most basic level, machine learning (ML) is “data and code.” 

These fundamental components underpin the success of large language models (LLMs) and even the smaller, more niche models sweeping across industries. To fully embrace the potential of this computer-assisted renaissance, organizations must recognize that high volumes of complete, reliable and up-to-date data are critical. 

As we move toward a future of closer collaboration between humans and machines, one that promises faster and more impactful business outcomes, organizations must understand their data stacks and put their data to work for predictive modeling. 

Preparing your data stack for AI 

In the digital era, where innovative technologies like AI and ML are transforming the business landscape, organizations face the challenge of adapting their legacy systems. According to VentureBeat, 88 percent of technology assets are legacy systems, half of which are business-critical. This pressing reality underscores the importance of preparing your data stack to fully capitalize on the potential of AI.

One major obstacle hindering organizations' progress is the siloed nature of data, making it difficult to access and utilize for predictive analytics and AI. To ensure your stack is primed for this new and innovative use of data, it must include two crucial attributes:

1. Reliability in data quality and movement

AI requires large volumes of trusted, normalized data of various types, including both unstructured and structured data, for effective training. However, efficiently moving this data from disparate sources and storing it in a destination that’s readily available and easy to access can prove daunting and risky if your data stack isn’t robust, scalable and secure. 

To address this, George Fraser, CEO and co-founder of Fivetran, recommends “raising the level of abstraction by adopting tools and approaches that enable interaction with data sources and destination tables without diving into lower-level technical details such as CPUs and Java settings.” 

By embracing this higher level of abstraction, organizations can reduce the risks associated with data infrastructure failures. Incomplete, inaccurate or unreliable data can wreak havoc on the output of predictive modeling, underscoring the need for reliability in the data stack, especially given the risk that failures create within data infrastructure. 

When you’re looking to move and leverage data sets, you want to ensure that your methodology is idempotent. In simple terms, idempotence means that executing an operation multiple times should yield the same results as the initial execution. This is especially critical to prevent duplication and avoid the need for custom recovery procedures in the inevitable event that syncs fail because of network outages, hardware problems or software bugs.
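
To make the idea concrete, here’s a minimal Python sketch of an idempotent load. The names (load_rows, destination) are illustrative stand-ins, not Fivetran internals, but they show why a retried sync leaves the destination in the same state as a single successful run:

```python
# A minimal sketch of idempotent loading into an in-memory "destination"
# keyed by primary key. Illustrative only, not Fivetran internals.

def load_rows(destination: dict, rows: list[dict]) -> None:
    """Upsert each row by primary key: re-running after a failure
    never creates duplicates, because the same key overwrites itself."""
    for row in rows:
        destination[row["id"]] = row  # upsert, not append

batch = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]

dest: dict = {}
load_rows(dest, batch)
load_rows(dest, batch)  # simulate a retry after a failed sync

assert len(dest) == 2  # same state as a single successful run
```

Had load_rows appended instead of upserting, the retry would have doubled every row, which is exactly the duplication problem idempotence prevents.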

If you’re relying on legacy systems or DIY pipelines, this is difficult to achieve, as failures are bound to happen. For large-scale jobs, such as the volumes AI requires, the ability to restart from the last good state isn’t just nice to have; it’s critical. An automated, fully managed service like Fivetran, which offers full idempotence, ensures data reliability regardless of the inevitability of failures. 
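
Here’s a hedged sketch of what “restarting from the last good state” can look like. The checkpoint file and the fetch_page and apply_page callables are hypothetical stand-ins, not Fivetran’s implementation, but the pattern of committing a cursor only after each page lands is the general idea:

```python
# Hypothetical checkpointed sync: resume from the last committed cursor.
import json
from pathlib import Path

CHECKPOINT = Path("sync_checkpoint.json")  # illustrative location

def load_cursor() -> int:
    """Return the last committed cursor, or 0 for a fresh sync."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["cursor"]
    return 0

def save_cursor(cursor: int) -> None:
    CHECKPOINT.write_text(json.dumps({"cursor": cursor}))

def sync(fetch_page, apply_page, page_size: int = 1000) -> None:
    """fetch_page(cursor, n) returns the next rows; apply_page upserts them."""
    cursor = load_cursor()        # pick up from the last good state
    while True:
        rows = fetch_page(cursor, page_size)
        if not rows:
            break
        apply_page(rows)          # idempotent, so a replayed page is harmless
        cursor += len(rows)
        save_cursor(cursor)       # commit progress only after the page lands
```

Because the apply step is idempotent, a crash between apply_page and save_cursor merely replays one page on restart rather than corrupting the destination.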

“I think really good data plumbing should be like the plumbing in your house. You don’t really need to know it’s there; you just know the water will come out of the pipe and be clean.” 
Lindsey Allen, General Manager, Azure Databricks, Applied AI and Data Partnerships at Microsoft

By prioritizing reliability, scalability and idempotence in your data stack, you can build a solid foundation for AI-driven success, ensuring the integrity and availability of your data for transformative business outcomes.

2. Accessible, flexible, ready-for-anything storage

To achieve the desired outputs in AI, regardless of the application, a continuous stream of fresh and easily accessible data from a variety of sources is essential. As AI models continue to advance, catering to multi-modal outputs ranging from natural language to images, drawings and more, the flexibility and robustness of your architecture become paramount. 

In the words of Lindsey Allen, “The modern data architecture will combine the best of a data lake, data warehouse and ODS (operational data store) — you need the speed of a data lake and the robustness of a data warehouse in one.” 

This level of open flexibility is enabled by a data lakehouse solution like Azure Databricks. By adopting a lakehouse approach, you can eliminate data silos and centralize all types of data, including raw and unstructured data, in a single cloud-based location. This empowers your AI workflow by fostering a unified experience, enabling data scientists, data engineers and data analysts to collaborate responsibly on well-governed datasets. 
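
As a concrete illustration, here’s a minimal PySpark sketch of landing raw JSON events in a Delta Lake table. The storage path and table name are placeholders, and this assumes an Azure Databricks environment where Delta is available as the table format:

```python
# Minimal sketch: centralize raw data in a lakehouse table.
# The ADLS path and table name below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-demo").getOrCreate()

# Read raw, semi-structured events straight from data lake storage.
raw = spark.read.json("abfss://raw@<storage-account>.dfs.core.windows.net/events/")

# Delta adds warehouse-style ACID guarantees on top of lake storage,
# so the same table can serve analysts, engineers and ML training jobs.
raw.write.format("delta").mode("append").saveAsTable("bronze.events")
```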

With the cloud’s on-demand scalability combined with the flexibility and robustness of a data lakehouse, you can effortlessly maximize the value of your entire data landscape while ensuring that your AI models are trained using fresh and relevant datasets. 

Where does AI go from here? 

As AI continues its rapid evolution, one cannot help but wonder about its role in the future. George Fraser’s candid answer, “No one knows. It’s a tidal wave, and we’re all just seeing what gets washed away,” captures the uncertainty surrounding AI’s trajectory. 

However, one thing is certain: organizations that have embraced modernized data stacks to effectively leverage this innovative technology are poised to reap the greatest benefits. After all, the effectiveness of AI hinges on the quality of the data used to train it — and that quality is heavily reliant on how data is moved, stored and managed. 

At Fivetran and Microsoft, we’re seeing joint customers leverage the power of AI across various use cases and sectors, ranging from predictive planning to improved analytics. While what lies ahead remains unknown, we find solace in the words of Bob Muglia: “it's a new world.” 

The possibilities are vast, and the potential for transformative outcomes is immense. 
