Modernizing legacy data architectures for the AI era with CarGurus
Modern data architectures rely on purpose-built analytics data platforms, stakeholder buy-in, and flexible, automated data integration.
More about the episode
As enterprises evolve toward AI-powered, insight-driven operations, legacy data systems are becoming a growing liability. Outdated OLTP systems weren’t built for today’s scale or complexity — and they’re holding organizations back from reaching their full potential.
In this episode, Parag Shah — data leader and transformation expert with experience at Rocket Software and CarGurus — shares how forward-thinking enterprises are modernizing their data infrastructure with unified analytics platforms, best-in-class vendors like Fivetran and Snowflake, and extensible tools like the Fivetran Connector SDK.
Key Takeaways
- Why OLTP systems no longer cut it: Learn how legacy, transaction-first infrastructure limits visibility and delays decision-making — especially for AI and real-time use cases.
- The case for unified analytics platforms: See why moving to modern, columnar data warehouses is about more than performance — it’s about unifying structured and unstructured data at scale.
- How to secure stakeholder buy-in: Hear how Shah frames modernization as a business enabler, not just a technical upgrade — and why alignment is key to long-term success.
- Smart vendor selection strategies: Discover why best-of-breed tools outperform jack-of-all-trades platforms, and how to choose technology that aligns with your architecture goals.
- Extend your data ecosystem with Fivetran’s Connector SDK: See how custom-built connectors enable full visibility into niche or proprietary tools — all while preserving Fivetran’s automation and support model.
Watch the full episode to learn how to accelerate your modernization journey.
Transcript
Kelly Kohlleffel (00:00)
Hi folks, welcome into the Fivetran Data Podcast. I'm Kelly Kohlleffel, your host. On this show, we bring you insights from top experts across the data community covering AI, machine learning, enterprise data, analytics, and a lot more. Today, we are thrilled to have Parag Shah join us. He is a seasoned data leader who focuses on modern data, analytics, and AI solutions.
He currently serves as a VP of Data at CarGurus, where he's leading his teams to deliver data that drives business decisions and accelerates insights. Prior to that, he also held roles at Staples and Rocket Software, among others. Parag, welcome into the show. It's great to see you again.
Parag Shah (00:40)
Kelly, it's great to be back. Thanks for having me.
Kelly Kohlleffel (00:42)
Absolutely. You and I got to catch up a little bit. I guess it was maybe November of last year. And at that point, it sounded like there were a lot of interesting things going on. We will definitely get to those. But first, you've been doing a lot in a new role, new organization. Maybe give us a quick overview of your background and current role at CarGurus.
Parag Shah (01:06)
Yeah, sure. So my background is pretty simple. I've been in the data space my entire career. I started as a computer operator, working in a computer room on HP 3000 mainframes and printing out green bar reports. Then I moved my way into being a developer; what we would now consider a data engineer was back then just called a software developer. I was working in analytics tools, this tool called My Eureka, which nobody's heard of, and eventually we got over to Crystal Reports. These are all old names. We were running on your standard old-school OLTP databases. I was in that stack and in that space for quite a few years as an engineer, and I spent time as a data architect. Every step of the way I ended up falling into leadership roles, and I decided that I would lean into that and start to focus on leadership, data strategy, and being a thought leader in this space. That's how I ended up where I am today as VP of Data at CarGurus, where we have a cutting-edge modern data stack and data is the lifeblood of everything we do. And honestly, it's the lifeblood of almost every business out there if you really think about it.
Kelly Kohlleffel (02:25)
Oh, it absolutely is. And we could probably burn up a whole show talking about HP 3000s and HP-UX. You even mentioned one I've never heard of, which was, I think, My Eureka. I've definitely heard of Crystal. But we won't do that; we'll stick to modern data talk. You and I will have to swap some stories at some point on some of the old tech we've worked with. Let me ask you this, Parag.
A lot of different organizations, mature companies. How do you handle this? We all have these legacy systems that have proliferated, and they just stay around. They just work. How do you handle that when we're talking data?
Parag Shah (03:07)
Well, it's funny you bring that up, because Rocket Software, where I was before this, legitimately makes software for legacy mainframes. A lot of what you look at when you have these legacy systems is getting data out of them. But when I think about the data stack itself, you think of databases that were written for transaction processing, not for analytics. So for the longest time, what we've seen are companies trying to fit a square peg into a round hole: in the past, we were trying to make these analytics use cases available via OLTP databases. As that has started to evolve, you start to realize that there is this concept of unstructured data and semi-structured data, and there's value in all of this data.
In the past, in some of these old legacy systems in the data stack, you really couldn't do much with that information, that data. It becomes irrelevant to you because it's not structured. And the way I see this has evolved and how I've seen this modernized and how I've modernized is by moving from some of these OLTP structures to a columnar database, like a Snowflake, and moving to a system that is designed for our analytics workflows.
The other thing that I've noticed is, over time, there's been a change in patterns and how data gets into those systems. In the past, even as recently as 10 years ago, ETL was a very common pattern and ELT was much less common. And the reason I say that is because 10, 15 years ago, compute was very tough to get. It was not as scalable as it is today.
Everybody wasn't in the cloud. We weren't using the AWSes, Azures, and GCPs of the world. Now we have compute available to us, so we were able to move away from that old pattern, which has now become an anti-pattern. Now the pattern is to use ELT: bring your data in, then transform it once you've brought it in. We have the compute to do that, and it's cheap enough that we can actually do that.
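To make the ELT pattern Parag describes concrete, here is a minimal sketch in Python against Snowflake: land the raw JSON untouched in a VARIANT column, then do the transform afterward using the warehouse's own elastic compute. The connection parameters, file path, and table names are illustrative, not from the episode.

```python
# Minimal ELT sketch: extract and load raw data first, transform afterward
# inside the warehouse. All names and credentials below are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder credentials
    user="loader",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# Load: land the source JSON as-is into a single VARIANT column, no upfront modeling.
cur.execute("CREATE TABLE IF NOT EXISTS RAW_ORDERS (v VARIANT)")
cur.execute("PUT file:///data/orders.json @%RAW_ORDERS")
cur.execute("COPY INTO RAW_ORDERS FROM @%RAW_ORDERS FILE_FORMAT = (TYPE = JSON)")

# Transform: done after the load, using the warehouse's elastic compute.
cur.execute("""
    CREATE OR REPLACE TABLE ANALYTICS.CURATED.ORDERS AS
    SELECT
        v:order_id::NUMBER          AS order_id,
        v:customer::STRING          AS customer,
        v:amount::NUMBER(10, 2)     AS amount,
        v:placed_at::TIMESTAMP_NTZ  AS placed_at
    FROM RAW_ORDERS
""")
conn.close()
```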
Kelly Kohlleffel (05:25)
Elastic, scalable compute changed everything. You also talked about multiple data types. I'm going to go back, say, 15, 20 years: we were dealing with unstructured data at that point. The problem was, and you talked about this, it sat on its own. It was not combined with your structured data, and your semi-structured kind of sat alone. What you're saying is that today, with where platforms like Snowflake are going, I can take my structured, semi-structured, and unstructured data into one place to get value in an analytics or data product workflow.
Parag Shah (06:05)
Absolutely. I mean, the value is there and we have it now, and it all sits in the same place. You don't have to move data between X system and Y system to get the insights that you're looking for. Before, we had our OLTP databases, and then you'd have a whole team of big data engineers working with Hadoop and MapReduce, producing information that way. Now it's all being done in the same place, and it's incredible to see how quickly this space has evolved.
Like you said, with the elasticity of compute, now it's just, we're moving at a breakneck pace.
Kelly Kohlleffel (06:42)
Yeah, it is, and I don't see it slowing down anytime soon. I think it's just going to continue to accelerate with all the new workloads that are available. So when you look at modernization across the board, and you've done this multiple times, what all goes into a modernization effort for you? Feel free to talk about tech, people, and process. And from a process and organization standpoint, Parag, maybe also talk about things like stakeholder buy-in. What do the implementation process and roadmap look like for you in a modernization effort?
Parag Shah (07:16)
Well, when I think about a modernization effort, the first thing I think of is getting buy-in up front, right? You have to explain the value of why you're trying to modernize your data stack. You're trying to explain the value of the insights that your stakeholders can gain that they maybe don't have visibility into today. So the first part is getting that buy-in. It's coming up with a comprehensive strategy, letting them know, from a SWAG perspective, how long you think something's going to take and why you're choosing the technology that you're choosing.
I've brought in Snowflake to multiple companies. I've now brought in Fivetran to multiple companies. The software I choose is software that is A) easy to use and B) focused on a single space, because the way I look at this is I want a vendor that's innovating in one space. I don't want a vendor that is trying to innovate across multiple different channels.
Personally, when I'm choosing software, I like to go with the ones that specialize in innovating in that single space.
Kelly Kohlleffel (08:21)
I love that. And it goes to what I wanted to ask you about. You talked about answering the question of how long it's going to take as part of stakeholder buy-in. What's the C-suite's tolerance for how long it's going to take? Where do they stand today? Because my general sense is the window is shrinking in terms of how much time people are willing to give you to deliver something.
Parag Shah (08:50)
Yeah, you have to move fast. If you're not moving fast, then you're in last place, right? Especially in a modernization journey, but you want to make sure you're making the right bets. It's not okay to make a negligent bet. It's okay to make a good bet, right? So we want to make sure that if we have all our cards on the table, we're playing the hand the best way we possibly can, and we're not taking undue risk.
So if you do that and you identify that to them, that makes life a lot easier for them in terms of understanding what their risk tolerance is. And in terms of getting that buy-in and moving quickly, there are some sacrifices you have to make. In some cases, if you have to do a full modernization, you have to go through the full journey, and it can take time. Those are just some of the things you have to make clear up front. And there are common roadblocks that you run into when you try to move from a legacy system to a modern system.
There's a migration timeline. You have to make a decision: do we lift and shift? Do we go greenfield? Do we go with something that's a hybrid of both? And please don't ever go with a hybrid of both; it's the worst of both worlds. That's really what you have to do. You essentially need to go in and sell your strategy. You have to become a salesperson as a data leader, and how you get their buy-in is by selling that strategy.
Kelly Kohlleffel (10:14)
Hybrid can be tempting, though. I'd love to hear why it's the worst of both worlds. Is it just, you know, a foot in both worlds, and you should just make a decision and go with it? What's your experience with trying to go hybrid, and why are you recommending we totally avoid it?
Parag Shah (10:31)
So to me, when I've seen this in the past, if you go with that hybrid approach, it takes you longer, because you're doing more work to get some sort of new thing going while you have an old thing migrating in parallel. And now you're trying to figure out which one's the source of truth, where you have duplicate data processing, where it's costing you more money. There's a lot of repetitive, redundant work when you choose to go greenfield plus lift and shift, and to me the juice just isn't worth the squeeze.
Kelly Kohlleffel (11:06)
I was wondering too, one of the things that I've seen about why not to go hybrid: I think we make the assumption that a lot of what we do today is still relevant. Whereas if you really dig into it, some of the data workflows we've had over the last 10 or 15 years, nobody even uses downstream. But "no, we've got to keep this in place, you never know." So that could be another reason.
Parag Shah (11:34)
Yeah. No, absolutely. I mean, you're going to have these data retention policies that you either have in place or should have in place, where old data is being archived in some way. So you have that, but in some cases you also need to save data for at least seven years, right? In those cases, it might make sense to just pull your history over so you have it for audit purposes, but build with something completely new from scratch for what's going forward.
Kelly Kohlleffel (12:02)
Yeah. Well, when you and I saw each other back in November, we talked a little bit about this new thing at the time that Fivetran had called the Connector SDK. I would love to dig into that with you. My experience so far has been fantastic; of course, I'm internal to Fivetran. But if I'm a data engineer, I can build a custom data connector that integrates with the Fivetran platform overall.
I don't have to do, I'll call it, two-thirds of the automation process. It expands my 700-plus prebuilt connector catalog to any other source. There's a lot of power in that, if I can get the same reliability, the same schema evolution support, CDC, all those things. I'd love to hear your experience. What have you seen? Where have you used the Fivetran Connector SDK? What do you see going forward?
Parag Shah (12:59)
Yeah. You know, I think I first requested the Connector SDK four years ago, when I was at Rocket. I was working with my data team, and we were thinking through, hey, some of these connectors... At the time, you didn't have as many connectors in Fivetran. As you've evolved, created more connectors, and gone through acquisitions and things like that, you've added so many new connectors over the years.
But that takes time. And I'll be honest with you, there was always a workaround: we could always use AWS Lambda, and we would create these Lambda functions that would be scheduled and run on Fivetran. But now we have the ability, like you said, to create a connector, and it falls under our support agreement. It's all managed; I don't have to deal with spinning up these Lambda services and things like that.
And it's a huge unlock for companies that have non-standard data sources. But that being said, even for standard data sources, I'll give you an example from my team at CarGurus. We have an open source integration that we built to GitHub, and it's actually better than the standard GitHub connector within Fivetran. So we took that logic and built a custom connector with the SDK.
And now we're using Fivetran for our GitHub integration, and we were able to use our custom connector to get some of the fields and some of the functionality we were looking for that wasn't there. The other thing this SDK drives is community, right? If you think about building an open source type community around some of these custom connectors, where people can start to leverage what other people have done, you can have sharing across the board. You see this pretty much everywhere. Look at Stack Overflow: you're essentially just sharing code, right? That's what this is.
If you look at the different visualization options that are out there, you have the option to pull from a library of visualizations. So one thing is it starts to democratize that data connector creation process. You could say, okay, this one might not fit my needs, but this out-of-the-box one is great, so let's go with that one. But if that one doesn't do everything we want, and we think there's one out there that somebody else has created, well, let's go grab that one and build our own custom connector from it. And then it really unlocks connectors that don't exist. There are some fringe systems out there, and I mean, what does the software landscape look like today? There are thousands upon thousands of software tools out there. Nobody's ever gonna build a connector to all of them.
Kelly Kohlleffel (15:48)
Okay, that was incredible. I think we could go another two hours on this. Let me set the stage for the part of the audience that may not be familiar with this, and I'd like you to confirm the way I think about the Fivetran Connector SDK. In general, Parag, I think about a data pipeline in thirds. The first third of the process is interacting with the source: I need to figure out how I'm going to interact with existing tables and schemas and APIs, all those things that enable me to talk to that source.
Secondly, if I'm building a totally custom data pipeline, I'm probably going to need to do some processing on that data: not ETL or ELT processing, but data type mapping.
Maybe some minor enrichment so that I can get it prepped for the last one-third, which is interacting properly with a destination like Snowflake. And Snowflake as a destination is different from Databricks, which is different from BigQuery, et cetera. So what's really cool, and what I heard you describe with the Fivetran Connector SDK: you build that first one-third, the piece that's really custom. Once I hit the deploy button, the other two-thirds is all Fivetran goodness from there. It's all the automation, reliability, and security that you expect.
Parag Shah (17:16)
Yeah, 100%. I mean, what we're doing is we're creating that first piece. When you look at the modern data lake or lakehouse architectures you see out there, the start of it all is ingesting your raw data, right? So with this idea of the far left side of your pipeline, you can connect to anything you want to and then take advantage of everything else Fivetran brings to the table downstream. It's invaluable.
And I think the SaaS tools out there that are really, really flexible are the ones that create these types of SDKs and allow you to integrate with more sources, including your fringe sources, because I don't care what company you work at, every single one of them has at least one fringe source.
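For readers who haven't used it, here is a minimal sketch of what that "first third" looks like with the Connector SDK, based on its documented update/schema/checkpoint pattern. The REST endpoint, field names, and configuration key below are hypothetical stand-ins for one of those fringe sources; everything downstream of the yielded operations is handled by Fivetran.

```python
# A sketch of the "first third": a Connector SDK source for a hypothetical
# REST API. The endpoint, fields, and configuration key are illustrative.
import requests
from fivetran_connector_sdk import Connector
from fivetran_connector_sdk import Operations as op


def schema(configuration: dict):
    # Declare tables and primary keys; Fivetran infers column types.
    return [{"table": "tickets", "primary_key": ["id"]}]


def update(configuration: dict, state: dict):
    # Incremental sync: resume from the last checkpointed cursor.
    cursor = state.get("since", "1970-01-01T00:00:00Z")
    resp = requests.get(
        "https://api.example.com/tickets",  # hypothetical fringe source
        params={"updated_since": cursor},
        headers={"Authorization": f"Bearer {configuration['api_key']}"},
        timeout=30,
    )
    resp.raise_for_status()
    for row in resp.json()["tickets"]:
        yield op.upsert(table="tickets", data=row)
        cursor = max(cursor, row["updated_at"])  # ISO timestamps sort lexically
    # Checkpoint so the next scheduled run picks up where this one left off.
    yield op.checkpoint({"since": cursor})


connector = Connector(update=update, schema=schema)

if __name__ == "__main__":
    connector.debug()  # local test run before deploying to Fivetran
```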
Kelly Kohlleffel (18:02)
I would take it multiple steps beyond that. I think for large organizations, you're probably talking 100, 200 of what you'd call fringe sources. These are sources that maybe only I use, or, to your point about GitHub, maybe I want to use in a particular way that the existing connector in the Fivetran connector library doesn't allow for or provide for, and maybe it's not on the roadmap. Both of these, fringe sources and interacting differently with an existing source, are really cool use cases for how you're getting value from the Connector SDK.
Parag Shah (18:40)
Yeah, it's a phenomenal tool to add to your toolkit. If you're in the data space, working with Fivetran, and have access to this Connector SDK, I would encourage you to play with it, use it, and build your own connectors. It's a phenomenal tool.
Kelly Kohlleffel (18:57)
The other piece I want to ask you about: you talked about this community that the Connector SDK kind of drives. I've got this 700-plus Fivetran first-party connector catalog, if you will. But then imagine a community catalog, one that, if you want to, no requirement, you could submit your code, your custom connector, to, and that catalog builds out.
I would say to you, if everybody building custom connectors did that, it would absolutely dwarf the existing Fivetran 700-connector catalog. I think you're going to have thousands upon thousands of custom connectors out there.
Parag Shah (19:40)
Easy, easy. And I mean, that's it, right? You're essentially talking about creating this community and this library where people can go in and share. It's like you're adding an open source type feature to your offering, where we can share some of this code that we've written. Some people think about it one way, other people think about it another way, and you might see two different custom connectors for the same source that were submitted, with two people looking at it differently. One might work for you and one might not. So that's where this community aspect really comes into play.
Kelly Kohlleffel (20:14)
I talked to a publishing company at a recent conference, and they use a specific ERP. It's a smaller ERP. Fivetran actually has a connector for it, but because they're a publishing company, they had built, I'll call it extensions or skins specifically for publishing on top of this ERP. Well, guess what? The regular Fivetran catalog connector didn't accommodate these extensions from an industry-specific standpoint.
So there are so many of these opportunities and possibilities that lead, I think, to what we're all ultimately trying to do, and you talked about it earlier: deliver data as quickly as possible that's high quality, trusted, and usable by my downstream data products. And this is another great way to do it.
Parag Shah (21:04)
Yeah, and the other thing we talk about a lot is democratizing data, right? If you type that term into your Google search bar, just "democratizing data" or "data democratization," and Google it, you're gonna get billions of results. What everybody talks about is putting data in the hands of the people who can get the most value out of it, as fast as you can. And having the ability to do that, to acquire new data sources into your lake and lakehouse architecture, that is, again, invaluable: you can move quickly and support it with relatively lean teams.
Kelly Kohlleffel (21:38)
Well, I think I heard you say, and I agree with this: no infrastructure. I don't need to spin up Lambda, nothing like that. No infrastructure is pretty powerful. Get me out of the infrastructure game; I don't have to deal with it.
Parag Shah (21:54)
Yeah, 100%. Why should I be building infrastructure, doing upgrades, making sure that I have people that are online to be able to monitor all of these things and have all of that working when I can get that from a SaaS provider that is gonna host it for me, that's gonna provide me great SLAs, gonna provide me 24x7 coverage. My operating costs go way down.
Kelly Kohlleffel (22:21)
Absolutely. I think you talked about this a little bit, but any advice to a data engineer or a developer who's looking to build their first custom connector? I don't know how you and your team did this. I know there's a lot of documentation out there with examples, but what do you recommend? What's the best path to go?
Parag Shah (22:40)
Just get in there and do it. You know what I mean? Look at the documentation and start working on it. If you're a data engineer working in the open source space, you already know how to do this stuff, right? You've already done this when you built your open source pipelines; whatever tool you're using for orchestration, you have the capabilities to do this. Get out there and work with it. Use the documentation. Heck, the way we're going with AI coding assistants, you can feed the documentation to one of them and ask it to help you write the custom connector.
The possibilities are endless today. So just don't stop, get started quickly.
Kelly Kohlleffel (23:15)
Yeah, one of our product managers, Alison Klein, wrote a blog post about that exact thing you said. She outlined how to use Claude Code to build a custom connector very, very quickly using the Connector SDK. Also, I don't know if you mentioned this, but it's Python-based. I had somebody ask me, "Hey, do you have Java? Do you have something else?" But no, this is Python-based, and I think there are no problems there. Python is probably the most popular data engineering language going today.
Parag Shah (23:49)
Absolutely. If you're writing data ingestion pipelines today, there's probably a 90% chance you're using Python.
Kelly Kohlleffel (23:56)
I agree. I agree. All right, any other thoughts on that before we move on? I think you can tell I'm excited about this: the possibilities for you, for the community, for everybody out there to do some really cool stuff and extend to thousands of potential options.
Parag Shah (24:17)
That's it: the potential that exists for creating a library of connectors that we can all use. Look at something like Hugging Face, right, where you have all of these shared open source machine learning models. We could do something very similar, where we have shared open source connectors that you can very easily import and start using in your Fivetran instance.
Kelly Kohlleffel (24:43)
And it could even be just a pattern or a template. You talked about your GitHub example. I gave you that example of an ERP that had a specific industry spin to it. So you may not have that ERP, or it may not be GitHub, but with the template or the pattern used for it, you may be able to take that code base and apply it to your particular source.
Parag Shah (25:04)
I mean, if it gets you 60% of the way there, that's 60% closer than you were when you started.
Kelly Kohlleffel (25:09)
Exactly. And probably a good friend, Claude or GPT-4 or whoever you like to use, can get you most of the rest of the way there if you want.
Parag Shah (25:17)
Kelly, I don't know if you've had a chance to play with the likes of Cursor and Windsurf and GPT and Claude Code, these things that are out there. Some of the things you can do with them, it's unbelievable.
Kelly Kohlleffel (25:28)
You're trying to take me out of it. I feel like I'm a pretty good prompt engineer, and you're trying to even take that away from me now, you know?
Parag Shah (25:35)
Well, now prompt engineers can write code. It might not be good code, though; that's the thing you've gotta be careful with. And that's one thing I would say: even if you're writing a custom connector with the SDK and you're using some sort of coding assistant, you still need to review the code. Just don't expect it to work out of the box.
Kelly Kohlleffel (25:54)
100%. Absolutely. But again, to your point about getting me 60, 70, 80% of the way there: if you're a data engineer who understands Python, I think you're going to take that last step very, very quickly. Yeah.
Parag Shah (26:10)
I mean, if you get me 80% of the way there, right? And now I can create five pipelines in the time that it took me to create one in the past. What does that mean for speed to market? You're flying, your teams are so much more efficient.
Kelly Kohlleffel (26:23)
Yeah. And your data team, the executives that are sponsoring the data team, and anybody downstream that's using these data products immediately becomes that much more relevant.
Great discussion. I would like to spend maybe the last few minutes we have, Parag, on this: you keep an eye on everything going on. Are there any emerging trends that you see shaping this data world that we're in? There have been so many changes happening just in the last six months.
Constant change, literally almost every day. Anything that stands out to you? And how should we as data leaders prepare for the onslaught of this continuing to happen? What do you do? How do you do it?
Parag Shah (27:03)
Yeah, I mean, I'm not talking about the last six months; I'm talking about the last six minutes. Keeping up is really, really hard. First things first, I would say keep up with the newsletters that are out there. Personally, I use the TLDR newsletters for data and for AI; there are two separate ones. It's how I start my day every single day, and I read through them and try to keep track of what's going on and where things are changing. And as much as I hate to say this, because everybody is talking about AI, that's the direction we're going. That's something your data leaders need to think about, because when you have this influx of AI tools, or AI features that you want to build into your product, the source of all of that is good underlying data. Right?
So think about that: think about data quality, think about data governance, and think about how you're going to make good data available through some of these tools and features. Even if you look at something like Snowflake and their Cortex feature, you wanna have hyper-curated data that that Cortex feature can use. You wanna have clean, consistent data; otherwise, you're not gonna get the amount of value that you should be getting out of something like that. That's really where I see us going. And I also see the future changing rapidly.
This idea of agentic artificial intelligence is going to start playing, it already is playing, a huge role in software companies across the board. You're seeing small companies reach massive amounts of ARR with small teams using some of these coding platforms. So I think staying on top of the shifting language models, staying on top of hardware and how things are changing with compute, GPU as opposed to just CPU in the past, we're looking at all these different things now that we really didn't think about as much before. The one thing I will say is there's a lot of hype around these large language models, your ChatGPTs and your Claudes and your Geminis and things like that, but where I really see this going is the opportunity for us to start building specialized small language models that act as experts in a specific space, and having those models talk to other models. That could be the evolution of some sort of microagent architecture.
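Parag's Cortex example is easy to picture in code. Here is a hedged sketch of pointing Snowflake's SNOWFLAKE.CORTEX.COMPLETE function at a curated table from Python. The connection parameters, table, column, and model name are illustrative, available Cortex models vary by account and region, and, to his point, the quality of the output depends entirely on how clean and consistent that curated data is.

```python
# Sketch: calling Snowflake's Cortex COMPLETE function over a curated table.
# Connection parameters, table, column, and model name are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analyst", password="***",  # placeholders
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="CURATED",
)
cur = conn.cursor()
cur.execute("""
    SELECT
        ticket_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'llama3-8b',
            'Summarize this support ticket in one sentence: ' || body
        ) AS summary
    FROM SUPPORT_TICKETS
    LIMIT 10
""")
for ticket_id, summary in cur.fetchall():
    print(ticket_id, summary)
conn.close()
```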
Kelly Kohlleffel (29:35)
I'm right there with you. What you just described, to me, that is the most exciting. Don't make me be an incredible prompt engineer to get good value out of my data and out of AI. Build that for me in the background as a data team, so that I can click one button and solve a business problem in 45 seconds versus hours or days or weeks. Whether it's with agents, whether it's using an LLM in the background with an incredible… it's been built to solve a business problem. Sky's the limit. There is so much opportunity there. We were talking about hundreds and thousands of data sources; I think there are hundreds and thousands of those types of fit-for-purpose AI applications in every organization right now.
Parag Shah (30:29)
Yeah, and the biggest piece of those applications is your underlying data.
Kelly Kohlleffel (30:33)
I agree. I agree. Parag, this has been fantastic as always. Thank you so much. I really appreciate you joining the podcast today.
Parag Shah (30:42)
Thanks again for having me. It's always a great time, Kelly.
Kelly Kohlleffel (30:44)
Absolutely. Thanks so much, Parag. I really appreciate you joining the show. I look forward to keeping up with everything you're doing at CarGurus as well. And a huge thank you to everyone who listened in. We really appreciate each one of you. We'd encourage you to subscribe to the podcast on any of the major platforms, Spotify, Apple, Google. You can also find us on YouTube and please visit us at Fivetran.com/podcast. Also, you can send us any feedback or comments at podcast@fivetran.com. We'd love to hear from you.
See you soon. Take care.