Behind the Deal Season 4 Episode 3

Scaling Enterprise Software with AI: Holden Spaht with Anaplan CEO Charles Gottdiener and Coupa CEO Leagh Turner

In this episode of Thoma Bravo’s Behind the Deal, Managing Partner Holden Spaht sits down with Charles Gottdiener, CEO of Anaplan, and Leagh Turner, CEO of Coupa, to discuss how enterprise software leaders are turning AI innovation into real business impact. Drawing on their experience as both builders and users of AI, they share how AI is driving productivity, reshaping operating models, and expanding operating leverage—while highlighting the importance of governance, disciplined pricing, and domain expertise as AI adoption accelerates.

Disclaimer

This podcast is for informational purposes only and does not constitute an advertisement. Views expressed are those of the individuals and not necessarily the views of Thoma Bravo or its affiliates. Thoma Bravo funds generally hold interests in the companies discussed. This podcast should not be considered an offer to solicit the purchase of any interest in any Thoma Bravo fund.

AIR DATE:

February 5, 2026

LENGTH:

34:48

LEAGH TURNER (00:02):

Delivering higher value for our customers faster. That's like the main thing. So we use that platform in our support organization in order to find root cause answers to issues that are surfaced by our customers. We do it about 53% faster than we used to as a result of leveraging this technology.

ORLANDO BRAVO (00:27):

Last week you heard me and IBM CEO Arvind Krishna as we discussed innovation and leadership in the time of AI. This episode is a conversation between Thoma Bravo Managing Partner Holden Spaht and two top SaaS CEOs from our portfolio, Charlie Gottdiener of Anaplan and Leagh Turner of Coupa. They explore how leaders are navigating organizational change and adopting AI, both as technology providers and as internal operators. You'll hear how Anaplan and Coupa are using AI to boost engineering productivity, improve customer support, accelerate compliance, and rethink core business processes. Anaplan, a leading business planning and decision-making platform, is using AI to aid in the scenario planning and analysis that helps connect people, data, and organizations. Coupa has been a standout in spend management software for more than 20 years and uses AI to maximize efficiency for its operations and its customers. From early real-world ROI to the long-term vision of enterprise operations, this discussion offers a candid look at where we really are in the AI adoption cycle and what leadership in this new era demands.

HOLDEN SPAHT (01:49):

Well, thank you both for being here. We just heard from customers. I think what's interesting is that, I know you're introduced as CEOs of two amazing SaaS companies, both of which I have the privilege of working with -- which you are -- but as I think people remember, you're obviously a provider of AI software to companies, but you're also a customer, right? And in many ways tech and software tend to be early adopters of software and tech, so we glean a lot from you as customers too. I think the way we're going to organize the discussion is to start there, with your experience as customers of AI, and then we'll get into the innovation and the roadmaps and the things you're providing to your customers and the value that they're seeing. Charlie, maybe start with you - talk about how you (Anaplan) use AI internally.

CHARLES GOTTDIENER (02:44):

Yeah, we've been on this internal AI journey for about six months, and we've really leveraged AI in a couple of different ways. First of all, we've issued a tool, Google Agentspace, to everybody in the company so they can start experimenting with AI. It's important for us to have a sanctioned tool that meets our security protocols so people don't go out and use tools that we don't want them to use. So that's one thing. The other thing is we've experimented with tools that we've either jointly developed with vendors or simply adopted. And I'll give you two examples. In go-to-market, in the pre-sales world, we've adopted an AI tool that helps us automate the building of demos, so our pre-sales people don't have to spend all this time building demos. They can be spending a lot more time in front of customers talking about the value proposition.

(03:37):

So that's one area. The second area is in customer success, where we build business cases, and have for quite a while, showing and quantifying the value of our platform. We're using AI to automate some of that, not the math behind it so much, because we do that in our platform, but really building everything around it. And that's also the same thing for customer presentations. In the engineering world, we've been at this for a little longer, probably a little bit more than six months, and there are two areas worth talking about. One is engineering productivity, and the other is our software development lifecycle process efficiency. It's very hard for us at the moment to isolate the impact of AI in engineering, but what I will tell you is that we're going to do 6,000 releases this year in our engineering team.

(04:29):

Last year we did 4,000, so productivity has improved by 50%. Now, I can't attribute all of that to AI, because process change, other tooling, and other training contribute to that, but we've only grown our engineering team by 10%. So AI and the process changes that we're doing are all working together from a productivity perspective. On the software development lifecycle efficiency, this is the thing our head of engineering is actually most excited about. For those familiar with the software development process, it's pretty complex, right? One of the things that happens is you define requirements, and those requirements have to get translated into stories that coders build against. Well, stories can be pretty inaccurate, right? The translation from requirements to stories can have low quality levels, if you will. We are doing thousands of stories every year, so four or five thousand stories at a time.

(05:29):

So it's impossible for a human or a set of scrum masters to quality-inspect every story. It would just be humanly impossible. Enter AI. Now we can look at the quality of the stories, and the stories that are low in quality we can deal with on the front end, because when a poor-quality story goes through the software development process, you have a ton of rework that has to happen, which is why so much code gets refactored. Now with AI, I've got visibility into the quality of each of these stories, so I can eliminate or minimize all this rework and just have a lot more efficiency in the software development process, which is really our manufacturing process for building software. So those are some internal examples.

LEAGH TURNER (06:16):

Why don't I grab that, Holden, and just give you the same lens. I'm going to do it the same way Charlie did: things that make our company better, and then, inside the company, things that make our engineering team better. We did exactly the same thing. We chose a sanctioned tool for internal use across the company -- we chose the Google AI platform -- and we use it in a variety of different ways. Let me give you a couple examples. We're largely focused on delivering higher value for our customers faster. That's the main thing. So we use that platform in our support organization to find root-cause answers to issues that are surfaced by our customers. We do it about 53% faster than we used to as a result of leveraging this technology. So that's a really good thing. It satisfies our customers faster. The second thing that we do is we use a technology called Pitch Monster. For anybody that knows it, it's a tool that allows you to certify your skills.

(07:14):

Rather than doing certification of people who are customer-facing one-to-one, one person to another, you do it in front of your Zoom screen, effectively, and it reads your body language and your voice intonation. It makes suggestions for you. It customizes your training in real time. And what we found is that 53% more of our people have certified their own skills than before we released the tool. So that's good. It means that the people we move out into the field to interface with our customers are better trained and better able to articulate what it is that we do. And so as a result, we can deliver more value to our customers faster. In terms of our own internal use for engineering, we leverage technologies like GitHub and Confluence and Copilot and Claude, and we do that to try to do two things: really increase the productivity of our developers and decrease the number of defects, as Charlie said.

(08:13):

What we've seen is that we can release code about 20% faster. That's good. And we can reduce the level of defects by about 30%. That's also really good. It means that the code we release to our customers is of higher quality, and we can do it at a faster rate, so we can release code faster than we used to. I'm going to give you one specific example and then I'll stop. In the land of enterprise software, you need to be compliant. It's critically important. It's like the right that you have to do business with your customers. Compliance these days is a really tricky thing. It changes all the time, and there is a new compliance standard in the EU called DORA. We are working to comply with that standard. Let me explain what's required in order to be able to do that. In application software, we have a variety of different layers that need to be deconstructed and fields that need to be added in order to be compliant.

(09:13):

Those layers are a database layer, a workflow layer, an API layer, and a UI layer. Every single one of those layers has thousands of fields, sometimes hundreds of thousands of fields, and every one of those fields needs to be changed by 60 to 100 parameters in order to comply with this DORA standard. That would typically take our team about eight to 12 months. And our customers, our European customers, are saying, are you DORA compliant? We want you to be DORA compliant. Now, the compliance standard was released not very long ago, and they want us to move really quickly. The good news is that, as a result of leveraging AI, we will be DORA compliant in about three months, so a quarter of the time that it used to take us, and instead of using a team of eight developers full-time, we can now do it with four developers full-time in three months. So that's a real benefit to the way we deliver software to our customers and stay compliant, which is the single most important thing we can do.

HOLDEN SPAHT (10:18):

Amazing. Those are great examples. By the way, I'm realizing now that with my agenda, I'm probably not going to be able to ask both of you every question I had to get through this, so I may have to alternate. But a quick follow-up, and Leagh, obviously add on if you have something: with these productivity improvements, how does that change the way you think about future hiring or the long-term margin potential for your company?

CHARLES GOTTDIENER (10:45):

You're always looking for ways to shift resources to the highest priorities in the company, and I don't think AI is any different. We're going to be shifting a lot more resource to AI over time because it's going to accelerate our business, but we're going to keep adding headcount, probably not at the same rate, and so we may have some margin opportunity or reinvestment in growth. That's how I would think about it. So if you looked at our headcount growth curve, it will grow, but at a slower rate over time, and we'll be investing more in tools and really directing more toward what AI can do to help us grow.

HOLDEN SPAHT (11:25):

Yeah, okay. I'll make sure Tara wrote that down for our budget discussion. So Leagh, maybe one thing that we haven't talked about as much, but that I've heard from several CEOs, is that everybody internally loves to talk about embedding AI into the products for customers, but the change management on the user side is not as fun to talk about: efficiency and governance and frameworks and how you make sure your data doesn't end up in the wrong place. So maybe just talk about how you handle that. Does it make you nervous? How are you dealing with it? Do you have somebody who manages that? How do you think about the change management side?

LEAGH TURNER (12:06):

I'm going to talk about it internally and externally, and I'll do it very briefly, because I imagine Charlie will want to get in when we talk about leveraging technology for the sake of our customers. We trust really big brand-name companies. We partner with Google and with Anthropic and with Microsoft, and we do it because they are really well funded and they're on the front edge of trust and governance, which is new and forming and changing, and which is the single most important thing we can do while we maintain -- and I think I failed to address the amount of data that we manage -- about $8 trillion of transaction data that we are responsible for. So it's really important to us that we never let our customers down. When we release code to customers, and we leverage partnerships to be able to do it, we do it with the world's biggest brands.

(13:02):

Internally, let me talk a little bit about how we manage change. We do it in a variety of different ways. I'll say the first thing we do is we have a relentless commitment at Coupa, and I imagine Charlie does as well, to evolution. We're going to be on the leading edge. We don't have the right to go out and talk to our customers about change unless we're changing ourselves. So that's really important. And we manage AI deployment and governance top down and bottom up. Top down, we've developed an AI COE and we deploy funding to that COE. The COE determines what gets funded based on the highest ROI. We're really rigorous about that. We have AI champions who live in our business. These are the people, and Rob and I actually were talking about this, who want to live out on the front edge, who are on the forefront of this technology, who drive change, who are inspiring, who ask people to reinvent what they do rather than just evolve what they do.

(14:01):

And those people live in our business, and we manage AI with a central governance policy so that everything flows up. We use the same governance standards that we would for anything in our business to measure whether or not we're doing things right or wrong. In terms of bottom up, I think this is important: we ask every single function in the business to build their own AI strategy. The reason we do that is because we believe the best skills live in the practitioners in the function, and because we want to make sure people feel accountable for driving change. So that's the way we do it. Budgets, as I said, get released into this AI COE.

HOLDEN SPAHT (14:42):

Then do you have an AI budget?

LEAGH TURNER (14:44):

We do. We do. And we also ask our functional leaders, as you well know, to get better and faster every single year. And we contain their budget so that they must do that, and we ask them to really justify every change that they make so that they have the opportunity to think about what technology could be leveraged in order to get better rather than just using the same methodology that they might have in the past. The simple reflex action is to add headcount. They don't necessarily need to do that these days. So that's what we do.

CHARLES GOTTDIENER (15:16):

Okay.

LEAGH TURNER (15:17):

Charlie, did you want to add anything?

CHARLES GOTTDIENER (15:19):

I think you covered that quite well. We have a very similar approach, top-down and bottoms-up. I would say the bottoms-up is more about getting the early adopters to engage by giving them the tooling. The top-down is a little bit old school, I would say, where I pushed my leaders pretty hard and took money away from them and said, go solve that problem with AI to get them to really lead from the front. And I constrained things rather than giving them budget for headcount.

HOLDEN SPAHT (15:48):

Okay, let's move to innovation, which is generally more fun for most people to talk about. Charlie, why don't we start with you? I feel like we're just now in this phase where we've done a lot of, I'd say, incremental innovation across our SaaS portfolio, but now there's really meaningful innovation and ROI for customers. So do you want to talk about a couple of your big initiatives?

CHARLES GOTTDIENER (16:14):

Yeah, we think about AI as an evolution. We've had forecasting and optimization products based on AI and ML for quite some time now because, as you might imagine, the business we're in is scenario planning and analysis and forecasting: time-series forecasting, optimization equations, and doing that in real time is important to our customers. So that's embedded in our platform, and our customers pay us for that. Sven mentioned Syrup, which is another version of that. It's time-series forecasting for retailers leveraging POS data, so it's more specific to an industry. So that's an area that will continue to grow. We've got over $15 million of ARR in those product sets today, and we're making them better over time. That's not the stuff everybody talks about though, right? Because everybody's talking about Generative AI. And where we are on that is we've got quite a robust roadmap building out agents. So we've got what we call role-based agents; think about them as analysts in each of our domains.

(17:22):

So we have a finance analyst that works with our applications to answer questions. If you wanted to ask the finance analyst a question, say, what was the highest-grossing product last quarter in this division, it will give you an answer and then it will suggest the next best action: do you want to know what drove that result, or do you want to send a workflow to the person in charge of that business? So it suggests answers and questions like that. Those role-based agents will evolve over time and become more and more powerful, and we have a role-based agent in each of our domains. So those are in the market. The other thing, which we're in beta with right now and which is probably one of the most exciting things we're doing, is a product called CoModeler. As I said earlier, building the digital twin of your business process on Anaplan means building something called a model.

(18:26):

And CoModeler builds models with AI. What would take model builders, think about them as our equivalent of coders, weeks or months to build, CoModeler can do in minutes or hours. It's a breakthrough piece of technology. It will make our ecosystem, and there are thousands of Anaplan model builders out there, much more efficient and effective and productive, and we will charge for that, obviously, because it's high value added. So those are the here and now. What's coming is autonomous agents around things like anomaly detection, and then Agent Studio, which is our ability to build custom agents on our platform, giving our customers that capability.

HOLDEN SPAHT (19:14):

And you're developing that in conjunction with some of your largest customers, including JP Morgan, who we heard from earlier. Do you know how you're going to charge for those, or about customers' willingness to pay? Have they gotten comfortable that that's coming and that they're going to start paying?

CHARLES GOTTDIENER (19:33):

That's part of the beta, but we've actually sold some ahead of the official launch of the product. If you think about CoModeler, just as an example, it will be consumption based. So think about tokens, which I think people are getting familiar with. We will have modeling actions that are an aggregation of a set of tokens, and that's how we're going to charge for it. But the value proposition for our customers is that we are effectively giving you four times the productivity of a model builder. So four for the price of one, if you want to use a used-car-salesman approach to this. But that's kind of the simple value proposition. And for the role-based agents, the analyst agents, it will be very similar in terms of logic and pricing.

LEAGH TURNER (20:21):

First of all, I'm going to say, Charlie, we're a proud Anaplan customer, and much of what you say we find very true in the day-to-day. Let me talk a little bit about Coupa. Coupa has been gaining insights from our customers' data for years: detecting anomalies and leveraging those anomalies and detections to make recommendations leveraging AI. So AI is not particularly new to us, although I thought Sven's sort of three categories -- enablement, optimization, and reinvention -- were really good. Let me give you some examples of things we've been doing for a really long time. We've been extracting data from a PDF and matching it, or leveraging automation rules to populate or fill out an invoice. Doesn't sound like much. Here's what it does. It improves the speed at which you can generate that invoice by about 85% and it reduces the errors in the invoice by 70%. We've been doing that forever.

(21:24):

Essentially since the genesis of the company, we've been generating AI banners at the top of a requisition form. Essentially it says what the requisition is for and the amount that was approved previously, and what it does is speed up the requisition flow by about 25%. That leverages AI. Those things are optimization; I would put them in Sven's category of optimization. Customers are totally unafraid to use that type of technology. When you talk to customers about agents, though, all of a sudden the sensory overload comes into play. People really care about governance. And the reason they do is because you're disrupting their human workflow. You're changing the way the workforce actually works. And I can talk to you about the fact that we have agent technology. We have four agents in market today. We'll have five more in January. We'll have 25 by next December. What is an agent? An agent is actually nothing more than a skill. So let me give you an example of one of the agents we have in market today, which, by the way, we developed with one of our customers, UPS. Here's what the agent does. The agent goes into our system, which again is buyers and suppliers sharing transactional data with one another, and in that data store are RFXs. We generate about 22 million RFXs a month in our system. That agent is a bidding agent. Here's what it does.

(22:54):

It assesses what direct material the customer wants to buy, it assesses what suppliers are capable of supplying that material, it assesses what items from the supplier the buyer wants to buy, and it makes recommendations to that sourcing agent so that the event can occur faster. That's what the bidding agent does. You can imagine how useful it is, if you have a requirement to source something fast, to have an agent who knows the sum of what is happening across 22 million RFXs. So that's something that we do.

HOLDEN SPAHT (23:38):

Do you charge for that?

LEAGH TURNER (23:38):

We do charge for it. Here's how we price.

HOLDEN SPAHT (23:41):

Do we charge for that? Sorry?

LEAGH TURNER (23:44):

We do. And customers are willing to pay because it delivers great value. I would say that's the single most important thing you have to ask yourself when you determine to price or not to price: how much value do I generate for a customer, and is it unique and differentiated, meaning can they get it elsewhere? In the case of these 22 million RFXs and the sum knowledge of what happens across 3,500 customers and 10 million suppliers, that's a really unique offering that you can in fact charge for. We embed AI in our SKUs, which we charge our customers for. We charge for these agents if their skills are unique and differentiated. And we are dabbling with this concept, as Charlie said, of what I would call value-based pricing, which is: if a customer used to be able to do something and it would take them three days, and all of a sudden they can do it in two days, can I charge them for the cost of that day that they saved, or some fraction of the cost of the day? And so we're trying to figure out how to do that. We think it's the holy grail of SaaS pricing, and eventually all software companies will do that if they have the right amount of data and really understand the value that they drive.

HOLDEN SPAHT (25:01):

When you describe it, Coupa really was early in the data rights game, which is a massive advantage. There's so much talk about AI and SaaS and all this, but companies are spending a lot more on labor than they are on SaaS. Do you see a day where Coupa is the outsourced procurement department for these enterprise companies, or where that labor is reduced by 70%?

LEAGH TURNER (25:32):

Without question. And we can even see that today. We have the benefit of having these wonderful customers: AstraZeneca, as I said, UPS, Sanofi, Rolls-Royce. We get to go talk to these customers about what they intend to do. They allow us to come in and effectively do time-and-motion studies with them, watch the way their teams work, and try to help augment those teams. And when we have those conversations, which is the only way, by the way, that we build technology -- by watching our customers, imagining with them how they could do things better, and then actually delivering that for them -- these customers say exactly that: it'd be really great if you had all this data and could do many of the things that we do today without us having to deploy a labor force to do it. So I would imagine we increment into that over time.

HOLDEN SPAHT (26:18):

And why, and this is a question for both of you.

LEAGH TURNER (26:20):

Please, Charlie.

HOLDEN SPAHT (26:21):

But the question is: why can others not replicate that? Is there new competition in this world of AI for Coupa? What are your moats? And then the same thing for Charlie.

LEAGH TURNER (26:38):

Yeah, I thought it was really interesting the way that in the prior panel they talked about this harvesting layer and about how much noise there is there. When you get down a layer and you start talking about data and real business process change, there is far less noise. And so that's where our moat comes in: when we can talk to customers about the fact that there's an $8 trillion transactional data moat that they have access to, that is highly trusted and permissioned, and that they can make decisions based on the sum total of that knowledge, people are really interested. And as long as we stay there, as opposed to getting up into this harvesting layer and the competition that exists there, I think we're in a really good place, and that's where the real value is.

CHARLES GOTTDIENER (27:29):

I would say ours is twofold. The way we think about AI is that LLMs are probabilistic in nature. They're really good at predicting the next word, and they're really good at doing work: ask one to build a business plan and it'll build a business plan. Now, you'll have to work with it to get the right business plan, but it can do a ton of work. And we like that. What's unique about Anaplan is that calculation engine I talked about earlier. And so the way we construct our AI offerings and value to the market is we take the probabilistic value that's generated by an LLM and we marry it with the deterministic answers in our platform. And for planning, answers have to be right. You heard Arvind: I can't have a financial forecast that's sort of, kind of right. It's got to be a hundred percent right.

(28:21):

And the answers that are in our platform, based on our calculation engine and based on customer and other data, are always right. So we're marrying those two, and we're bringing them together through our MCP, which is an instruction set that allows the LLM to communicate with our calculation engine and get the right answer. And so that's a very powerful combination. That's at the root of CoModeler and all the other agents that we have. And that's one big moat. The other moat goes back to data. While we don't have proprietary data, we do have a data ontology and we do have pre-connected planning flows, so I can make those connections across enterprise planning decisions. And that's based on the knowledge that we have from the 2 million models that are on our platform. LLMs don't have that knowledge; that's proprietary to us. And the data ontology is really important because it effectively creates a data schema and structure for the data in our platform, which is the context that we need to leverage AI.

HOLDEN SPAHT (29:24):

I think another important point there: because of your domain and how much domain knowledge you have and all the experience, you're able to leverage SLMs

CHARLES GOTTDIENER (29:35):

That's right.

HOLDEN SPAHT (29:35):

To do those things more cost efficiently than a startup would be able to do.

CHARLES GOTTDIENER (29:40):

That's right. So training those analyst agents is really about an SLM. It's SLM logic.

HOLDEN SPAHT (29:47):

Perfect. Maybe the last question, and then we'll go to Q&A. We've talked so much about innovation, and when I hear you both talk, it is really incredible what the companies are doing. But where would you say we are in the overall enterprise AI adoption cycle? I mean these visions of no procurement people. Are we in the first inning? Are we in the third? Are we in the fifth? Where are we?

LEAGH TURNER (30:12):

Charlie?

HOLDEN SPAHT (30:13):

Sure.

CHARLES GOTTDIENER (30:13):

I'll grab it. So I think we're crawling. I think we're very, very early. If you just think about what's happened over the last two years in AI: two years ago, when I went and talked to customers about AI, it was more of a curiosity. Today when I go talk to customers, it's what are you going to do for me to help make your platform more productive with AI, and I'm willing to spend money on it, right? It's a very, very different conversation. And so that customer demand is going to drive a lot more innovation, because customers are going to demand that we get ahead. They're going to demand that we take advantage of all the latest technology that they're experiencing, and they're going to demand that we do it in a secure, compliant, and ethical way. And so that's just raising the bar, and raising that bar will drive more innovation. But I think we're very early.

LEAGH TURNER (31:07):

I'll say I agree with Charlie. And in preparation for this, I had the opportunity yesterday on my flight to read the most recent Wharton publication, which moved everybody along from the MIT study that came out a little earlier. First of all, they classified this period as accountable acceleration, which I thought was a nice definition. And they also said, counter to the previous report from MIT, that companies are now seeing real positive returns: about 74% of companies that are asked are actually seeing benefits. So I think we're starting to inch into this period where, when you approach someone and you talk about business process change, you talk about workforce transformation or reinvention, you talk about people doing work differently, they're not as afraid. They're open to the idea. They recognize that change is happening, and they want to understand how you might help them think about it.

HOLDEN SPAHT (32:08):

Well, thank you both for your comments. It's amazing. Should we go to Q&A? Q&A from anybody in the audience?

AUDIENCE QUESTION (32:14):

What AI related issues are you most nervous about?

CHARLES GOTTDIENER (32:19):

I'll start.

LEAGH TURNER (32:21):

Thank you Charlie.

CHARLES GOTTDIENER (32:21):

So, I do worry a lot about governance. Even though we've got governance policies, and it's part of our annual compliance program, and others talked about this on some of the other panels, as we get into a world where agents are sharing information, I don't think we know how to control that yet. Just full stop, we as an industry. And so it's a really bad day if an agent gets a hold of comp data and sends it around the company, or it's a really bad day when an agent releases financial information before an earnings report comes out. And so I think there's risk that that happens. The JP Morgan executive talked about those risks, which is why they and others in regulated industries are taking a really measured approach to AI. So that's why I worry about it. It's the unintended consequences of this innovation moving too quickly.

LEAGH TURNER (33:20):

Maybe I'll just agree. Charlie said we as an industry; I would say we as software companies who deal with our customers worry about that, and every type of customer worries about that. So I would broaden it to simply say I think we're all concerned about that, and governance isn't evolving as fast as the technology. I think that's the gap we all feel, and that's why there's reluctance in the market. Maybe I'll try to go to the positive side. I know we're saying that this is evolutionary, not revolutionary, but I do believe in many respects that it's like the industrial revolution. One day you went out and tilled your field by hand, and then all of a sudden you were able to generate enough capital to go buy a machine, and you never did it the same way again. And I think that's what we're in the middle of. I think that businesses, and the companies that provide offerings to them, will never work the same way. And I think that embracing that and really being open-minded about it is critically important. So if I had a worry on the counter side, it would be that we're just not as open-minded as we could be.

HOLDEN SPAHT (34:33):

Perfect. Thank you very much. Appreciate all the insights.

LEAGH TURNER (34:36):

Great. Pleasure. Thank you.

ORLANDO BRAVO (34:43):

Listen to Thoma Bravo's Behind the Deal, Season 4, on Spotify, Apple Podcasts, YouTube, or wherever you get your podcasts.