People & AI - AI Enhancing Customer Interactions with George Davis

Karthik Ramakrishnan sits down with George Davis, the visionary founder and CEO of Frame AI. George shares his journey from academia at Carnegie Mellon to revolutionizing customer support with AI through his company, Frame AI. The episode dives into how Frame AI leverages generative AI and real-time analysis to transform unstructured data from customer interactions into actionable insights, vastly improving customer relationships and reducing operational costs.
April 26, 2024
5 min read

Listen on

Apple podcasts: https://lnkd.in/ga4t4WuZ

Spotify: https://lnkd.in/gBzmKsDE

YouTube: https://youtu.be/5LLdCVpFTlA

Transcript

Karthik Ramakrishnan [00:00:07]

Welcome to another episode of People & AI, where we dive deep into the intersection of AI and customer experience, and how startups are innovating in this space. We'll also discuss broader themes in AI ethics and data privacy. Today I'm very excited to have George Davis, the founder and CEO of Frame AI. Frame AI is a proactive AI platform for enterprise companies and major brands to execute custom AI strategies, transforming very important unstructured data into proactive insight. Frame AI solutions are powered by stream-triggered augmented generation (STAG), a breakthrough architecture designed to apply generative AI to massive volumes of streaming data proactively. This is a break from the industry standard of RAG, or retrieval-augmented generation, which relies on reactive data pulls based on a user's specific need. Today, we're going to dive much deeper into what all of that means, how it's actually implemented and built, and how it differentiates itself from the RAG systems that we are very familiar with today. George, first of all, thank you so much for joining me. Really looking forward to this conversation.

Before we get started, George, it would be great if you could share your journey from your academic pursuits at Carnegie Mellon to founding Frame AI. And what was, I guess, the pivotal or seminal moment when you realized that you had to start Frame AI? Would love to learn that.

George Davis [00:01:36]

You know, it's funny, so I'll get to skip a lot of things in the middle, but there's one sort of thread for me, which is that I've been lucky enough to spend most of my life basically chasing data around. As far as I'm concerned, if you're predisposed to work with computers and to enjoy AI and machine learning, which I was lucky to be exposed to at a pretty young age, we've just lived in this amazing time where we're gathering more and more information about the entire world, but also about how humans interact with each other. And so I've spent most of my career working on that, first in my PhD program on various organizational behaviour and defense intelligence projects, then moving up to New York and into automated trading and digital education. Frame actually came out of a consulting operation called Damyata, where we were essentially trying to work with large enterprises to solve a variety of data problems on their behalf. And that moment that you're talking about really happened while we were working on a project with a Japan-based insurance company, working on how they were supporting Singaporean car insurance customers, which is fairly random. But what was really interesting about it was that they were probably the fourth or fifth company we had talked to over that year that was working with an extreme volume of communication data they were doing very little with. Especially since this was back in 2015 and 2016, when messaging and sort of conversational commerce was first really taking off to an extreme degree in Asia, less so in North America.

But we were seeing firsthand what it looked like for consumers to be connected to services and brands all the time. And we were seeing that they were preferring to use their own words to communicate in a variety of circumstances. But that was generating this enormous amount of data exhaust for businesses that they didn't really know what to do with. They didn't know how to use what their customers were willing to send them over messaging and survey channels and things like that. And so we got really interested in this problem. This is a fundamental change in how humans interact with each other, right? For better or for worse, we're spending more of our time communicating with businesses. How do we make the businesses pick up their side of that equation, in terms of using this to provide better services and to actually listen in a meaningful way to these customers? And so it was really kind of that fundamental realization.

There's a lot of this data out there, and for the first time we were seeing the advent of natural language technology. That was right around the time that "Attention Is All You Need," the transformer architecture, and BERT-based models became a viable way to do really robust natural language understanding. And we said, okay, we have a new human problem, which is this enormous amount of communication data, and we have a new technology that can help solve it. Let's build a hammer and go look for nails. And that's really what we did as a company. And it's been a fantastic journey for me, my first time sort of in a go-to-market seat as a technologist, figuring out how we actually bring this technology to the enterprise, how we make it useful to people in a way that they can adopt efficiently.

Karthik Ramakrishnan [00:05:01]

That's pretty cool. And I'm glad you went into that, because, as you were saying, this was popping up in my mind: generative AI sort of captured everyone's imagination over the last 18 months. But to your point, transformer models came with the "Attention Is All You Need" paper, which came out around 2015 or 2016, around that timeframe, right? And that transition from NLP-based models, and how the evolution went from those BERT-based models to these generative or foundation models, is quite interesting but also phenomenal. Are you saying that you started working on this even before generative AI, before this whole hype started?

George Davis [00:05:42]

Yeah, absolutely. So, I mean, what's really funny is it's not that generative models weren't around back then, but we really didn't take them seriously, because the outputs were so bad from the early models; it hadn't crossed that threshold of relevance. But I'm grateful for the chance to geek out about this part of the journey, because we don't always talk about this now that generative casts such a long shadow. Honestly, having worked on probabilistic graphical models for a decade prior to that, what happened with the transformers and the BERT-based models was really the sea change that most people didn't get to see. For the first time, natural language processing went from being sort of a grab bag of really bespoke tools that you had to kind of twist and turn into a Rube Goldberg machine to get working, into something that was robust enough that natural language understanding, at least, was suddenly really viable. And that was a big part of founding Frame for us: we knew that the difference between making natural language understanding useful and not was going to be how efficiently we could adapt to individual customers' needs. Every business needs to know different things about what their customers are saying. You can't just train one model and apply that same thing to every single business. And the transformer architectures and the BERT-based models for the first time made it really plausible for us to, with relatively little example data, suddenly build really effective models that were specific to each business.

Karthik Ramakrishnan [00:07:14]

So help me make a connection between probabilistic graphical models and generative AI, and maybe we can get even deeper on the technology side or the technical aspects. Your background is in probabilistic graphical models, and you've also worked on social processes and mechanism design. Make a connection for us from that to a generative AI model.

George Davis [00:07:38]

There's a two-step thing here, and it's funny: working in transformer models and then watching the generative revolution take off and kind of working on the back of that, that's kind of the second big technical revolution that I've gotten a front-row seat for. The other one, if you rewind back to 2010 or so, before some of our listeners, I'm afraid, had graduated high school: back then we really thought probabilistic graphical models were going to be what deep learning ultimately became. They were a flexible framework for doing many different kinds of machine learning tasks. But most importantly, and this is what attracted me to them, probabilistic graphical models are things like Bayes nets and Markov networks. You used to hear a lot more about these in technical circles than you do now because of the spotlight on neural networks.

But what they allowed is they allowed an expert to sort of describe what they already knew about a system in a way that guided a machine learning model. And I've always been very fascinated by that aspect of what it means to make a machine learning model useful. You know, there's domain knowledge that every organization, every area of study has, that needs to be properly represented. And so PGMs for me were a way of working in machine learning such that I could take in the knowledge of the people that I talked to. So when we were working on analyzing shipping traffic and trying to predict accidents like the ones that happened in Baltimore, you needed to be able to represent the structure of understanding: how do ships make decisions? Who are the different actors involved in this? Et cetera. We were able to do all that in the context of PGMs, and I see that as a very important phase. Since then, obviously, PGMs weren't as effective at utilizing hardware; they weren't able to scale as effectively as neural networks to use more data. And for that reason, we now use neural networks for everything in machine learning, because it's been so much more possible for us to build effective models.

But one of my missions at Frame is to bring some of that thinking that we had in the PGM days into this generative era. So if we're going to apply large language models, both to understand language and in some cases to generate it, to summarize information that people are going to use, how do we bring the essential domain knowledge into that process? How do we not just give sort of a generic summary of a customer conversation, but exactly the summary that's relevant for a product manager at your company, exactly the summary that's relevant for a compliance manager at your company? How do we pull out the things that are specific to your workplace? So that opportunity to express that structure is something we've had to graft onto large language models in our technology.
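As a rough illustration of that idea, the sketch below steers an LLM summary with role-specific structure. The role names, categories, and the call_llm function are hypothetical stand-ins for illustration, not Frame AI's actual interface.

```python
# Hypothetical sketch: steering an LLM summary with role-specific domain structure.
# `call_llm` stands in for whatever completion API is actually used.

ROLE_SCHEMAS = {
    "product_manager": ["feature requests", "usability complaints", "competitor mentions"],
    "compliance_manager": ["disclosure issues", "complaint language", "regulated-product mentions"],
}

def role_specific_summary(transcript: str, role: str, call_llm) -> str:
    """Summarize one customer conversation for a specific internal audience."""
    focus = ", ".join(ROLE_SCHEMAS[role])
    prompt = (
        f"Summarize the customer conversation below for a {role.replace('_', ' ')}.\n"
        f"Only report on: {focus}. Omit anything outside that scope.\n\n"
        f"Conversation:\n{transcript}"
    )
    return call_llm(prompt)
```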

Karthik Ramakrishnan [00:10:38]

Some of the fundamental things that were being done 20 years ago, or even 30 or 40 years ago, are finding an outlet now through AI. But I'll stop there, because I think we could go on about that; that's a whole other show by itself. Let's come back to Frame, though. Frame AI is described as the proactive AI platform for the enterprise. Now, for our listeners, including me, who are hearing this for the first time, can you explain what it means and how it differentiates Frame AI in the AI landscape?

George Davis [00:11:12]

Yeah, absolutely. So let's pick that apart. The first thing is we call ourselves a proactive AI platform. The AI platform is something a lot of businesses are familiar with now. We've got wonderful tools like Databricks and tycoon and others that are basically helping large businesses in the process of developing and deploying models that help them with tasks around the enterprise. And we now have about ten years of experience in the enterprise with using these tools, mostly around structured data: hey, how can we make a better credit model? How can we make a better prediction of a lead score for a sales lead or a marketing lead? Et cetera. Those things are out there; they're in production, those AI platforms.

The capability exists at an enterprise, as does the function it fills: you can have AI features built into a tool, like your contact center. But the reason that you need an AI platform is to centralize some of your investments, and especially the kinds of investments that lead you to be able to trust the system. The difference between trusting an out-of-the-box feature that's built into your help desk or your contact center, versus being able to tune and train and evaluate a model yourself, is really, really important at enterprise scale. Building up that trust, which I know you're in the middle of as well, Karthik, is what AI platforms do for enterprises. They let them centralize their investments in AI that affect their operations. And the proactive part that we're trying to add is that we're specifically focused on making a very short feedback loop between data coming into the system.

That unstructured data stream that I was talking about, the natural language that's coming out of tens of thousands, hundreds of thousands, millions of customer interactions every day for many of our customers, that's generating these streams of unstructured data. And typically, the process by which they affect a business can take quarters, not days, not weeks, not months, but multiple quarters, for someone to recognize something important there and build some kind of system and process to analyze it, because it happens at the speed of anecdote. So our proactive AI platform is about analyzing, in real time, the trends and the individual experiences that are coming through these streams of communication you have with your customers, and using that to feed into processes that either make those customer relationships better or reduce the cost of serving your customers. So it's proactive because we're taking the data from wherever it comes in, support surveys, your contact centre, even internal ticketing around your customers, and taking it to the team that can act on it as fast as possible.

Karthik Ramakrishnan [00:14:05]

Great. If I can summarize this, and how I'm grokking this, if you will: RAG-based systems, retrieval-augmented generation systems, are looking more at static content. So if you have documents, policy documents or whatever they may be, they can retrieve that information and produce outputs based on that particular context. But it's static. STAG, and you'll have to help me with the full form of STAG here, but STAG is more for real-time content. So in real time, you can effect change in how you respond to the particular context. Maybe it's a call centre, maybe it's something else. Am I correct in saying that?

George Davis [00:14:53]

Yes, exactly. So RAG, retrieval-augmented generation, is becoming the most accepted framework for applying large language models to enterprise tasks, and for good reason. It's a great framework, it's a great way to build query-response tools, but it involves these implicit steps of curating a data set that you're going to retrieve from, and very carefully organizing the prompts and the way that's all set up, so that it generates something in response to a question that somebody has in an existing process. So essentially, RAG answers the questions that you already knew to ask and prepare for, which is important; there are a hundred use cases for that around the enterprise. But there's this whole other set of things: what about the things you didn't know to ask about because they're emergent? What about the streaming data that you have coming in, where you need to be proactively informed about what's happening there, as opposed to asking a question about it after the fact? That's what STAG is.

Stream-triggered augmented generation means that we basically push the stream through a series of models that identify, in real time, subsets of the data that are worth analyzing, push those through an LLM, and bring the result to a team that can actually act on it. So you get a proactive report that says, hey, there's an emerging pattern with this type of user that's coming into your marketing chat, and they are asking about this Taylor Swift event. Here is a proposed answer, based on the conversations with customers that went well, that you could add to your knowledge base, or you could develop a marketing campaign around it, and so on. That gives you a proactive response to what's happening today.
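For readers who want to picture that flow, here is a minimal sketch of a stream-triggered loop under some assumptions: cheap trigger classifiers watch the message stream, and only flagged batches are sent to an LLM and routed to a team. The types and callables are illustrative, not Frame AI's implementation.

```python
# Illustrative sketch of a stream-triggered augmented generation (STAG) loop.
# Lightweight trigger models watch the stream; only flagged subsets reach the LLM.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Message:
    channel: str       # e.g. "marketing_chat", "support_email"
    text: str

def stag_loop(
    stream: Iterable[Message],
    triggers: dict[str, Callable[[Message], bool]],   # cheap classifiers per topic
    summarize: Callable[[str, list], str],            # LLM call over a flagged batch
    notify: Callable[[str, str], None],               # push the insight to the owning team
    batch_size: int = 50,
) -> None:
    buffers: dict[str, list] = {name: [] for name in triggers}
    for msg in stream:
        for topic, is_relevant in triggers.items():
            if is_relevant(msg):
                buffers[topic].append(msg)
                if len(buffers[topic]) >= batch_size:
                    insight = summarize(topic, buffers[topic])  # generation happens proactively
                    notify(topic, insight)
                    buffers[topic].clear()
```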

Karthik Ramakrishnan [00:16:32]

So you just illustrated what was going to be my next question; I'm trying to visualize actual use cases where this would be useful. You just gave us a marketing context where, let's say, in real time I'm trying to book tickets for Taylor Swift, and of course her concerts are sold out way in advance, or in seconds. So you could actually take that information and provide it to, let's say, Ticketmaster agents when they get a call, and be like, yep, they're gone. Here's what you could do, or how you could react to that. What other types of use cases could someone think of for using STAG?

George Davis [00:17:09]

Yeah, so a lot of our use cases fall into support, marketing and compliance, and the large businesses that we work with often have all three of those. Within support teams, what we're doing is analyzing what trends are emerging that are either driving costs or poor customer experiences. And the feedback loop there is being able to identify those rapidly and propose either new knowledge-base content or changes to your process that can help reduce those costs or improve those outcomes. So that's something that stays within a support or customer-facing operation. Then on the marketing side, use cases include things like proactively identifying campaigns, and also identifying traits that make somebody relevant for a given expansion in the conversation. So if someone's into Taylor Swift and they're asking about a particular Nike shoe that she wore, they may also be interested in some other product. Maybe you have some other product line that's relevant to Swifties.

Now that we're stuck here, I'm going to have to have all my examples be Taylor Swift related.

Karthik Ramakrishnan [00:18:15]

There's nothing wrong with that.

George Davis [00:18:18]

It's been going through my household. But that kind of cross-sell identification is actually really important in some industries. If you look at financial services, health, et cetera, both the customer experience and the value of the customer relationship really depend on the customer having a holistic experience. If you're a customer, you experience one relationship with your bank, even if you're using that bank for banking, for mortgages, for investment. And so it's frustrating as a consumer, even if we intellectually acknowledge that it's different to talk to the mortgage department than to talk to the banking department; it's kind of irritating when one side doesn't know what the other one is doing. So by being able to identify traits in those conversations that are relevant across those teams, we're able to help businesses give a more coherent experience and find opportunities to serve the customer better.

Karthik Ramakrishnan [00:19:16]

Awesome. And so now this is going to take me to sort of the next step. Just picking up that example you mentioned: is there an opportunity for RAG systems and STAG systems to work in conjunction? Is that a better experience? And does STAG end up improving the accuracy and relevance of this type of content?

George Davis [00:19:45]

Yeah, yeah. And relevance is the keyword, right? Getting back to this theme of trust, I think there's this interesting aspect that a result can be wrong because it's actually factually wrong, because it's a hallucination of an LLM, or it can be wrong because it's obvious or irrelevant from the user's standpoint. You can have accurate information that's just not very useful. And so, in practice, in most enterprise applications, what we struggle for isn't whether we find something that's true, it's whether we can make that true thing useful, and narrow down to the true things that are useful in that context. That's the job of STAG, and STAG has a hard job. In a RAG-based system, you curate your data and then you get to wait for a query, and at query time you know a lot about what the customer cares about and you're able to get exactly what you want.

In a STAG-based system, it's kind of reversed. You have to know about all of your queries ahead of time. I have to already encode into our systems what marketers care about, what support leaders care about, and we have to do as good a job as possible at identifying that, so that our insights are as relevant as possible to them. RAG is really useful in the last mile after you do that. So given how hard it is to find those useful items, prioritize the task of finding relevant information, and RAG systems get to be a good way of acting on that information after the fact. They let you say, okay, given what you've told me, here's what I want to do with it; can you help me with this task? And so we actually have several customers where our STAG-based insights feed into a RAG-based action system that helps them, for example, design a marketing campaign or co-write the piece of knowledge-base content that responds to the trend that we've identified.
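A hedged sketch of that chaining might look like the following, where a proactively surfaced insight becomes the query for a downstream RAG-style drafting step. The retriever and llm callables are placeholders rather than any specific vendor API.

```python
# Sketch of chaining: a STAG insight becomes the query for a RAG step that drafts an action.

def act_on_insight(insight: str, retriever, llm) -> str:
    """Use a proactively surfaced insight as the query for a RAG-style draft."""
    docs = retriever(insight, top_k=5)              # pull curated context relevant to the insight
    context = "\n\n".join(d["text"] for d in docs)
    prompt = (
        "An emerging customer trend was detected:\n"
        f"{insight}\n\n"
        "Using only the reference material below, draft a knowledge-base article "
        "that support agents can use to respond.\n\n"
        f"Reference material:\n{context}"
    )
    return llm(prompt)
```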

Karthik Ramakrishnan [00:21:35]

That's awesome. Now, I do see, and help me understand this, there's a risk here too. Typically with RAG systems, because of the static content, there's a whole concept of guardrails. So you put guardrails around the content that is produced, so we're not veering off the course that we set; but that's because the content, to your point, is curated, so we know what's there and the output is going to be along those lines. We just need to control for hallucinations, and you put guardrails in to protect against that. With STAG, it seems like you actually have very little control over the content, because it's real time, it's not curated. So how do you deal with, again, that guardrailing of the outputs?

George Davis [00:22:21]

Yeah, it's a great point, and I think it's one of the most challenging, important parts of designing an effective system here. I wouldn't say that you have no control; the real issue is that it's just aligned differently. When you're curating a dataset, you have a chance to curate it: during that curation phase, you get a chance to really think hard about who should see what, what the minimal set is, and you can do things like redaction along the way. And exactly as you say, with stream-generated analysis, you have to worry that at any point in the stream somebody could say something, or include personal information, for example, that's not acceptable to a given audience. So we depend a lot on using redaction capabilities, as well as different kinds of alerting capabilities in the stream, in order to triage that information. And there are aspects of integrating with other enterprise controls.

So, for example, making sure that an insight that incorporates a stream with privileged information in it is only visible to the users who are authorized and have an appropriate use for that information, say, in a healthcare context. That's something we've wrestled with a lot in regulated industries like healthcare. And what you end up needing is a subsystem that just manages permissions alongside the rest of the streaming architecture, to understand what might actually be present in each of those streams so that you can restrict access appropriately.
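One plausible shape for such a guardrail subsystem is sketched below: redact obvious PII as the stream is processed, and attach sensitivity labels so an insight is only shown to cleared roles. The patterns, labels, and roles are assumptions for illustration, not Frame AI's actual controls.

```python
# Sketch of stream guardrails: PII redaction plus label-based access checks.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

ROLE_CLEARANCE = {"support_agent": {"general"}, "compliance": {"general", "phi"}}

def redact(text: str) -> str:
    """Replace recognizable PII spans before any downstream analysis or display."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def visible_to(insight_labels: set, role: str) -> bool:
    """An insight is visible only if the role is cleared for every label on its source streams."""
    return insight_labels <= ROLE_CLEARANCE.get(role, set())
```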

Karthik Ramakrishnan [00:23:56]

Awesome. And as a framework, are you providing the end application or how does one consume this as an enterprise?

George Davis [00:24:05]

Right. Yeah. So, you know, I often call us a piece of infrastructure. There is what we call the Frame Hub, which is a business intelligence tool that you can use to explore all the data that we're annotating and the insights that we've generated. And that's where a lot of this comes in, of how we actually restrict access to the right people and the right types of data. But in practice, the use case for the Frame Hub is usually for the data teams or the top-level business stakeholders who are first setting up a system. For the most part, what Frame does is push data out into the systems you already use. So if you are using Salesforce Service Cloud to interact with your customers and we have insights that are relevant to your service team, then those will be pushed out through webhooks and adapted into your Salesforce instance, so that they can basically help teams where they work. We have a poster we send around that says, "I can't wait for one more tool," said no team ever. In practice, if we're trying to help operational processes, the last thing most operators need is a new place to go.

Our philosophy is very much: you don't have to interact with Frame. Frame is going to analyze the data for you and push the thing that you need into the system where you're working.
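As a minimal sketch of that push model, assuming a generic JSON webhook rather than any documented Frame AI or Salesforce contract:

```python
# Sketch: pushing a generated insight into an existing system via a webhook.
# The endpoint and payload shape are illustrative assumptions.

import json
import urllib.request

def push_insight(webhook_url: str, team: str, insight: str) -> int:
    payload = {"team": team, "insight": insight, "source": "stream-analysis"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The receiving system (help desk, CRM, etc.) adapts this payload into its own objects.
    with urllib.request.urlopen(req) as resp:
        return resp.status
```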

Karthik Ramakrishnan [00:25:23]

Interesting. And I think last question on this front, how much customization is involved when you put a system in place, right? Is it the context of the use case, the data, etcetera?

George Davis [00:25:35]

Yeah, I think that's one of the most interesting lifecycle questions for almost all AI businesses, because the fundamental promise of AI is that it's going to tell you something you didn't know on your own, right? And given that, you're basically promising someone a surprise. In most products the goal is to minimize surprise, but in AI products you'd better be able to offer some sort of insight, and then by definition your customer doesn't really know what that's going to do for them until they try it. So it's very important that AI be fast to demonstrate value, so that your customer can understand and have confidence that it's worth their investment. On the other hand, what we've discovered, and this comes back to the trust and accuracy side, is that really making AI useful to the enterprise does involve customization. It involves adapting it to their use cases. It doesn't do any good to be notified about a generic interest in shoes if you're Nike; you need to have an interest in specific product lines and the details that you use to segment customers. So our approach to that is that the Frame platform ships with a lot of built-in models.

We have models that help you do predicted CSAT and help estimate the amount of effort that customers are expending, both online and offline, as expressed in their communication: the types of metrics that are very relevant across industries. And we also have some industry-specific packs to analyze things like app usability, et cetera. So we have these pre-built models that show you an idea of what the AI is capable of. Then the next phase is that Frame is a very extensible platform where we can rapidly train models on a very small number of examples that are specific to your business, so you end up with scores and alerts that are aligned to your taxonomy: how do you segment customers? How do you differentiate one type of support issue from another? That type of customization is something that we find unlocks the value. We want to deliver positive value with the out-of-the-box models, but it's really ten x the value with the incremental additional work to customize.
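To illustrate what training on a handful of business-specific examples could look like, here is a simple few-shot taxonomy scorer using nearest-centroid embeddings. The embed function and the example taxonomy are assumptions, not Frame AI's actual training pipeline.

```python
# Illustrative few-shot taxonomy scorer: embed a few labeled examples per
# customer-specific category and score new messages by cosine similarity to centroids.

import numpy as np

def train_taxonomy(examples: dict, embed) -> dict:
    """examples maps a business-specific label to a few example utterances."""
    return {label: np.mean([embed(t) for t in texts], axis=0)
            for label, texts in examples.items()}

def score(text: str, centroids: dict, embed) -> dict:
    """Return a per-label similarity score; downstream alerts can threshold these."""
    v = embed(text)
    return {
        label: float(np.dot(v, c) / (np.linalg.norm(v) * np.linalg.norm(c)))
        for label, c in centroids.items()
    }
```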

Karthik Ramakrishnan [00:27:34]

Awesome. Do you think, with STAG, are you the only company working with this sort of approach, or do you think this is going to become an industry standard, similar to how RAG has become sort of a de facto approach, right?

George Davis [00:27:54]

Totally. So we coined the term STAG in part to announce and attract the community that's working on this. I have no doubt that there's a latent community of people working specifically on the challenges of bringing streaming data into generative systems. We've seen versions, under different names, of people applying RAG to streaming data and things along those lines; there are a few different papers like that out. I think this is a fundamental pendulum swing I've had the privilege of seeing in several technologies over time. You go from focusing on database management systems to event processing systems. Eventually you do enough complex event processing and everybody ends up back in the database world saying, hey, how can I just answer this with a query instead of pre-registering all these events? As humans, we constantly face this pull between doing things on a query basis and then doing things on a proactive basis.

And in practice you need to have both. So I have zero doubt that there are other people working on it, and that it will emerge as a standard that doing analysis of streaming data using generative systems is an important use case. And I hope that by sharing this term STAG, we're able to build that community around it so that we can share best practices, because there are a lot of things that are different about building an effective STAG system versus a RAG system.

Karthik Ramakrishnan [00:29:23]

Yeah, definitely. And look, I think it's important. This world we live in today is increasingly real time. With the tsunami of data that we're seeing today, it's important to be able to parse it as quickly as possible and then drive insights. It seems like that's where you're going with what you're doing.

George Davis [00:29:50]

Totally, Karthik. And yeah, when I think about it, our most contested resource is attention. The big challenge with systems that start with a query is that even if you can make the query as simple as pushing a button, how many times do you have to get dragged into different apps on your phone just to push one button? We need to have systems that spend less of our attention and do more proactive work on our behalf. When I think about the AI I want, yes, of course, everybody wants Iron Man's Jarvis to be out there, but the coolest moments, even in those movies, are the ones where the suit does something proactive on your behalf, right? We need AI that, like a good friend or good coworker, can tell you the things that you didn't know you needed to know, and can restrict itself to not taking your attention when it doesn't need to. So I think if we're going to survive the sort of attention bottleneck we're at right now, that's one of the major places we need to apply AI: deciding proactively what needs our attention.

Karthik Ramakrishnan [00:30:52]

And so, like I said, search today could be seen as sort of static too, right? You're basically looking at what's out there. But could this be attached to search, whereby you're augmenting search results with this real-time information? I mean, Perplexity should be all over this. I don't know.

George Davis [00:31:14]

I totally agree. I would love to have that; I mean, many of us have set up alerts for different searches, right? Like email feeds for this or that. And doing that with a combination of AI would be really powerful. Maybe we can co-fund that, Karthik, since we're both busy.

Karthik Ramakrishnan [00:31:32]

I'll ping you right after this.

George Davis [00:31:36]

I think there's a catch, though, you know, back to this issue of trust. Proactive systems arguably require a lot more trust than query-driven systems. Think about what happened when Facebook transitioned to the feed, right? It immediately became this question about whether the choices the feed was making on our behalf were the choices that were healthiest for us as well. And I've had people work on both sides of that problem, working hard on how to optimize it in different ways. So I think the same thing is true here. We need AI to be able to help us manage our attention by sitting on top of search and things of that nature. But if we're going to do that...

We need it to be very trustworthy. We need to have ways of evaluating: is this really working on our behalf? Is it working towards the goals that are specific to us, or is it working towards a generic set of product goals, or, even worse, on somebody else's behalf, using our attention?

Karthik Ramakrishnan [00:32:30]

Yeah. So you're hitting on exactly the right point. Obviously, with Armilla, this is what I've been thinking about for the last few years, so it's very close to my heart. So how do you ensure that? I think you touched upon it earlier in the conversation, but can you kind of peel the layers back a little bit? How do you think about ethics? And ethics is a very amorphous term.

George Davis [00:32:58]

I know.

Karthik Ramakrishnan [00:32:59]

Right. So how do you think about the trust building and, let's just say, guardrails, or ensuring the ethical use of your systems and their outputs?

George Davis [00:33:09]

Absolutely. I mean, I think it's interesting; I think we're fortunate that we're in a time where pragmatism and ethics are actually very well aligned, because people are waking up to how important these systems are and how much they can affect them. So we have a lot of privacy regulation worldwide, and it is getting stronger and stronger. From a business perspective, on behalf of our customers, we have to help our enterprises act ethically, even just so that they can pass regulation. And then also we want to act ethically ourselves, in terms of applying AI in ways that actually benefit people's human experiences. So anyway, I think it's a time where there's a lot of regulatory scrutiny on this, and so there's a lot of market demand for us to invest in that. And the way we're investing in it: one thing is that we really focus on consented environments, right, where customers are aware that they're communicating on a channel that is being monitored, that is going to be optimized, and that is going to be used on their behalf.

So we have shied away from certain use cases. There are all kinds of use cases for STAG technology on internal processes and in places where people aren't necessarily expecting to be monitored, even if they should be. We're not interested in those. We are working on the use cases where it's understood that there's a process here, a business transaction, that people are trying to optimize and do better on, on both sides of the equation. So one thing is just starting where you're welcome in terms of the data flow, and being able to meet businesses where they are in terms of maintaining those privacy restrictions. As an example of that, something we've had to invest a lot in is a bring-your-own-cloud architecture where, for regulated industries, we can deploy inside their VPN and allow them to have complete control of data governance. We don't train across customers on each other's data or anything along those lines. So it's helping them maintain that isolation and working where they have consent from their customers to do analysis. And then it's actually making sure we're using it on customers' behalf. There are a lot of ways that you can use AI to dramatically reduce the cost of a customer experience.

For example, by automating it; but many times that also reduces the quality of the customer experience and pushes a lot more work back onto the customers. Those are use cases that we also are not focused on. We're focused on increasing the value of customer relationships. We do also help make operations more efficient, but not by automating away the existing workers; rather, by helping to route the whole system so that there are more effective answers earlier for the customers. So again, it's a long answer because there are so many different types of ethics: the privacy of the situation, what we're actually doing on customers' behalf, how we interact with the existing labor force that faces customers. I think the good news is there are really, really positive roles for AI to play across all of those channels, in terms of basically helping information go where it's wanted and needed faster.

Karthik Ramakrishnan [00:36:18]

That's exactly it. I think informed consent is one thing; there's implied consent in certain situations, as you described, where value is understood to be derived by sharing the information, by utilizing the information. It's interesting: as these systems provide more and more of this intelligence, what is the role of the human? What's the role of the customer support agent? Now, if I extrapolate, let's walk through a scenario. You've got your systems producing these outputs, and let's say we have a deepfake customer voice engine, and I wouldn't say deepfake, but just a voice generation engine or a video generation engine, that could take that information and repeat it back to a customer.

The human is completely out of the loop here. And that's kind of an extreme scenario, but a likely scenario too, at some point.

George Davis [00:37:18]

It's happening already. We deploy alongside customer service automation all the time. Although we don't directly automate that service, we complement it by feeding relevant data and content back into it, and so on. So yes, exactly as you say, there is an ongoing process by which AI is replacing a lot of customer service interactions.

Karthik Ramakrishnan [00:37:41]

Okay, so you acknowledge that. But how do you see the customer service agent's role evolving, or is it going to be eliminated?

George Davis [00:37:49]

Yeah, so I always believe in approaching these problems honestly, and I'm a little bit privileged to do so because we're not necessarily in the middle of that process of automating those jobs. But realistically, from where I sit, I do see a lot of that automation happening, and it's going to affect the number of customer service jobs that exist worldwide over the next few years. So what do we do about it? Well, we want to make sure that there are new opportunities being produced by AI, and there's a lot of lip service given to this, but there are also ways to really do it. If you can find ways where better, finer-touch interaction with customers actually improves value for the business, then who better to move into those roles than people who have already gotten experience with those customers, etcetera. This happens a lot, especially in financial services, where you have a very commoditized product for the most part and you really differentiate on how well you can serve your customers. If we're able to show opportunities where a high-touch, high-expense personal interaction is worthwhile, then that becomes an expansion in the outbound call centers and the relationship management roles, etcetera. Hopefully we'll see customer service turned from a minimum-viable-product approach, of outsourcing and having the least amount of training that can possibly man the phones, into something that's perceived as a source of revenue, a new type of job that the most motivated and effective people can move into, that is understood to produce value with that interaction.

That's a level beyond what a real time AI is going to do. Now, I'm not going to make any claims about where that's going to fit in the broad macroeconomics of the workforce, but I think what we can do is we can try to find those new opportunities that do create opportunities for as many people as possible.

Karthik Ramakrishnan [00:39:48]

Yeah, true. And the jury's out. But I mean, I think I'm with you on that. I believe in human ingenuity. And I think, you know, throughout history, from day zero, this has...

George Davis [00:40:02]

Always been the case.

Karthik Ramakrishnan [00:40:03]

We've evolved. I mean, we've constantly found that technology takes away some things, but it really opens up the aperture on what else one can do. Right?

George Davis [00:40:15]

And I think the secret to making sure that continues is to keep the technology available to as many people as possible. That's one reason we're very focused on participating in the open-source LLM ecosystem and so on. If you have a great new automation technology, but it can only be applied by a small number of companies in a small number of places, the incentive is really not there to find all of those new use cases that give people new roles to play in this new economy. In contrast, I think the more widespread it is, the more new roles will be discovered.

Karthik Ramakrishnan [00:40:47]

Yeah, look, let's shift gears a little bit. This has been awesome. You successfully transitioned from academia to finance and then to founding a tech startup. A lot of folks in academia are making that transition too, especially, as you said, people who were in high school ten years ago; they're coming in with their PhDs, they've spent a lot of time researching and building new approaches, like the STAG work you've done. What challenges did you face in this journey, particularly in scaling an AI-centric company in this competitive market? But before that, the whole transition in and of itself?

George Davis [00:41:31]

Oh, man. It's really funny, because I've definitely waxed and waned in terms of feeling like, was it more of a handicap to overcome or something? You know, an academic mindset is wonderful in this one way, which is that the academics I know who have made the transition to the startup world are the ones that have let their curiosity stick with them, because there's so much to learn. My biggest takeaway from my own experience is that it's very easy for academics to assume that things outside of their expertise are, you know, easily knowable: how to do go-to-market, how to do sales, what actually governs buying. You could fill books and books with what I didn't know about how enterprises actually need to buy technology. And it's easy to grouse and say, oh, they can't see the value, or something along those lines. But no, these are human systems; the way that they operate, they need to be able to buy things in certain ways.

They need to be able to make good decisions collectively together. If you can respect that and treat it with curiosity and learn about it, then you're gonna eventually prosper. And I think that's the hard part. If you leave academia for industry, don't feel like you're bringing this big backpack of expertise. Feel like your biggest asset is the curiosity you've already demonstrated.

Karthik Ramakrishnan [00:42:53]

I think you nailed it. There's a bit of Mandalorian here, "this is the way," right, that academics bring to the table: I know this is the way that it can be done, it should be done. Versus, okay, how do you adapt what you know to the situation, to the environment, to the customer context, all of these things? And that adaptability: if you have the curiosity to learn, then you will have the flexibility and resilience to adapt, right? And be open to being challenged.

Right, to have your assumptions about things challenged. No, I think that's absolutely right. I'm not saying everyone is like that, but I do see academics struggle with it a little bit in the initial days, and the ones who make a successful transition are the ones who roll with those punches. Now, what advice would you give on finding the right problem to solve as an academic? You come in, you've done a bunch of research, let's say in this case on the STAG-based systems you've been researching. How do you take that research and find a real-world problem to apply it to and solve for, something that actually has a market, and get revenue and secure funding? Not that funding is the goal, but then you're able to get the funding to be able to deliver that to the world.

How did you think about that? And what advice would you give to academics who are struggling with that?

George Davis [00:44:30]

It's maybe very tempting, when you come in with a lot of expertise and you're suddenly faced with everything you don't know... one answer to avoiding that problem is to say: we're going to create something new, and people are going to love it so much that they're going to come to it and see that this is a great way of doing things. As you said, "this is the way," you know, building new processes. I would put a strong emphasis on identifying an existing problem, an existing decision that you think you can help with, and really anchoring on that first. You may have a totally new way of doing it. You may have new data to bring to the problem. Your methodology may be newly available, or whatever. But in terms of the interface it has with the world, make sure it's something people already wanted, that people were already trying to do themselves.

If you can start with that, and start with that process, you'll gain the credibility to guide them into something else, if that's something the world really needs. But you also gain the knowledge about why they're not there already. In general, my experience is that ideas don't happen once, in one place, at a time. Whatever great idea you've had is probably already around, and if it's not present yet, then it's probably faced some critical blockers. And you need to go for something that is known enough that you can learn where those blockers were, if you want to bring about the change that you're trying to bring about.

Karthik Ramakrishnan [00:45:55]

Cool. I mean, I couldn't agree more. I think this is a standard story as well. It's not just about the idea; it's really about the applicability. What do you do to get the real-world feedback? And again, opening up the aperture to understand what problems exist, or whether you can do something that's being done today better with what you have, starting there and then, of course, evolving from there.

George Davis [00:46:25]

The cliche version is: first-time founders focus on product, second-time founders focus on distribution. And I think that's actually right. We talked earlier about what the difference was, why BERT didn't capture the public imagination the way generative AI did. The thing about the generative revolution is how accessible it was, instantaneously. Anybody with no background was interacting with these things and finding uses for it. That's a distribution innovation.

Karthik Ramakrishnan [00:46:51]

It's productization. Well, I think it's a bit of both, having been around and having worked with these models and these types of applications. I think, fundamentally, it was not just distribution; it was also the productization of a model. We were all focused on the models themselves: how good are they at NER, recall, et cetera, et cetera, and then customizing them to every single use case. The generalizability was the first breakthrough, and then the productization of that in a question-answer format, to ask it anything and it will try to give you an answer. And then distribution: put it on the web, just put it on the Internet, and then everyone can experience it.

George Davis [00:47:40]

Preparing for distribution is great productization; you're totally right. It wasn't until the product worked in a way that was attractive to interact with that it could be distributed in that sense. But, yeah, I think, again, the revolution is in the surface area of how many people can appreciate it. And as academics, we're very often used to doing things that only a few people can appreciate, and that's not how you want to build a business. You want to build a business by finding a large group of people who can appreciate the thing that you're bringing to them, and then really focusing on how you do it.

Karthik Ramakrishnan [00:48:09]

That's it. Okay, looking into the future, maybe let's do some rapid fire. Where do you see AI going next? Right now we're in transformer models, generative AI. Is this it? Where do we go from here?

George Davis [00:48:26]

Well, I obviously have my axe to grind about proactivity, about applying AI in streaming use cases, not over-utilizing our attention, and instead proactively working on our behalf and finding those opportunities. We've given that a lot of air time. So the one other thing I'll say is in this area of how we express ourselves to the AI. Not everybody is going to train a large language model, but the systems that we interact with have to know us. We don't want to re-express everything we care about in every prompt that we ever write. We want to have AI that has context about our lives. So in my mind, it's bringing that structured information into the process, both as individuals and as businesses. We're going to see a lot more AI that combines unstructured data with structured configuration and structured information in order to get things done.

Karthik Ramakrishnan [00:49:18]

Amazing. Any upcoming features or projects that we should be looking out for from Frame AI? What's next for you guys?

George Davis [00:49:28]

Yeah, we're really excited. We're working a lot right now on being very easy to deploy for a larger number of customers. Right now, we're growing at the pace that we can deploy, as opposed to growing at the pace of demand. It's certainly better than the alternative, but it's still a blocker. So we're trying to make sure that we're interacting with some of the marketplaces. And exactly the direction that we want to go is doing better at feedback loops. Imagine receiving an insight from one of these systems, like the Taylor Swift one I gave you.

Hey, it turns out a lot of people are buying these shoes because Taylor Swift wore them in a music video. What else can we talk to them about? That's a very reasonable question to ask. The first time that comes up to you as a marketer, you may have a great piece of feedback to feed into that, which should inform all future interactions that Frame brings to you. We're working on closing that loop so that it's very efficient for somebody to say, hey, this insight was wrong because of this, or it was right because of that, and then that becomes part of what we use in the future. We already have that loop, but it's a long feedback loop right now, and we're trying to make sure that it's something that can happen relatively instantaneously for users.
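A minimal sketch of that kind of feedback loop, with hypothetical function names rather than Frame AI's API, might record human verdicts per topic and fold them into the next generation pass:

```python
# Sketch: capture human verdicts on insights and condition future generations on them.

from collections import defaultdict

feedback_log = defaultdict(list)

def record_feedback(topic: str, insight_id: str, correct: bool, reason: str) -> None:
    """A marketer or support lead marks an insight right or wrong, with a reason."""
    feedback_log[topic].append({"insight": insight_id, "correct": correct, "reason": reason})

def feedback_context(topic: str, limit: int = 5) -> str:
    """Render recent verdicts so the next generation pass for this topic can take them into account."""
    recent = feedback_log[topic][-limit:]
    return "\n".join(
        f"- Previous insight {f['insight']} was "
        f"{'confirmed' if f['correct'] else 'rejected'}: {f['reason']}"
        for f in recent
    )
```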

Karthik Ramakrishnan [00:50:48]

Interesting. So making it a learning system, a more proactive learning system.

George Davis [00:50:55]

Yeah. I would argue that where we are is kind of the learning version where we retrain models. And I want it to be explicitly and actively coachable. I want it to be something that's interesting.

Karthik Ramakrishnan [00:51:04]

Okay, that's a good clarification. Where can we learn more about Frame AI and stay updated with the latest developments?

George Davis [00:51:13]

Thank you, I'm glad you asked. Just head over to Frame AI, that's our website, and you can also follow us on LinkedIn; we're just Frame AI. Looking forward to seeing more of you and your listeners. I can't wait to see what's in the future for AI and trust.

Karthik Ramakrishnan [00:51:32]

Amazing. Thank you, George. This is one of the most fascinating conversations I've had. It's really cool to see what Frame AI and you are bringing to the table. I think this is super important, redefining customer support, and I loved hearing your journey of getting to Frame AI. So for our listeners, thank you for tuning in. Don't forget to subscribe to stay updated on the latest trends and conversations in AI and beyond.

George Davis [00:52:01]

Thank you.