Omri Allouche: Gong AI

An interview with Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams.


We’re excited to welcome to the podcast Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey with a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate.

Listen on Apple, Spotify, and YouTube.

In our conversation today, we'll explore Omri's perspective on managing AI research and innovation. We'll discuss Gong’s approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. Omri will share how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically.

Let’s dive into our conversation with Omri Allouche.


Transcript:

Dave Edwards 0:06
Welcome to Artificiality, where minds meet machines. We founded Artificiality to help people make sense of artificial intelligence. Every week we publish essays, podcasts, and research to help you be smarter about AI.

Dave Edwards 0:29
We're excited to welcome to the podcast Omri Allouche, the VP of Research at Gong, an AI-driven revenue intelligence platform for B2B sales teams. Omri has had a fascinating career journey, with a PhD in computational ecology before moving into the world of AI startups. At Gong, Omri leads research into how AI and machine learning can transform the way sales teams operate. In our conversation today, we'll explore Omri's perspective on managing AI research and innovation. We'll discuss Gong's approach to analyzing sales conversations at scale, and the challenges of building AI systems that sales reps can trust. Omri will share how Gong aims to empower sales professionals by automating mundane tasks so they can focus on building relationships and thinking strategically. Let's dive into our conversation with Omri Allouche.

Dave Edwards 1:21
Could you start off by describing your journey? We're sort of fascinated by it: your PhD in computational ecology, I believe it was, through to where you are now doing AI research for Gong. I imagine there's an interesting story behind that path of how you got from point A to point B.

Omri Allouche 1:42
Yeah, definitely. So I was always intrigued by problems and the various solutions to them. It was all around finding interesting problems, interesting unsolved questions. I actually started in the IDF doing algorithmic work, and then studied cognitive sciences and biology. I was fortunate to complete my undergrad studies in two years, and then joined an ecology lab just as a developer, and got into research. Quite interestingly, we ran into a very interesting phenomenon where we saw that the way those models were created was not correctly evaluated. So I came up with a better way to do the evaluation and published a paper, which got a lot of traction, over 4,000 citations by now. It's one of the top articles in the history of ecology. And then I realized that it's all about not the solution, but the way you frame the problem and how you look at it. We actually studied how to best create nature reserves. In order to do that, you need to understand which species you want to preserve and where they can live. And in order to do that, you need to do modeling, because you can't go to all areas of the country and mark all the species over there, so you start modeling. We actually did machine learning before it was called machine learning; I didn't even realize that I was doing machine learning. It was always: what is the problem, and how do I get to solve it? So I've been around algorithms for most of my life. After completing my PhD, I worked as an algorithm engineer for a big defense company. What intrigued me there was that I was working really hard on big problems that nobody in the world could solve, and for the managers, this was just one line in a very large Gantt chart of all the things that needed to happen for the customer.
I was actually quite frustrated by them promising that I would be able to deliver on things that I didn't know how I was going to solve. They told me, it will be fine, you will solve this with time. And then I started thinking not just about the problems, but about how you manage research. How do you manage innovation? How do you keep people innovating, delivering, and solving things, where you don't wait for the eureka moment and then start working, but you plan and act accordingly? That was fascinating to me. So I ended up founding a startup as a sole founder. Not an experience I would recommend to many others, but a great opportunity for me to make all of the potential mistakes and then add some. For me, it's always been a learning experience. And when I sold the startup and started to look for my next adventure, I talked to various people, and I saw that all of them had kind of become data scientists. This was 2015, 2016; it wasn't a big buzzword yet. I actually asked them what that is, and they said, that's something you've been doing all your life but never called yourself. You look at data, but you're very critical about it; you understand the data, you build the model, you understand the world better with it. The entire scientific training that you had is built into those things. So I joined Gong pretty early on as one of the first employees, and soon after I started managing the data science team here. Gong is an AI-first company, so it's really cool what we do, and cool to be with the company. Up until recently I managed Gong's AI division, and now I'm Gong's chief scientist. But it's always been: what are the very cool problems that we can solve with research, science, and technology?
And how do we do that in a way that is predictable, manageable, and actually delivers a lot of value to customers?

Dave Edwards 6:48
So what are the intriguing problems that you're trying to solve now? As chief scientist, I don't know if that's different from what it was when you were managing the team before, but what are the problems that have captured your attention and imagination today?

Omri Allouche 7:05
Maybe I'll start by just describing briefly what Gong does, to give some context for the conversation. Gong is an AI platform that transforms revenue organizations. There are a lot of companies doing sales online, mostly to other companies, and this involves a lot of human interactions and conversations. We've learned that in order to actually be able to sell, you need to identify the pain of the other side, you need to identify the decision makers, you need to understand how decisions are made and what value you can actually create. And this involves a lot of conversations, a lot of interactions, emails, etcetera. In the past, this was done completely based on opinions. You'd join a company as a salesperson and you wouldn't know what you need to do; they would maybe introduce you to the product, but you would go out and just start having conversations. With Gong, you're able to actually take a sales conversation and understand what's going on there, and how you can improve the way you manage those conversations. So we provide guidance and automation, both to sellers and to leaders: should you talk about your competition, and how? What are the top questions customers ask about? What do they care most about? How can you help your team be better? How do you coach them? How do you mentor them? What are the changes in the market, and how should you respond to them? There's plenty of data around, and what Gong does is help teams make better sense of all the data and collections that they have. So, as an example, we can have a conversation that's an hour long, and with Gong we'd be able to understand who spoke when and whether that's a good balance of conversation, what the topics were in the transcript, did we discuss pricing, did you raise any objections?
Are we actually talking to the right people, the right decision makers, and all of those things, and then we can act based on them. And obviously there's a lot of automation that can be done in all of those processes. We kind of look at it like autonomous driving, with five levels of autonomy. In the beginning, we can just show you things, give you the ability to view the drive, to listen to the call again. Then we're able to identify the interesting elements and tell you, okay, this is a point where there was an interesting discussion; this was a question that you didn't really reply to properly. But then we're able to actually help you navigate the deal better, by understanding where you want to get to and where you are. And obviously, recently, GenAI is completely transforming what can be done in those aspects. At Gong, we have seen this transformation. We have always been an AI-first company, trying to bridge the gap between what can be done and what people actually need, and to understand where there is something that would bring value and can be solved with technology, whether AI or some other way. A lot of the work that I do these days is trying to see how far we can take AI, how we make the best use of its capabilities, and where you can actually count on those things. I can give you one example. We have a feature where you can go into a call, or even a business opportunity, and ask any questions that you want. From a technology standpoint, it's not that difficult, in a sense, because you could do something like that by taking all of the data that you have, putting it into GPT, asking the question, and it will generate a response.
When we do that, we see that in some cases it provides great responses, but in many it does not. People actually ask things that are challenging, like: what do I need to do to win this sales opportunity, to win this deal? If you naively ask GPT how to do that, it fails. But if you're able to identify the important factors for a human, if you're able to teach the model and train the model on what actually happens in successful deals, if you can take the data that we have at Gong from over 4,000 customers on what actually works and what doesn't work, then it can learn. And we can use that incredible training to actually provide a lot of value to customers. So for us, a lot of the research is about understanding the boundaries of what can be done now, what the boundaries will be of what we'll be able to do in a year or so, and how we'll keep leading the way in empowering the various tasks that people do, and how that changes the way business is done.
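The "ask any question about a call" pattern Omri describes, gathering a call's data and putting it into GPT with the question, can be sketched roughly as follows. This is an illustrative sketch only, not Gong's actual code; all names (`CallSegment`, `build_call_prompt`) are hypothetical, and the resulting prompt would be sent to whatever chat-completion API you use.

```python
# Hypothetical sketch of a grounded Q&A prompt over a sales call.
# Names and structure are illustrative, not Gong's implementation.

from dataclasses import dataclass


@dataclass
class CallSegment:
    speaker: str     # who spoke ("Rep", "Buyer", ...)
    start_sec: int   # offset into the call, in seconds
    text: str        # what was said


def build_call_prompt(segments: list[CallSegment], question: str) -> str:
    """Flatten the call transcript into a grounded prompt for an LLM."""
    transcript = "\n".join(
        f"[{s.start_sec}s] {s.speaker}: {s.text}" for s in segments
    )
    return (
        "You are assisting a sales rep. Answer ONLY from the transcript "
        "below; say 'not discussed' if the answer is not in it.\n\n"
        f"Transcript:\n{transcript}\n\nQuestion: {question}"
    )


segments = [
    CallSegment("Rep", 12, "Our plan starts at $500 per month."),
    CallSegment("Buyer", 30, "Is that negotiable for annual contracts?"),
]
prompt = build_call_prompt(segments, "Did we discuss pricing?")
# `prompt` would then be sent to a chat-completion API of your choice.
```

As Omri notes, the hard part is not this plumbing but grounding the model in what actually happens in successful deals, so that harder questions ("what do I need to do to win this deal?") get useful answers.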

Helen Edwards 13:07
It seems to me that there's a core tension in these kinds of products, where you want to support human decision making, but at the same time there is a surveillance aspect: you're gathering data from interactions. How do you support the individual's sense of agency? What's the process of giving people more empowered decision making, as opposed to prescriptive decision making? How do you resolve that? How do you manage that tension?

Omri Allouche 13:45
Yeah, great question. I think it all starts with a basic understanding of the limitations of AI and machine learning. Gong is not there to actually make decisions on your behalf. It's kind of like a flashlight that you can point, and use to ask the questions and understand how you can help people improve. In the very early days, when the company started in 2015, 2016, it wasn't obvious that people would actually want to be recorded. But surprisingly, that's not a limitation. People understand, especially young people these days, that this is actually a tool that can help them improve and be better. You'd prefer to learn what you need to do to improve by being shown in the interaction, rather than learning after a year that you maybe didn't do so well in the things you were trying to achieve. A lot of what we're trying to do is take the side of supporting and empowering, of understanding what you want to do. Not saying, let us take the driving seat, but actually saying, how can we help you? How can we assist you in all of those things? And that's genuinely a good pattern for AI adoption, where you're saying, okay, we will not take it over, because it's not good enough in those cases, but let's at least try to see how we can make a lot of the challenging tasks that you need to do as a salesperson simpler. So if we're having a conversation and I need to take notes about the action items, I'm not as focused as I can be on what you're saying. But if Gong takes those notes and the action items for me, and I know that the call is recorded and summarized for me, then it's much easier for me to focus on what's actually going on in the conversation. And then, you know, if I forgot to send the email, because maybe I just had a follow-up call right after that,
or I had some emergency that I needed to cater to, Gong would remind me to do it. And beyond reminding me, it would even offer to write the email for me. But when doing that, it would always explain why those things were created. So in a sense, it would show me: these are the action items, so this is the email that we suggest you send; you discussed these topics, so this is why we are suggesting that you do that. It's not like we send the email and sign the contract for you. It's always: how would you like us to help you and empower you in those things?

Dave Edwards 16:48
I want to go back to something you said before, which is that part of your role is considering what's possible today and what's possible in a year, or some longer period of time; I can't remember exactly what the different horizons were. But I'm curious what your answer to that is: what is possible today? The world for Gong has changed quite a lot in terms of the technical possibilities in the AI space, right? When Gong was started, generative AI wasn't a thing. I mean, the transformer paper, I'd guess, had been published; I can't remember, maybe 2017, so not long after. But the world of AI has changed dramatically since you started, and so the possibilities of what you can use AI for have expanded. I'm really also curious what you think in terms of looking forward. One of the things that's been sort of pervasive in our world, being involved in the AI space for quite some time, is that the speed of change feels to be accelerating. And I would imagine that makes your role of trying to figure out what will be possible that much more difficult.

Omri Allouche 18:07
In a sense, yes, but I can also argue that it actually makes it simpler, because I can think of the craziest idea and then it will happen within a surprisingly short space of time. A lot has changed. I think one of the topics that is sometimes overlooked is not the AI capabilities, but what humans expect the AI to do, and how willing people are to give AI actual control; we briefly touched on that previously. In the beginning, there was a lot of hesitation about how you actually make recommendations with AI. You need to explain them, you need to be very obvious about them being based on specific data, you need to let people decide if they want to see them or not. Now it's quite amazing how fast people are saying: you guys have AI, right? Why do I need to do that myself? Can we let AI take care of all of those things for me? It's moving much faster than I would have anticipated. And people know that it's not perfect, but they're judging it based on: do I want to spend my time doing those things, or do I prefer to let AI do it? So in many cases people are willing to let AI be the driver, and to be the co-pilot themselves, instead of the other way around. A lot of times it's not: I would write the code and you would assist me. It's: you would write the code, and I would give you guidelines and help you form it. Which is quite interesting. At a higher level, we're definitely seeing with GenAI a movement from a community of creators to curators, where we used to be the creators, ourselves doing the creative work and writing all of those things, and now it's more like: AI generated these three different versions for me, and I'm curating, choosing how I stitch them together into something that I can use, or making small edits. So that's definitely something that we're seeing.
And we'll try to see how far we can go with that, and how to best do it, because when you build a product, a lot of it is about gaining the user's trust. You need to be very open about what you can and can't do. I think many companies get this wrong: they'll say, oh, our AI is so smart, it can do this and that, and you see a beautiful demo, but then you actually experiment with the product and it's not living up to the promise. What actually works better is being very open about what the AI can do, what it cannot do, and what you think it will be able to do. Then people accept that and learn how to make the best use of it.

Helen Edwards 21:14
I'm curious, because you have such an interesting background, kind of straddling both humans and machines and other living systems. Take this example of moving from creation to curation. I'm going to make a big assumption here, which is that for people who are learning, coming up from more junior positions and developing their expertise, the creation step is actually a precursor to being a good curator. If you don't actually do the creation, you don't learn and build an intuition for what you're creating and how you create things. How good can you be as a curator? Does it make you over-trust the AI? What are you seeing empirically in that sort of situation?

Omri Allouche 22:17
It's a great point. I think that, generally, this is what really sets us apart as humans, still. We're able not just to learn from different examples and come up with something that's unique but traces back to things that were done; we can come up with something completely way off, kind of like Picasso, or the Fosbury flop in the high jump, which is completely different, you don't really see that otherwise. But I think that in many of those cases, we as humans have limited ability to experiment, and we're trying to see how we bring our own touch into things. So it really helps a lot if I need to write an email and I get some ideas. Say I know I want to make a reference to how winning a blowout sales deal is like winning the Super Bowl. Given the limited time that I have, I would probably sit on that for five minutes, not have a good image, and move on to try something else. But GenAI allows us to come up with ideas so quickly, and test them and try them and see how they work, that it really changes the process. Now, do I need to be able to write some of those things myself? I totally agree with that. But a lot of that, you know, if I want to become a writer, then I need to do two things. One is to write a lot, and the other is to read a lot. By doing both of those together, always learning from the other side, internalizing what I read from others, understanding the difficulties in my own writing and seeing how others solved them, I'm able to learn. So I think that with GenAI we can make this work better for both of those steps simultaneously.

Dave Edwards 24:38
You talked about trust, which is one of our obsessions: understanding how we trust or don't trust machines, and especially generative AI. It's top of mind now because everybody's heard about hallucinations and the level of error rate that is part of the system itself. If it's a predictive system, it operates on probabilities; that's the way these things work. You're operating in a world where the individual needs to figure out a level of trust in the system when the system says, hey, you might want to try this in order to close the deal. At the organizational level, the overall customer has to be able to trust that the advice being given to the individual is going to be accurate enough, and/or that the person is going to be skeptical enough to find the error, so that we don't run into more problems like the one in the news right now. Air Canada got into trouble because they had a chatbot that gave a traveler bad advice about how to get compensated for bereavement travel. It apparently gave some advice that someone screenshotted, telling him that he could get refunded as long as he did X, Y, Z, which was out of policy and not accurate. But then the small claims court said: you're obligated, because this is what the tool said to the person. So you're in this world where the individual wants to make the deal for their own accomplishment, the organization wants to make sure they make the deal, and they want to make sure they're doing it all in a way that's responsible, that the deal is actually what you want the deal to be. So you're dealing with a very high requirement for trust.
And I'm curious how you think about that, from a technical perspective of how you think about what these tools are generating, through to how you design and present the recommendations to individuals so that they know the accurate level of trust to give the system.

Omri Allouche 26:47
Sure. I can attest that it's a major challenge, and I think that's one of the places where Gong excels: gaining the trust, and actually letting people be confident enough in trusting the AI. In order to do that, we usually start with being really open about what the AI does and how well it performs. There's a lot of work that we do internally to ensure the quality. It's very easy to feed a call into GPT, ask the model to summarize the call for you, and give it to people, and it will read beautifully. But if you were on the call and you know what you care about, you will start spotting places where it hasn't really been accurate. Internally, we spend a lot of effort making sure it meets our high bar of quality, so people can actually trust it. If you look at scientific papers, there is a number over there for how accurate the model was, usually on a scale from zero to 100. In the industry, what actually matters more is how embarrassing your mistakes are. A model at 70 can be better than a model at 85, because it's less confident about things that you as a human would not be sure about, and it's not making embarrassing mistakes. So we spend a lot of effort understanding where it's important for the model to be right. And where the model is not going to be sure, in many cases you can choose how you want this to play. We give you the opportunity, for example, to only get the next steps that we're 100% sure are next steps and that you need to follow up on, or to get more items that can have some errors in them, if it's important to you that we cover all of the cases. So by even giving you this simple decision, you're more in control, and we can better fit your use case and what you care about.
And it matters how you actually look at those things and measure them whenever we work on a certain model at Gong. At Gong we have a pretty unique and cool feature called Smart Trackers, where we completely democratize NLP text classification and say: you know what you care about, so why don't you be the data scientist and build the model? We'll show you some examples from actual conversations; you let us know whether this is something that you want the model to capture or not, so you can label the data. And after a few rounds of that, we actually build a machine learning model for you. But when we do that, we need to be very open about how good your model is, and we actually show you real-world results of the model. So you can decide: am I happy with what I get, do I trust the model right now, do I want to continue training the model and improving it, or, you know, maybe it's not really getting to where I want. So I think a lot of transparency is key to doing those things, and being pretty open about how they work out.
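The precision-versus-coverage choice Omri describes a moment earlier, either only next steps the model is nearly certain about, or broader coverage that may contain some errors, boils down to a confidence threshold the user controls. A minimal illustrative sketch (not Gong's code; names and numbers are made up):

```python
# Illustrative sketch of user-controlled confidence filtering.
# `predictions` is a list of (text, confidence) pairs, confidence in [0, 1].

def filter_action_items(predictions, min_confidence):
    """Keep predicted next steps whose confidence meets the bar.

    A high bar (e.g. 0.95) favors precision: fewer, safer suggestions.
    A low bar (e.g. 0.50) favors recall: full coverage, some mistakes.
    """
    return [text for text, conf in predictions if conf >= min_confidence]


predictions = [
    ("Send pricing deck by Friday", 0.97),
    ("Loop in the security team", 0.81),
    ("Schedule follow-up demo", 0.55),
]

strict = filter_action_items(predictions, 0.95)  # precision-first view
broad = filter_action_items(predictions, 0.50)   # coverage-first view
```

Exposing `min_confidence` as a user choice is exactly the "simple decision" he mentions: the model stays the same, but each customer picks where on the precision/recall curve its output should sit.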

Helen Edwards 30:29
We think a lot about that issue of embarrassment. We talk about it a lot when we're running workshops and whatnot, which is to focus on that as the measure. Because, one, people understand what it is, and two, you never forget it once you've been embarrassed by making a mistake off the back of ChatGPT. And it is a better way to think about reliability and trust: you back off the accuracy of the model just to make sure that the human knows to step in. I think that's an obvious place to go if you're thinking in sort of five levels of autonomy, if you're thinking about the way a self-driving car works. You want to make sure that the human knows, and is able and prepared, to step in. So driver assist, if you like, being more reliable. When you think about how this evolves, and what the next sort of step or breakthrough is, do you think about it more as cause-and-effect reasoning support? Do you think about it more as personal growth and coaching? Do you think about development of expertise? What are the ways that you conceptualize how AI can make us better thinkers and better decision makers?

Omri Allouche 32:14
I think all of the above. But I think one of the key things people want is to focus on the things that we as humans are good at, and take away a lot of the mundane tasks. The interesting thing is that the definition of what's mundane is being expanded, because GenAI can do so much these days. In the beginning, what usually worked for me was trying to find the correlation between the LLM's level and the age of people with the same mental capabilities. When GPT-3 was out, it was around my three-and-a-half-year-old kid at the time. Right now, I would say that in many cases it's like college educated. Sometimes it's very, very smart, but it doesn't know enough about your own domain, your own problem, what's good and what's bad for you, and how you actually work around that. So we're always trying to see what AI can do, based on an analysis of what your day looks like. What do you do? We actually sit down, and this is standard product practice, to understand the pains of the customers. What do they do? Why do they do that? When do they send the email? Why do they send the email? How do they write the email, and what's in there? Then we try to understand if AI can replace that. We see that in many cases, what humans need to do is be the strategist and decide: okay, who am I actually going to talk to? How do I get them to do all of those things? How do I work through those complex human interactions? And how do I actually build the rapport and connection? The interesting thing about sales, in the domain called inside sales, where you communicate over Zoom across a large number of conversations, is that people don't buy features and don't buy products.
They buy the trust and the personal relationship. They buy from a company that they trust will provide them with real value, will build what they need; they trust the person on the other side. There was Gong research, for example, that did an analysis of over 1 million sales calls and found that when people curse, the likelihood of closing the deal increases. And obviously, if you're thinking about cause and effect, I'm not saying that the next time you go on a sales call you need to curse. It actually works the other way around: when you feel comfortable in a conversation with a prospect, the deal is more likely to close, and you're more likely to curse, because you feel there's a friendship-like relationship there. And it's interesting to look at those things. By the way, one of the early analyses at Gong was actually looking at how women and men sell. There were, you know, not very significant differences. Women actually sell better than men, but they don't really follow the rules of sales that much, because what the studies showed is that women actually listen better to the other side and are more focused on understanding the pain and the needs, and then they can cater to those when they're talking about the solution. It's not just a list of the features, but actually how we can solve the problems and pains of the other side. But I think those things are more kind of popular science that we just picked here and there, and not something that you would see across the board.

Dave Edwards 36:42
How do you find analyzing, beyond gender, cultural differences in different regions or communities, different languages? First of all, I would imagine that there are differences in terms of effective selling techniques based on different cultures and languages. Tell me if I'm wrong there, if I'm off base. But if I'm right, how do you work with that in your research, and how do you think about providing the right answer to an individual based on their cultural background and, you know, the person they're talking to and their background?

Omri Allouche 37:24
Yeah, it's super interesting. Quite surprisingly to me, there aren't a lot of large cultural differences in many of those aspects. I think a large part of this is that I need to bond with you, we need to build a connection. But in all of those cases, it's not about the social and cultural ceremonies; it's more about actually connecting, and me understanding how I can help you, and us working together to solve those things. So for example, I'm Israeli, and, well, you know, we shoot straight, not a lot of sugarcoating, all of those things. My wife is very happy that we have GPT now: she sends an email to customers in North America, she writes the basic email, and then it rewrites it in more polite business prose. But still, what is important in all of those things is not the small stuff, like whether something is "interesting" or not. In many conversations, when we have a sales call, if the customer is saying, oh, that's very interesting, they're not going to buy. Because if you're going to spend your money, you want to know that it's going to work for your specific use cases, with all the difficulties that you have, so you're going to ask difficult questions, and you want to make sure of all of those things. So we do see that people are, you know, nicer in some cultures, more polite, and the general sentiment is more positive. But in order to actually close the deal, you need the same things: you need to identify the pain, you need to understand who's the decision maker, you need to understand how those things work. There are small differences. In France, for example, a director is more senior than a VP, so if you're building a system that understands who you need to talk to, you need to take this into account. But it was quite surprising for me, and we see this at Gong.
We actually have managers who manage people who have conversations in languages that the managers don't understand. They use Gong to get those calls recorded and transcribed, and then they can ask questions about how the call went, or get a summary of the call in English when the call was in a completely different language. And they can actually work with those people to understand how to win the deal and how to navigate their own challenges. So it's actually quite a surprise that there isn't a lot of cultural difference in the way we see conversations being held.

Dave Edwards 40:23
Thanks so much for joining us. This has been a great conversation and we're really glad you joined us.

Transcribed by https://otter.ai


If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email

Learn about our book Make Better Decisions and buy it on Amazon

Thanks to Jonathan Coulton for our music
