My name is Andrew Hartnett, and I run R&D and engineering here at One Identity.
AI is currently used in several of One Identity's products. What's very interesting, and something most of our customers don't know, is that we have been using models in the background to do things like risk scoring and diagnosing issues, and that has been at the core of several of our products for years now. So this is not new to us. When I think about how users are going to come in and consume our products, I see them coming in through API, through UI, and through AI.
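As an illustration of the background risk scoring mentioned here, a minimal sketch might combine a handful of signals about an access request into a single score with a logistic function. The feature names, weights, and bias below are invented for this example; they are not One Identity's actual model.

```python
import math

# Hypothetical signal weights for an access-risk model.
# These values are illustrative only, not a real product's model.
WEIGHTS = {
    "off_hours_login": 1.4,    # request made outside working hours
    "new_device": 1.1,         # device not previously seen for this user
    "privileged_target": 0.9,  # target system holds elevated rights
    "failed_attempts": 0.6,    # recent failed authentications (count)
}
BIAS = -3.0  # baseline assumption: most requests are low risk

def risk_score(signals: dict) -> float:
    """Map raw signals to a 0..1 risk score via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * float(v) for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

# A routine request scores low; a request with several risky signals scores high.
low = risk_score({"off_hours_login": 0, "new_device": 0,
                  "privileged_target": 0, "failed_attempts": 0})
high = risk_score({"off_hours_login": 1, "new_device": 1,
                   "privileged_target": 1, "failed_attempts": 3})
```

In practice such scores feed downstream policy, for example stepping up authentication above a threshold rather than blocking outright.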
Our approach to AI is slightly different. While we did a lot of machine learning in the background, doing what I call predictive AI, we didn't come out and bet the bank on it, if you will.
I'm Brian Chappell, and I'm the VP of product management here at One Identity, leading the product function across the globe. Across the portfolio we have at One Identity, AI has had a long and distinguished history with us; for probably more than six or seven years, we've had AI in many of our products. We use it in our Privileged Access Management product for user-behavior analytics. AI is perfectly positioned to look at the many, many data points that can be gathered while somebody is actually interacting with the system, and we can do really cool things. We watch how you type on the keyboard: the gaps between the key presses, how you type words, how you carry out the interaction while you're typing. That gives us an enormous amount of information, and we can ask, is it still that individual on the end of the keyboard?
We do exactly the same thing with your mouse if it's a graphical interface. We can watch how you use the mouse and how accurately you click on things to build a picture, so that throughout an interaction we can be confident it's still the same individual at that connection. If it's not, or if it changes halfway through, we've got a really good signal to help us identify that something untoward is going on.
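The keystroke-rhythm idea described above can be sketched very simply. Assuming we have a per-user baseline of inter-key gaps in milliseconds (the data here is made up), one toy check flags a session whose typing rhythm drifts far from that baseline; the real product's models are far richer than this.

```python
import statistics

def typing_anomaly(baseline_gaps, session_gaps, z_threshold=3.0):
    """Return True if the session's mean inter-key gap is more than
    z_threshold standard deviations from the user's baseline mean.
    A toy illustration of keystroke dynamics, not a product algorithm."""
    mu = statistics.mean(baseline_gaps)
    sigma = statistics.stdev(baseline_gaps)
    z = abs(statistics.mean(session_gaps) - mu) / sigma
    return z > z_threshold

# Invented baseline: this user's typical gaps cluster around 120 ms.
baseline = [110, 120, 115, 130, 125, 118, 122, 128, 116, 121]
same_user = [119, 124, 113, 127, 120]  # rhythm matches the baseline
different = [60, 55, 70, 65, 58]       # a much faster typist took over
```

A single mean-gap statistic is deliberately crude; production systems would look at per-digraph timings, dwell times, and many other features, continuously over the session.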
But as far as our future goes, the sky is the limit. So many different applications are coming for AI. Most people are very commonly aware of generative AI, the ChatGPTs of the world, but we forget about things like predictive AI and causal AI, which can give us much better visibility across the information being gathered by the system and use it to provide actual help to our customers: real value that lets them make good decisions very quickly. Given that we are in the cybersecurity space, when something bad is happening, you need to make good decisions quickly. The more AI can help us pull that information to the surface, the better it is for our customers.
The thing AI is absolutely fantastic at is looking across a billion or a trillion pieces of data, remembering every single one of them, and performing analysis across all of it: drawing inferences, spotting patterns. That's where the real value comes. It's an augmentation to the person who's using it; it gives you a much bigger set of processing capabilities to reach those end points.
It can also look at how other people have dealt with these scenarios and link those things together. Certainly, when we're thinking about all the events happening in your environment, being able to separate unusual activity from normal activity, especially where you don't have great controls around it already, is the kind of thing our portfolio really helps you with. Where you don't have those controls, it very quickly allows you to zero in on the abnormal activity so that you can take quick and appropriate action.
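Separating unusual from normal activity can be illustrated with a deliberately simple baseline approach: count how often each user performs each action in historical logs, then flag actions a user has rarely or never performed. The data, helper names, and threshold below are all invented for this sketch, not the portfolio's actual algorithm.

```python
from collections import Counter

def build_baseline(events):
    """events: iterable of (user, action) pairs from historical logs.
    Returns a Counter of how often each user performed each action."""
    return Counter(events)

def is_abnormal(baseline, user, action, min_count=2):
    """Flag an action as abnormal for a user if it appears fewer than
    min_count times in that user's history (illustrative threshold)."""
    return baseline[(user, action)] < min_count

# Invented history: alice routinely logs in and reads reports;
# bob routinely exports the database.
history = [
    ("alice", "read_report"), ("alice", "read_report"),
    ("alice", "login"), ("alice", "login"), ("alice", "login"),
    ("bob", "export_database"), ("bob", "export_database"),
]
baseline = build_baseline(history)
```

With this baseline, `is_abnormal(baseline, "alice", "export_database")` flags alice exporting the database as unusual, while the same action from bob looks routine; real systems layer time-of-day, peer-group, and sequence context on top of simple counts like these.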
We saw the same pattern with things like blockchain earlier. Then AI came along, and certainly the large language models, the ChatGPTs and Geminis of this world, have taken the headlines. Lots of organizations have gone headlong into those, and in a lot of cases they're just doing the same thing over and over again. It's about having that phrase in your marketing materials, being able to say, "We have AI."
At One Identity, I like to think we are much more considered. We see this as a technology we can use to provide extra value to our customers, so we want to make sure that wherever we apply it, it really does deliver additional value.
We've chosen to focus on the areas where it genuinely helps you make better decisions. It gives you the opportunity to explore the data around your environment in more interesting ways: to ask natural language questions about the huge data warehouse you probably have behind your implementations of our portfolio. That's where we really apply the AI. You get to conclusions really, really quickly, and that's time to value for you; that's time to resolution of issues. Any time AI can shorten the work of moving forward in your role, it helps, and those are the real places where we want to invest our time.
In terms of being considered in our use of AI, I think there are two sides. One is the correct application of it, so that it delivers good value; the other is the fear around the use of AI. A lot of that fear focuses on the concern that I'm going to feed my information into this AI system, it's going to learn from it and use it as the basis for some of its responses, and perhaps somebody will then be able to extract that information back out.
I think the truth of the matter is that, in reality, the chance of the AI reproducing your information in any usable form is actually very, very small. But being aware of that, and thinking the problem through to its end point, means we understand what it means to take a customer's data into these systems: it's vitally important that we anonymize where appropriate, and it's an opt-in situation, not an opt-out situation. Understanding the problem means we can be very focused on making sure we're only taking the right data, using it in the right ways, and always with the full permission of our customers.
So in terms of balancing how we're using AI in our portfolio with that responsibility, I think the consideration I talked about is the key. We're not running headlong into AI because it's the latest, greatest thing; we see it very much as just one of the tools we use in building our products.
And just the same as when we're writing code, building a database to back it, or building some new service that works with it, these are all things that augment the solution as a whole, so it's important that we use them in the right places. AI has a set of skills it's absolutely fantastic at, but there are other areas where it could be applied where, actually, there are simpler, more predictable, cheaper solutions. So it's important for us to always ask, at every stage, "Is this the right technology for the problem we're trying to solve?" and to be very objective about that.
And that's why we may not have been the most vocal in the AI space, despite the fact that we've had it in our products for years. But customers now want to know when AI is being used so they can see it's being done responsibly, so we're definitely making it very clear how we're using AI within our products. We're even talking about how we're using AI within our organization, so that customers can feel comfortable that we've really thought about the problem, we've come up with the best possible solution, and it's going to provide them benefits without increasing the risk profile in their environment.