Subscribe: iTunes | Spotify | Stitcher | TuneIn | RSS
85% of AI and machine learning projects will fail to deliver. That’s according to a Gartner study projecting AI adoption and success through 2022.
It’s a brow-furrowing percentage. And even though the 2022 data aren’t in yet, we know that most companies playing in the AI space claim their products are *easy*—easy to learn, to use, and (most importantly) to realize amazing results from. But is that true?
In Aible’s case, all signs point to yes.
In this conversation, Intel sat down with the founder and CEO of Aible to discuss the company’s empirical success in helping its customers achieve AI value in less than 30 days, including:
- Its embrace of failure (and failing fast)
- Redefining what “easy to use” actually means
- Holding a head-to-head contest—Aible product-trained high school kids versus bona fide data scientists—where the kids won.
They also discuss AI bias, its future, and how it has the power to transform how we work and live.
Featured Software
Additional Resources
- [Blog] 30 Days to AI Value: Development Best Practices from Intel and Aible
- Case Study Videos: How to guarantee impact from AI in 30 days
Tony [00:00:04] Welcome to Code Together, a podcast for developers by developers where we discuss technology and trends in industry.
Tony [00:00:11] I'm your host, Tony Mongkolsmai.
Tony [00:00:17] Recently, we've talked a lot about the cool technology experiences built on modern AI solutions. One of the interesting challenges for businesses is how to harness the power of AI. According to Gartner, 85% of AI and machine learning projects fail to deliver, and only 53% of projects make it from prototype to production. Another study concluded that a mere 10% of organizations achieve significant financial benefits. Aible is an enterprise AI solution that is solving this problem, guaranteeing business impact in just one month. Today, we are lucky enough to be joined by Arijit Sengupta, the founder and CEO of Aible. Arijit is the former founder and CEO of BeyondCore, a market-leading automated analytics solution that is now part of Salesforce.com. Arijit co-created and co-instructed an AI course in the MBA program of Harvard Business School as an executive fellow. He has been granted over 20 patents. Welcome to the podcast, Arijit.
Arijit [00:01:15] Thank you, Tony. Really appreciate the opportunity.
Tony [00:01:18] So the really exciting thing here is that we always talk about how amazing AI will be for various industries. But the real challenge we've seen is that it can be very hard for people to take their ideas about how to utilize AI and actually get to the point where they're getting that business impact. That's precisely why you created Aible, so why don't you tell us what led you to come up with the solution?
Arijit [00:01:44] I actually ended up writing a book called AI Is a Waste of Money, where I took a thousand projects I had done by that point in time and just thought through everything I had done wrong. This was right after I was leaving Salesforce and right before I was starting Aible, and it was a good time.
Arijit [00:02:01] I basically took a sabbatical to really think through what was going wrong. And one of the interesting insights I got was that it had nothing to do with tooling. The existing process was fundamentally broken, so just adding new tools to a broken process doesn't solve anything. What typically happens is a business user says, I want to do X. A data scientist runs around trying to find the data to enable X, but that data may not even be available.
Arijit [00:02:30] So you starting with somebody saying, I wonder flying car from Back to the future. Well, however much effort you put in, you're not going to be able to give them that. But maybe they would have been perfectly okay with the car that drives at 50 miles an hour or a board that goes at 40 miles an hour. You don't know because you started from, I wonder, flying car. Right. And the other thing that happens is, you know, once you get to this predictive model, you're talking about this language of precision, recall, log loss, all that stuff. And people just want to know how it's going to affect their business. They don't care. And this is one of the things, as developers we've always heard, it's doesn't matter how cool your code is, if nobody will use the program, who cares? Right. You wrote this beautifully elegant code, but it doesn't create any value.
Arijit [00:03:17] So that's what we focused in on. We basically said, if we're going to create value here, we have to start from the business KPI a business user cares about affecting, then go look at the data. There might be many, many different ways to affect that KPI. Let's say you want to increase your revenue: you can achieve that by reducing customer churn, or you can improve it by improving your partner sales. There are many different ways of increasing revenue.
Arijit [00:03:48] So instead of saying, I'm going to do lead scoring, you say, I'm going to increase revenue; let's go look at my data, try five, six, seven, eight different data sets, find the use case that can deliver that impact, and then explain the value of the AI in the language of that business use case and the business outcome: this is how much your revenue will increase, and this is how much more you'll have to spend in marketing, in partnerships, etc. So that becomes actionable and useful. But that was not how the AI systems were set up. They were set up for business users to throw it over a wall to a data scientist, the data scientist to make a model and throw it over the wall to IT, and IT to then say, well, maybe I can get it working, maybe I can't, and throw it back to the business user. You can't do it without collaboration across personas.
Tony [00:04:35] So you're really talking about bridging the gap between what somebody actually wants to solve and making sure that the data scientist understands that. So rather than somebody saying, I need a model that does X, what you're saying is you help, and I'll say this in the funny Office Space way, you know how to talk between the person who is the customer and the person who needs to do the implementation, it sounds like.
Arijit [00:05:00] Yeah. One of the funniest things that I would see happen is there are organizations right now who are trying to teach their business users data science so that they can come up with better use cases. I'm like, what a useless thing to do, right? That business user is going to forget everything from the data science class five days after they got out of the class. Right? That's not their day job. Why are you teaching people to speak AI? We should be teaching the AI to speak people. Right? And that's what we said: the ecosystem should be able to ask a salesperson questions in the language of sales, a marketing person in the language of marketing, and then adjust itself to make the biggest economic impact. Why should I have to teach a marketer how to think about AI use cases?
Tony [00:05:48] That's a great point. So then what do I actually get when I work with Aible? How do you actually bridge that gap? You're talking about it in a way that totally makes sense. But as a user, how do you guys go in and actually make this happen?
Arijit [00:06:03] So we broke it into three distinct products. The first is called Sense, which is an augmented data engineering product. What are you doing there? I've got 500 tables in Oracle, or I've got 200 tables in Snowflake, and I don't know which of these datasets are worth analyzing. What Sense will do is very quickly go through all of the data sets in a matter of minutes, scan them, and build a small model on each of them to figure out: is this worth analyzing? Is this worth doing prediction projects out of? It will do feature engineering. It comes back and says, these data sets are worth working on; these ones are not. What you just did there is you failed fast. Out of those 200 datasets, maybe only five are worth pursuing, but you didn't waste time on the 200. You had an automated system bring you down to the five.
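Aible hasn't published Sense's internals, but the fail-fast triage Arijit describes (scan every table, fit a small throwaway model on each, keep only the ones that show signal) can be sketched in a few lines of Python with scikit-learn. The table dictionary, the target column, and the 0.65 AUC cutoff below are illustrative assumptions, not Aible's actual implementation:

```python
# Hypothetical sketch of the "fail fast" triage idea: fit a quick,
# cheap model on a sample of each candidate table and keep only the
# tables that show predictive signal.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def quick_signal_score(df: pd.DataFrame, target: str, sample_rows: int = 5000) -> float:
    """Cross-validated AUC of a small, fast model on a sample of the table."""
    sample = df.sample(min(sample_rows, len(df)), random_state=0)
    X = pd.get_dummies(sample.drop(columns=[target]))  # naive feature encoding
    y = sample[target]
    model = HistGradientBoostingClassifier(max_iter=50)  # deliberately small
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

def triage(tables: dict[str, pd.DataFrame], target: str, cutoff: float = 0.65) -> list[str]:
    """Return only the tables worth pursuing; fail fast on the rest."""
    return [name for name, df in tables.items()
            if quick_signal_score(df, target) >= cutoff]
```

With 200 tables, a loop like this discards the hopeless ones in minutes of compute instead of months of data-science effort.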
Arijit [00:06:49] And then we have something called Explore, which is augmented analytics, because, you know, the dirty secret of data science is that you can't do data cleansing without involving the business user, and business users don't want to help you clean data. So what we did is we said, how can we make that attractive to the business user? And we got inspired by something like Sid Meier's Alpha Centauri or Civilization. If you've played those, you're basically on this hexagonal map, and these advisors are telling you to go in this direction to find more drill-down charts, in another direction to find more group-by charts, in some other direction for statistical associations. And essentially, with the help of multiple people, you can explore that area and its patterns, and some of the patterns the business user will flag as a data quality problem. Like, hey, these are refunds, they're not purchases; they should not have been in the data set. It's much easier for a business user to spot that, once the AI surfaces the pattern, and to say this is a data quality problem.
Arijit [00:07:46] Another way of thinking about it is they're playing Jeopardy! The AI has given them the best answers in the data, but the AI doesn't tell you the question it was trying to answer. It's just saying, hey, this is a really useful, interesting answer. And the human says, this answer is useful, this answer is not, and this answer is just a data quality problem that we figured out. So we use Explore to further define the use case. Because we analyzed five data sets using Sense, we now go into those five data sets and find the use case. This is the moment where we say, hey, we are actually going to do a customer retention project instead of a lead scoring project.
Arijit [00:08:24] The third part of the product is called Aible Optimize. And what Optimize does is, of course, standard data science and machine learning. Of course it's doing AutoML, of course it's generating models, but the big difference is it starts out by asking the business user: what is the benefit of a correct prediction? What's the cost of an incorrect prediction? What kind of capacity do you have to actually act upon these predictions? And then it designs the AI to make them money. One of the fundamental things we have found is that a model can have a really bad log loss but actually be very, very profitable. As a very simple example:
Arijit [00:08:59] Imagine the benefit of a sale is $1,000 and the cost of pursuing a customer who doesn't buy is just $1. You did a quick call and you hung up, right? You would want a very aggressive AI, you actually don't want a very accurate AI because you're willing to take 999 misses to get that one win. Now, I tell you that you have only two salespeople and you have 2 million potential customers. Now, you actually don't want that aggressive an AI because you can't make that many calls. Now, the things I just told you, AI doesn't consider today. A traditional machine learning system doesn't consider cost benefits and doesn't consider capacity constraints. But in that simple example, you can see how important that is. And that's why 80%, 90% of the AI projects don't get you the economic value because they're not starting from and focused on delivering economic value. And that's what we did.
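The arithmetic behind that example is easy to make concrete. Here is a minimal sketch using the numbers from the conversation (a $1,000 benefit, a $1 cost, and a fixed sales capacity); the expected-value threshold and the ranking logic illustrate the idea, and are not Aible's actual loss function:

```python
import numpy as np

BENEFIT = 1000.0  # value of a won sale (from the example)
COST = 1.0        # cost of one unsuccessful call (from the example)

# Expected value of calling a prospect with predicted win probability p:
#   EV(p) = p * BENEFIT - (1 - p) * COST
# Break-even: p* = COST / (BENEFIT + COST) ~= 0.001, i.e. an extremely
# aggressive cutoff, nothing like the usual 0.5 decision threshold.
break_even = COST / (BENEFIT + COST)

def prospects_to_call(pred_probs: np.ndarray, capacity: int) -> np.ndarray:
    """Pick who to call: positive expected value, limited by capacity."""
    ev = pred_probs * BENEFIT - (1 - pred_probs) * COST
    worthwhile = np.where(ev > 0)[0]
    # Capacity constraint: two salespeople can't call 2 million people,
    # so keep only the top-`capacity` prospects by expected value.
    ranked = worthwhile[np.argsort(ev[worthwhile])[::-1]]
    return ranked[:capacity]

# e.g. prospects_to_call(model.predict_proba(X)[:, 1], capacity=200)
```

With these numbers, the break-even probability is about 0.1%, which is exactly why a model tuned only for accuracy or log loss can leave money on the table.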
Tony [00:09:49] Yeah, that's actually very, very interesting. As I sit here and think about it: I work with a lot of teams that do product management as we at Intel try to build up our software capabilities, and it's very much the same thing when we ask engineers to do something, AI or engineering. We say go build us X, and the engineers go build the best X they can build. But what you're saying is, sometimes I don't need that; what I really want is this outcome. The model is there to do the job, and what you help us do is focus on what the job is.
Arijit [00:10:22] And by the way, interestingly enough, I studied under Clay Christensen at the Harvard Business School. In fact, he was my mentor in the Edwards business talent contest when I built my first company, BeyondCore. Clay has this very important concept called Jobs to Be Done, and I think engineers really need to pay attention to it, because you're not building technology for the sake of technology, you're building technology for the job to be done. So how can we ever say, give us the data and get out of our way, we'll make the AI? Because then you're not focusing on the job to be done.
Arijit [00:10:52] And one of the important things we found in this process is that we were also de-risking the projects by bringing the failure points earlier in the process and automating all the steps up to the failure point. The most common reason AI projects fail is that the data is not there to sustain them. Well, here you're figuring that out in the first few minutes, in a completely automated way.
Arijit [00:11:15] The next failure point is business stakeholders don't act upon the insights. Well, you're figuring that out pretty early in the process because the business stakeholder was intimately part of this process. Or they don't trust the model. Well, they're intimately part of the exploration that led to the model, right? All you're doing is you're taking your failure points and bringing them to the beginning of the process. So even if you fail, you've lost an hour or two and it's not a catastrophe. Just move on to the next one.
Tony [00:11:40] Wow, that's really smart. So everyone says that their AI flow has significant business impact. Can you talk about what proof you have? What are your proof points that show that bringing these failure points to the front actually makes a difference in building the correct AI flow for a business?
Arijit [00:12:03] Yeah. One of the problems we ran into was that people just didn't believe we were doing this. So we built our company on this principle. We first went in and said: if we don't create value for you in 30 days, you don't pay us. And the customers would say, what do you mean by create value? I'm like, whatever you define to be value; you decide. If you decide we didn't create value in 30 days, don't pay us. And they're like, well, you mean I have to give you the money and then you refund it to me? I'm like, no, don't pay us; we will not invoice you until 30 days in. But even then, Intel heard about the story, this was through the Disruptor program, and an offer was made to us: well, why don't you do it 25 times?
Arijit [00:12:42] So we have now published 24 of those 25 case studies, where in each case the customer saw value in less than 30 days, and often in five days. And for each project we actually put out a timeline of what was done in what order, and you'll see the actual work was much faster than 30 days. Often we were just waiting around for meetings.
Arijit [00:13:03] The other way of coming at this is what we did even before we released our product, even before we went GA. Everybody says their product is easy to use, so I'm like, how do we define easy to use? UC Berkeley was having their AI Summit, and we went and ran a contest there. We said, we're going to take a bunch of high school kids, history majors, and MBAs, give them two hours of training on Aible, and put them up against expert data scientists with their favorite tools.
Arijit [00:13:30] The first time we did it, the high school kids beat every data scientist. So the data scientists got pissed off. They said, well, you've got to give us like five days. Great, here's five days. After five days, four out of 11 data scientists beat the high school kids, but remember, that means the high school kids still beat seven out of 11 expert data scientists. And these are experts; they're at Berkeley, they're really good, really smart people.
Arijit [00:13:55] The second year we did this, the best data scientist had a better log loss by 20%; they had done significantly better than the high schoolers. But the high schoolers still beat them on the thing that mattered, which was the business impact of the project. It was a healthcare case where you're trying to decide which patient to discharge. We told them the cost to the organization of keeping a patient at the hospital unnecessarily, and the cost if you discharge the patient and they're readmitted, and we told them how many hospital beds there were in the hospital, right? So the high schoolers beat the data scientists on economic value even though the data scientists did better on log loss, and that trend has continued. That's what we mean by easy to use and getting to value. The proof is how many times we have done this over and over again. The likes of UnitedHealth Group and Cisco publicly talked about this at the Gartner Summit earlier this year. So people are beginning to figure out that this stuff is actually real and it's reproducible.
Tony [00:14:56] I'm just curious, when the data scientists lost the competition to the high schoolers, did they actually then go and look at your systems and see what the high schoolers were doing? Were they curious about how the high schoolers were beating them?
Arijit [00:15:10] Yeah, no, no, it was really cool, because the first time around we actually told them how we did it. See, our goal was, we didn't know how good Aible was. We were not GA yet; my team was sweating bullets when we went to Berkeley and did this. They're like, why are you putting us in this embarrassing position? Because if this blows up, everyone will know. I'm like, no, if this doesn't blow up, everyone will know too. If you believe in your product, let's go prove it out. So after the data scientists failed the first time around, we actually told them exactly how Aible beat them, and that's what they were able to fix. We used something called a custom loss function, which trains the AI to deliver economic value, so the AI is being trained to make you money. We told all the data scientists that, and that's the reason four out of the 11 managed to beat us. But we wanted to see, if we made the contest as unfair as possible, how does Aible do?
Tony [00:16:07] That's really interesting. It's a fun thing to think that you've got all these people who are experts in a certain area, and then you put them up against someone who theoretically is not an expert and you get these outcomes. I'm sure, as an expert myself in my own field, I would be very surprised and be like, how the heck did they do that? It's a really interesting conversation, and it's really cool that you've been able to actually capture that value. I know that Aible is what you guys call a serverless solution. Can you talk about what serverless means to you and why that adds value to your customers?
Arijit [00:16:40] So serverless is the promise of the cloud realized, if you will. When the cloud first started, if you remember, we were always told that it would be like electricity: you would buy electricity when you need it. But it wasn't quite like electricity; it was like renting an electricity generator. You rented the server whether or not you actually did anything with it. Serverless is truly renting electricity. You're only billed when your code is being actively executed by the system, and once you're done, it's cleaned off and you're not paying for it anymore. Now, the reason this is interesting is if you look at the premise of modern data warehouses. Snowflake would be a good example of this; BigQuery would be an even better example, because it's actually serverless. What they basically said is, why are you paying for these databases when you're not putting queries in? I will only charge you when you're putting the query in.
Arijit [00:17:34] Now think about what the equivalent is in the world of analytics or the world of data science. There isn't one. The first thing a data scientist does when they're doing a project is bring up a server, put the data on it, and start doing exploratory analysis, feature creation, all that stuff. And that server is running for six months. Now, if you take a serverless approach to this, what you do is: when you're actively doing a lot of manipulation, you spike up serverless instances, do that manipulation, and shut them right back down. But for every other interaction, when somebody is doing a transformation or an exploration, they're actually interacting with metadata. All the work is happening in the browser, and it's communicating with metadata in the back end, which might be in BigQuery, it might be in an S3 bucket, wherever you want to put it. You're changing the paradigm. Instead of a server running for six months, it might be running for six hours for the exact same project.
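Aible hasn't published its orchestration code, but the pattern Arijit describes (burst the heavy work into a short-lived function, keep everything else as metadata in cheap object storage) looks roughly like the sketch below. The Lambda function name and the S3 bucket are hypothetical placeholders, not Aible's real setup:

```python
# Hypothetical sketch of the serverless pattern described above:
# heavy work runs in a short-lived function, state lives in object
# storage, and nothing is billed between interactions.
import json
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

def run_transformation(dataset_key: str, transform_spec: dict) -> dict:
    """Spike up a serverless worker for one burst of heavy compute."""
    response = lambda_client.invoke(
        FunctionName="train-model-worker",  # hypothetical worker function
        Payload=json.dumps({"dataset": dataset_key,
                            "transform": transform_spec}),
    )
    return json.loads(response["Payload"].read())

def load_metadata(key: str) -> dict:
    """Lightweight interactions only read metadata from object storage."""
    obj = s3.get_object(Bucket="example-metadata-bucket", Key=key)  # hypothetical bucket
    return json.loads(obj["Body"].read())
```

Between bursts, nothing is running at all, which is where the six-months-versus-six-hours difference comes from.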
Arijit [00:18:33] And the other interesting thing is, even when you're running head to head: we did a benchmark with Intel where we took the exact same TensorFlow models in our serverless environment versus a normal server environment. And we did something very unrealistic, which is we brought up the servers and shut them down immediately. Even then, serverless was 3 to 4 times more cost effective. So it's not just that it's six hours versus six months; it's an additional 3x cost reduction on top of that. And in the current market condition, where AI analytics is going to be the biggest driver of CIO budget increases, CIOs are really concerned that they're being asked to cut budgets while the demand for AI analytics is going up. I don't think people can avoid thinking about serverless approaches to analytics and DS/ML. If they do, they're basically saying, I'm okay with paying several hundred, several thousand times more money for what I could have done, using a slightly different architecture, in a much more efficient way.
Tony [00:19:33] That takes us to a topic that's always near and dear to my heart as a performance architect, which is performance. And obviously this is where the partnership between Intel and Aible comes in as well, not just us asking you to give us good proof points that your solution works. How have you been able to use some of Intel's products to make this spin-up and spin-down more efficient and faster for your customers?
Arijit [00:19:55] Well, even before the spin-up and spin-down: when my team started playing with the Intel technology, they were like kids in a candy store. They would come in and say, hey, you know, we tried this AVX-512 thing and the performance just got much better, and we didn't have to do that much work. Or there was one where they were using OpenVINO to compress models, and we managed to get a model to run in a serverless architecture that we had never been able to before. Basically, what we realized is Intel has this amazing set of innovations. But today, what happens is individual engineers have to learn each innovation and apply it. You have to have a human go in and unlock the value of that innovation.
Arijit [00:20:38] What Aible has done is, when we implemented the AVX-512 optimization or the OpenVINO stuff, none of my customers have to think about it ever again. They just get the benefit of it. So in AWS, when we are coming up, we say, hey, we have done that performance work; give me the Intel processor. The moment I get the Intel processor, my optimizations are working, right? And that's the way we're approaching this: how do we bring Intel's innovations to the market without the customer having to do any extra work? And we look like such smart people because our code runs really fast, but really we are just standing on Intel's shoulders.
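Arijit doesn't walk through the OpenVINO workflow on air, but the "customers never think about it" point shows up in how little application code the runtime requires. Here is a minimal sketch of loading and running a model with OpenVINO on CPU; the model path and input shape are placeholders, and the instruction-set dispatch (AVX-512 and friends) happens inside the runtime rather than in this code:

```python
# Minimal OpenVINO sketch: compile a model for CPU and run inference.
# The runtime picks the best available instruction set (e.g. AVX-512)
# automatically, so the calling code never has to know about it.
# "model.xml" is a placeholder for an exported IR model.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder path
compiled = core.compile_model(model, "CPU")  # CPU plugin handles dispatch

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input shape
result = compiled(batch)                     # run inference
```

Model compression itself would typically go through OpenVINO's separate optimization tooling; the point of the sketch is simply that once the optimized model exists, the serving code stays this small.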
Tony [00:21:20] I think everyone that does engineering on our team would be very happy to hear that, as our team thinks of you as our customer. I think that's a great story, and I'm sure everybody will be excited to hear that you're able to leverage that for your business case. That's actually very cool.
Arijit [00:21:35] And the other impact of this kind of performance improvement is, think of it this way: the biggest constraints in serverless are memory capacity and run time. Typically a serverless instance will be shut down after several minutes; it's different on different clouds, but it's a matter of tens of minutes, let's say. The problem is, I'm constrained by time. So if my optimization gives me a 3x performance improvement, which is one of the other things we were able to show, or now with Sapphire Rapids an additional 2x improvement on top of that, what you're really saying is, I can do twice as big a data set in the serverless instance. The performance turns into the art of the possible: what kinds of data sets can be done in serverless? And as performance gets better and better, I think all realistic data sets can be done serverless. And in places like Google Cloud, where BigQuery itself is serverless, we can push a lot of the workloads to BigQuery, where we get the benefits of serverless right in BigQuery in that context.
Tony [00:22:43] So as Intel technology allows you to run things faster and faster, how do you actually take advantage of that capability? What does serverless allow you to do, essentially, as the technology improves and you get this better performance?
Arijit [00:22:59] Essentially, we get to take a ride on the improvements to the platforms that our partners are making, right? Think about Moore's Law. When people started writing to the x86 architecture in the early days, they could just count on Intel bringing out newer and newer processors, and their code ran faster and faster because they wrote it to the Intel platform. In fact, if you put in all the effort to customize yourself to a specific processor, that might have been a wasteful approach, because when the new processor came in, you didn't get as much benefit from it. In the world of serverless, the early critique was that serverless was only for toy projects: it can only use small amounts of memory, it can run only for short lengths of time. But as these processors are improving, the optimizations are improving, cloud providers are giving more time and heftier environments, and we are doing interesting things with chaining serverless instances together and federating work across serverless instances. I think what you're going to find in the very near future is that serverless will be the completely dominant approach to the cloud. And one of the interesting things that happens when you truly move to a serverless cloud is this: there was always the argument that, yeah, I can run it better on prem, because you're still just renting a server on the cloud. When it is truly serverless, that argument completely goes away, because now you truly are talking about electricity instead of a rented generator. You can make the argument, I own my generator and he rents a generator, and that's okay. But if they're getting electricity from the grid and I'm owning a generator, that doesn't work. So you're going to see the serverless shift also force much more work to move from on prem to the cloud.
Tony [00:24:44] So I'm going to step back a little bit and consider the state of AI where it is right now. There's a lot of buzz around things like ChatGPT and Stable Diffusion. I actually just did a podcast with some other folks around what types of things generative AI will enable for people and for society. And it's interesting in this case, because you guys are kind of already doing that from a business perspective. You don't actually need these complicated generative AIs that are huge; you're extracting the value out of things that already exist, right? Things that are tractable for people. I'm curious where you think Aible will be able to provide value going forward. How does Aible fit in; how do you extend your capabilities to provide the next generation of business opportunities around AI models?
Arijit [00:25:36] I don't know if you can see the logo there, but our tagline has always been "I am Aible." And in fact, that's the reason we started the company: our concern was that AI, on the path it was going, was going to be AI of the few that applies to the masses. A few data scientists, a few experts would go create the AI, and everybody else would be passively affected by it. Even if you look at the generative models, for example: if you're concerned that the models are misogynistic and you're seeing images that you're not happy about, there isn't much that you can do about it.
Arijit [00:26:10] There have been some really cool articles about this; there was a guy who asked for his own obituary, and the AI happily made up all the data, even when it could have looked it up. It just made up the data, right? What is happening is these generative AIs right now are being trained to look smart rather than be smart. And they are not trained in a way that lets you influence what has been done to them. These are black boxes; you don't know what's in them. You can see some glimpses of it as you interact with it, but even if you feel like you're using it, you're actually a passive consumer of that AI.
Arijit [00:26:47] Our principle was that we cared about how the AI gets created, but we also made sure that the end users could adjust the AI to their preferences. So imagine the sales use case. I have a salesperson who loves to crank through deals and is very aggressive; he wants a lot of deals and wants to get going. I have a different salesperson who loves every deal to death, and he likes to work on fewer, better deals. Why should the same AI predict for both of them? It doesn't make sense; they need very different AIs. What Aible allows you to do is just go in and move a slider and say, this is too aggressive, or this is too conservative, I'm feeling burnt out, give me less. They can do a thumbs up, thumbs down and provide feedback. And as we observe how the human is reacting to the AI, we can adjust the AI for the individual, or we can go to the organization level and say, hey, we made a mistake, we are burning out our employees, we can't do this, let's pull back at the organizational level. We are empowered to affect the AI that affects our lives. Until we do that, you're going to have really interesting, fun toy projects. But to truly be transformative, we have to figure out how these systems fit in with our ethical beliefs and our goals for society. Because otherwise you're passive; you're being impacted by AI, not empowered by it. And that's my fear. That's my concern.
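Aible's slider mechanics aren't public, but one simple way to give each user their own aggressiveness setting over a shared model is to let the slider move that user's decision threshold. A hypothetical sketch; the linear mapping and the threshold range are assumptions, not Aible's actual design:

```python
# Hypothetical sketch of a per-user "aggressiveness" slider: the same
# underlying model serves everyone, but each user's slider setting
# moves their own decision threshold.
import numpy as np

def threshold_for(aggressiveness: float,
                  min_t: float = 0.001, max_t: float = 0.5) -> float:
    """Map a slider in [0, 1] (conservative -> aggressive) to a threshold."""
    return max_t - aggressiveness * (max_t - min_t)

def deals_to_pursue(pred_probs: np.ndarray, aggressiveness: float) -> np.ndarray:
    """The aggressive rep sees many leads; the conservative rep sees few."""
    return np.where(pred_probs >= threshold_for(aggressiveness))[0]
```

Thumbs-up/thumbs-down feedback could then nudge each user's stored slider value over time, without retraining the shared model at all.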
Tony [00:28:19] So as we consider how Aible is able to affect business and AI in business going forward, are you concerned about the use of AI in businesses? Again, around the various generative AIs, there are tons of copyright concerns, and there are always concerns about the explainability of AI. Do you have any fears around regulation and how that will affect your business, and the business around AI in the world?
Arijit [00:28:45] So first, let's acknowledge that there are fundamental issues with AI around bias, right? For example, if historically your data shows you were not giving women loans at the same rate as men, guess what? An AI will look at the data and say, hey, it might be better to give loans to men, because that's what the data shows. The thing we need to recognize is that you cannot eliminate that kind of bias by removing variables. A lot of people will say, I'm looking at whether variables like gender or ethnicity show up in my data. It doesn't matter, because if you remove gender, an AI is smart enough to use something like job title as a proxy for gender. So it's like playing Whac-A-Mole: you keep digging out the variables, and the AI will always find a different way to get to that bias, because it doesn't know that it's biased, right? Think of AI like a child, and imagine you were teaching your child by constantly telling it no, no, no, no, no, as opposed to telling them what they should be aspiring to. What are the good things? What are the thematic elements?
Arijit [00:29:47] So what we are working very hard on is how to define fairness in AI. Instead of trying to detect and eliminate bias, can we go in and say, here is our fairness goal? That might be that the approval rate for loans for men and women should be the same. Or it might be that, instead of just having a parity goal, we actually want to promote one group over the other because they have been previously under-supported. You can do that. You can tell the AI, this is my goal for what the balance of this metric should be, and the AI can train to that. That's a very easy thing to do with custom loss functions. But we need to start shifting away from this mindset of fear. If you look at the regulations that are coming up in Europe, or the AI Bill of Rights that's coming up in the U.S., it's coming from a place of being afraid that the AI is going to be biased.
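Arijit doesn't spell out the math, but a common way to encode a parity goal in a custom loss is to add a penalty on the gap between group-wise approval rates. Here is a minimal sketch of that idea; the penalty weight and the choice of demographic parity as the fairness metric are assumptions, not Aible's published loss:

```python
import numpy as np

def fairness_aware_loss(y_true: np.ndarray, y_prob: np.ndarray,
                        group: np.ndarray, weight: float = 1.0) -> float:
    """Log loss plus a demographic-parity penalty (illustrative only)."""
    eps = 1e-12
    log_loss = -np.mean(y_true * np.log(y_prob + eps)
                        + (1 - y_true) * np.log(1 - y_prob + eps))
    # Fairness goal: predicted approval rates should match across groups.
    # A different target (e.g. deliberately boosting a previously
    # under-supported group) could be encoded the same way.
    gap = abs(y_prob[group == 0].mean() - y_prob[group == 1].mean())
    return log_loss + weight * gap
```

Training against a goal like this states the desired balance positively, rather than playing Whac-A-Mole with proxy variables.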
Arijit [00:30:44] Can we start thinking about the potential of AI to reflect our societal goals, reflect our societal beliefs and standards at scale? Because it's very hard to get people to make sure they do things consistently. It is much easier to make sure the AI is actually making recommendations in a very consistent manner and then observing the final results and confirming that your ethical goals are being met. So I hope the regulators will stop being afraid of AI and start from the potential of this technology to transform us, because things that are seen through the lens of fear never transform societies.
Tony [00:31:25] And I think that concludes our time for today. I'd like to thank Arijit for joining us today and telling us a little more about Aible.
Arijit [00:31:32] Thank you so much, Tony, and I think we talked about a lot of fun stuff beyond Aible. I hope you guys stay curious about the world of AI. The only thing that an AI does not have the ability to do is to be curious. So stay curious, my friends.
Tony [00:31:46] There you go. So stay curious. And for our listeners, also stay curious. And if you're interested, you can check out Aible. We'll have some links at the end of our podcast so you can see what Aible actually can do, and you can see how Intel has helped enable Aible to help solve customer problems. Until next time when we talk more about technology and trends in industry.