Overview

In today’s fast-paced digital landscape, AI presents a wealth of opportunities for IT leaders to drive both innovation and profitability. This panel explores the critical steps to overcoming barriers to AI adoption, focusing on cost-effective strategies that align with C-suite priorities like revenue growth and operational efficiency.

Transcript

Anybody else? Oh, you're all so quiet today. Did we not give enough caffeine to that coffee? Did they accidentally put decaf out there? I don't know what's going on, y'all, but we are excited to welcome up our next group of panelists.

So I see them at the back there. Guys, come on up. I'll talk while you all get your seats there. All right, so coming on up, we have a panel presentation. We have a group of brilliant minds coming up to tackle the topic of AI implementation strategies.

They're going to talk about everything from cost savings to revenue growth, which is obviously a super important topic.

So joining us today, as they get settled, we have Husam, the CIO of GoRight Fleet Solutions; Mohammad, the AVP of Enterprise Architecture and IT Management for CIBC Mellon; and James Scott, the CTO at Dell Technologies.

And here to moderate this panel is former BNN news anchor and our contributing editor, Michael Hainsworth. Give them a round of applause, please. And Michael, it's all yours. Well, thank you so much.

I'm so glad you pointed that out. We have some really smart people up here on the dais, plus me. For the record, as a lifelong computer geek, my knowledge of this topic is about a mile wide and an inch deep, which is why these people are up here and I'm not the one giving you the keynote presentation.

We're going to be talking, over the course of our time together, about the fact that AI is providing a wealth of opportunity for you to drive innovation and profitability, but there are some pretty critical steps necessary to overcome the barriers to adoption.

We need to ensure that we have cost-effective strategies that align with what the corner office is looking to get out of the technology.

We need to understand how to leverage and optimize cloud costs, scale operations, enhance productivity, and have robust data as the foundation for a successful AI strategy. So with James, Mohammad and Husam here, gentlemen, thank you so much for joining us today.

This is going to be a fascinating conversation. James, perhaps we'll start with you. What do you see as the most common barriers to AI adoption, and how do we overcome them?

I think one of them we've touched on a lot during the sessions today, and that's the importance of good-quality, well-labeled data.

And I think the biggest benefit that every organization should be thinking about is a good data management and data governance strategy: working out who is responsible for the data, and how you manage and control that data over time.

I don't think you can drive any AI system without good, high-quality access to data. There are some other barriers too; probably second to data is just listening to the users.

I see challenges and barriers all the time where there's a big push to drive new AI capabilities and implementations, but what we have to constantly be doing is listening to the end users. And it's a comment that's been made multiple times today.

Everyone got a chance to go and play with these AI capabilities. They got a chance to go to ChatGPT and see what it can do. And I think everyone immediately thought, I see how I can make this relevant for me within the business.

So we have to be asking those users: how can AI help you? And how can we make sure we're lining up our business strategies to support your gains?

Well, I just have to say, it all starts with understanding the problem that you're trying to solve. Once you understand the problem, you could be really surprised sometimes that you can solve it without even using AI.

You could solve it with a traditional BI report. Once you understand the problem you're trying to solve, I think the second step is education. It was mentioned all morning and afternoon: education, education.

Some people use AI and Gen AI, and honestly, they don't know what's behind it. Once you start educating, even starting from the top, you will be surprised how those use cases will come.

It's natural. When you say, okay, I can build you an AI that will tell you to go right or go left, that's a classification model. If you're going after counts and amounts, that's a regression model.

If you're seeing your graphs go up and down and you cannot explain it, that's anomaly detection. And if you want something to move by itself and learn by itself, that's reinforcement learning. That's the kind of education I mean.
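
As a rough illustration of that taxonomy, here is a minimal sketch using scikit-learn on tiny toy data; the arrays and values are hypothetical, purely to show the three supervised-style model families side by side:

```python
# A minimal sketch of the model families described above, on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.ensemble import IsolationForest

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])

# Classification: "go right or go left" -- predict a discrete label.
y_class = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[2.5]]))        # -> [0]

# Regression: "counts and amounts" -- predict a continuous value.
y_reg = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[2.5]]))        # -> [~25.0]

# Anomaly detection: "graphs up and down you cannot explain".
series = np.array([[1.0], [1.1], [0.9], [1.0], [9.0], [1.05]])
iso = IsolationForest(contamination=0.2, random_state=0).fit(series)
print(iso.predict(series))         # -1 flags the 9.0 spike as an outlier
```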

Computer vision is a totally different world. So once you start educating people, you'll see the use cases start flowing very smoothly. And then I think the most difficult part is attaching a business value to those AI models.

That is, I think, the toughest part: how to convince the business that there is a huge ROI behind it, how to attach a business value to it. Mohammad, CIBC has received accolades and awards for its implementation of AI technologies.

How did you overcome some of these initial issues? Yeah, thank you. So, as everybody mentioned, setting expectations is important. One thing: I am not CIBC. CIBC is our parent organization.

I am CIBC Mellon, but I've been with CIBC, so I can answer that a little bit. Things are changing, and as part of the banking industry, CIBC needs to move forward.

One of the items they're working towards, to stay in the market and on trend, as you've seen, is hiring 200-plus AI experts. Now I'm going to talk about CIBC Mellon.

So at CIBC Mellon, I need to set expectations with my leadership: what's the goal? What are we gaining from this AI? Because, as everybody mentioned, data is number one, and we call it the fuel for AI.

To move the corporation forward, we need to know our data, and on the other side, we need to understand our problem. Because, as I mentioned, not everything is going to be solved by AI.

And unfortunately, the expectation is that AI is a silver bullet: it's going to solve all the problems, everything is going to be automated, and there will be no human in between. And that's not going to happen.

So there is a lot of education that I'm providing to my C-level. When you educate, do you also educate to the reality that seven out of 10 AI projects within an enterprise tend to fail?

How do you get the C-suite on board when they're asking, well, where's the revenue growth that's going to come out of this?

When you have to admit maybe we're not going anywhere with this particular project? Absolutely, and this is the part of the AI journey where I would say culture is important.

So when we try experimenting with something new, we need to have the expectation that it may not go the way we want.

So don't look at the failure as something that wasn't delivered the way we wanted; look at it as experience that we gained from it.

Within AI, because there are many, many use cases, I'm building a practice within our organization for triaging AI use cases: which ones should we invest in? Because a lot of AI solutions are becoming commodities.

Why do I need to invest in building that AI model myself when it already exists? I can take the native capabilities of IBM, Azure or AWS and get those things going. I hope I'm answering your question.

We're working towards it. James, what is your take on that whole focus on ROI in an industry that is still, relatively speaking, rather nascent? I suggest to every organization I work with that they stop thinking about this in terms of proof of concepts and just stopping there.

Every single one of those proof of concepts really needs to be a proof of value, and we have to attach that proof of value to some type of metric. It's great to go into these projects and say people liked it.

What I want to know is, to what extent? What are the actual, detailed metrics that we can implement as part of that POC?

And I don't think that means every POC needs to be a complete success, but it does mean that when we come out of it and there was a challenge or it didn't work as expected, we have some details as to why, and some lessons learned.

So then, what kind of KPIs should we be applying to a new project?

I think some of it is going to be: if we're looking at things like an improvement in productivity, put some percentages behind that, even if you're wrong, even if it's an estimate. Say we're going to implement Copilot in our system to help speed up the way our support team responds to emails.

How much quicker? How many more emails do we think we're going to get through? That way you've got a baseline, and you can work from that baseline up or down, but at least you have a target to hit.
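
For illustration, a minimal sketch of that baseline-and-target idea; the throughput numbers and the 25% uplift estimate here are hypothetical, not real benchmarks:

```python
# A hypothetical baseline set before the rollout, and an estimated uplift.
baseline_emails_per_day = 120          # measured before the Copilot rollout
estimated_uplift = 0.25                # our guess: 25% more throughput

target = baseline_emails_per_day * (1 + estimated_uplift)

def evaluate(actual_emails_per_day: float) -> str:
    """Compare measured throughput after rollout against the target."""
    delta = (actual_emails_per_day - baseline_emails_per_day) / baseline_emails_per_day
    return (f"actual uplift {delta:+.0%} vs. estimated {estimated_uplift:+.0%} "
            f"(target {target:.0f} emails/day)")

print(evaluate(138))
# -> actual uplift +15% vs. estimated +25% (target 150 emails/day)
```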

Talking about KPIs, I think you have to agree on them from the beginning. Once you start the implementation, you have to set the expectation.

What is the business expecting here as a KPI? If they're expecting 100% accuracy in predicting or forecasting, then it's a failure.

From the beginning, it's not going to be 100% at all, unless you're going for a reinforcement model, like a Tesla-style AI, where the failure rate has to be something like 0.00001. And that is maybe the bar the business is setting for you.

I've been in several businesses where, when you get 50% accuracy, we celebrate. There are other businesses where, when you get 90%, it's still: we need more accuracy.

So it depends on how the business is setting the bar, how high it is setting the bar for you.

So as we think about the issues of return on investment and operational efficiency, maybe, James, you can give us some insight, through the clients you work with, on strategies that companies can use to leverage AI while optimizing cost.

Cloud costs blindsided people in the early days of lift and shift. How do we address that when it comes to AI implementation?

I treat AI like a lot of other workloads. To your point on cloud, it's right location, right time, right reason, and you just have to apply that same thinking to AI.

What I'm seeing is that means I do need to provide a playground, if you like, for teams to go away and experiment. And that might be a good use case for running things in public cloud, where I get access to a lot of different tools.

It would be complex. I don't want to spend time training and building my own models. I want to use things that exist.

I want to use some of those tools, but we have to make sure we understand: when I go from a playground with access for 50 users to a production system with 500 users, how does that change the amount of compute?

How does that change the data pipelines that are going to have to feed into that system?

And thinking about that growth might mean the workload needs to land in a different location, whether with a different cloud, in a colo provider, or in my own data center. But thinking about that up front, to your point, just like any other cloud workload, is, I think, pretty important.

Yeah, on top of what James mentioned, there are budgeting tools that all the cloud providers offer for tracking how much we are spending, and we definitely need to look at those.

And some of them have an analytic model that predicts: your spending today or this month is this much, and moving forward, next year it's going to be that much.

So every corporation does need to keep a keen eye on that, because I've seen firsthand that if you don't look, people are going to keep utilizing the services, and utilizing GPUs is easy, but it's very, very expensive.
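
As one concrete example of those tools, here is a minimal sketch of a forecast-based budget alert using the AWS Budgets API via boto3; the account ID, budget name, amount, and email address are hypothetical placeholders, and the other clouds offer equivalents:

```python
# A hypothetical monthly cost budget with a forecast-based email alert.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                      # hypothetical account
    Budget={
        "BudgetName": "gpu-experiments-monthly",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        # FORECASTED uses the provider's own predictive model: alert when
        # projected month-end spend crosses 80% of the budget.
        "Notification": {
            "NotificationType": "FORECASTED",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL",
                         "Address": "finops@example.com"}],
    }],
)
```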

James, you mentioned over the course of our time together today that data management has been a component of the discussion, and the importance of using it to build a proper foundation for any implementation.

What is it that you have yet to hear from the conversations that took place today about data management that we need to know before we leave this room? I think one of the things is to think about how that data is going to change over time, and how we are monitoring that data for change.

I don't have a fantastic solution here, but I think it's something everyone needs to think about: as these AI models become more important to organizations, looking for signs of malicious attacks within that data, looking for signs of bias within that data.

It comes back to my point about having someone responsible for the governance and ownership of that data. But to me, that's the one thing I think everyone is still sort of struggling with.

It's the one thing everyone needs to think about: how those models are going to evolve over time, and the responsibility that you have. You can offload lots of things; you can't offload the responsibility for how you use those models with your data.

It's a challenge, but I think it's one that every organization needs to be thinking about today. You say we're struggling. What are we struggling with, specifically?

I think it's simply the tools that can be put in place, and I think we're still in the early days of this, for actually monitoring the models for some of these signs. It's very difficult, in general, to look at a set of data and say, I see signs of bias in this information. As humans, it feels like The Matrix: oh, I can read that one and that one. We find that difficult as individuals.

It's incredibly difficult for a machine to recognize that there's bias in that data, and applying that across an organization is, I think, still a challenge. Well, it's just understanding how AI models are built.

You're spending 40% of your time just dealing with data, 20% of the time doing model training, and 40% evaluating. So I think data is king.

You have to focus your time and effort starting with the source of the data itself.

You have to check even the systems that are generating this data: is there validation on those systems? We deal with mainframes sometimes, all legacy, and they didn't have proper validation.

So you can imagine the data coming out of those systems is not clean. First, you have to validate the source of the data itself. You need to make sure there is variety in the data.

Sometimes quantity matters, although there's a misconception: people think, oh, I don't have a huge amount of data on hand, so I cannot build AI models. You'd be surprised that some computer vision AI models can run with only 50 images.

That's all you need for a computer vision AI model. So data is really critical, and so is continuity. Because remember, once you build an AI model, you need to start improving it.

You start with 60% accuracy, then 70%, then 80%. What makes it better and better is the flow of data, and how smoothly that flow comes in. You need to build an automated pipeline. This automation becomes key when you're dealing with that as well.
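
As a minimal sketch of that validate-then-feed loop, assuming a hypothetical legacy source that occasionally emits unclean records (the field names and the retrain step are stand-ins, not a real pipeline):

```python
# Validate records from a hypothetical legacy source, keep only clean
# rows, and refit the model as the training set grows over time.
from dataclasses import dataclass

@dataclass
class Record:
    amount: float | None
    label: int | None

def validate(record: Record) -> bool:
    """Reject the unclean rows a legacy system may emit."""
    return (record.amount is not None and record.amount >= 0
            and record.label in (0, 1))

def retrain(training_set: list[Record]) -> None:
    # Stand-in for refitting and re-evaluating the model.
    print(f"retraining on {len(training_set)} clean record(s)...")

def run_pipeline(batch: list[Record], training_set: list[Record]) -> None:
    clean = [r for r in batch if validate(r)]
    training_set.extend(clean)          # the "flow of data" grows over time
    retrain(training_set)

training_set: list[Record] = []
run_pipeline([Record(12.5, 1), Record(-3.0, 0), Record(None, 1)], training_set)
# -> retraining on 1 clean record(s)...
```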

Mohammad, you're nodding in approval. Yeah, so I totally agree with both of my friends here.

On top of this, the quality of the data and also the context of the data are important, because the machine doesn't know anything. It's like a simple conversation with an individual: depending on how we provide the data, we get a different response. Hey, is that a cloud?

Is it a dark cloud? Okay, we expect rain. That's simply the data we get. So it is very important to understand the context of the data. Back in the day, that's why our data scientists were saying, I need every single piece of data.

Because nobody at the time was able to say what the definition of that data was: for this model I'm building, maybe it's relevant, or maybe it's not.

So it is very important. As I mentioned, data is fuel, and depending on what we're trying to use it for, whether it's a car or a rocket, we have to use a different type of fuel with a different level of quality.

So data cleansing and data management are critical, and there's no silver bullet where we can say, okay, one move and all the data is clean now. It requires ownership, because data is not coming from one system these days.

Data is coming from multiple sources, and we need pretty good guardrails and data governance to be able to clean the data from the source and have reliable data to use for any sort of AI model.

I'll also add, because I'm seeing this happen all the time at the moment, that people are implementing these AI systems as a way to get access to new sources of knowledge.

It's a great way to use semantic understanding to really understand what data I'm holding. But what you're feeding into those systems is often disparate data sets.

So you have a set of data over here, where users A, B and C have access to the data, and you have a set of data over there, where users D, E and F have access to the data.

It's really easy to throw all of that into an AI system and build up, without going into the weeds too much, these vector database models. The problem is, as soon as you do that, if you're not doing it carefully, you've lost all of that permission control.

And I've seen more than a few examples where the end result is a system that divulges information from both of those data sets with none of those controls.
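
A minimal sketch of the fix: keep each chunk's access-control list alongside its embedding, and filter before ranking. The embeddings, users, and documents here are hypothetical, and a production system would enforce this inside the vector database itself:

```python
# Permission-aware retrieval over a tiny in-memory "vector store".
import numpy as np

# Each chunk carries the ACL of the data set it came from.
index = [
    {"text": "Q3 fleet margins...", "acl": {"a", "b", "c"},
     "vec": np.array([0.9, 0.1])},
    {"text": "HR salary bands...", "acl": {"d", "e", "f"},
     "vec": np.array([0.2, 0.8])},
]

def search(query_vec: np.ndarray, user: str, top_k: int = 1) -> list[str]:
    """Drop chunks the user cannot see BEFORE ranking by similarity."""
    visible = [c for c in index if user in c["acl"]]
    visible.sort(key=lambda c: -float(
        np.dot(query_vec, c["vec"]) /
        (np.linalg.norm(query_vec) * np.linalg.norm(c["vec"]))))
    return [c["text"] for c in visible[:top_k]]

print(search(np.array([0.3, 0.7]), user="a"))  # -> ['Q3 fleet margins...']
print(search(np.array([0.3, 0.7]), user="d"))  # -> ['HR salary bands...']
```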

Give me another example of some common pitfalls in AI implementation, and how an enterprise can avoid them to ensure that the strategies that they've decided to focus on actually deliver business results.

I think some of this comes down to my point earlier on the metrics and the proof of value.

If you look at the list, and I see this a lot, organizations most of the time have more than a couple of AI projects that they're interested in or working on.

If each one of those AI projects has a set of metrics associated with it, I think it's fairly easy to look at those metrics and ask: do these line up with the business objectives? Are there certain projects in here that fit a couple of key categories?

One, the metrics align with what we're trying to do. But maybe equally importantly, they're feasible in a short period of time, because there are a lot of AI projects that are very pie in the sky.

It would be amazing to do this, and then when you dig into it, there's one person working on 15 of these projects, and it's never going to get done.

So what has the most business impact based on the metrics, and what can we actually do, from a feasibility perspective, in a reasonable amount of time? Mohammad, final thought from you? Yes.

In my opinion, one of the things I've seen as problematic is the criticality of the service: whether the service is super critical and is using sensitive data.

We can't look at that AI with the same eyes as an AI that's reading my documentation procedures and creating some sort of agent that answers my questions about how to call this vendor, versus one that's going to give me a prediction of how my clients are utilizing or investing in this fund or that fund.

These are two different contexts, with two different levels of business criticality. But I've seen businesses look at AI with one eye. They're saying, you know what?

AI is AI, so let me use it and just feed it the data. And then we come back to the problems of regulation and a bunch of other things that we need to address after that. Mohammad, thank you so much. Husam,

final thought from you? Just on the things where sometimes you miss an opportunity. I know we are talking about data, data, data, but I haven't heard that there is an opportunity for data augmentation.

Sometimes you can fake data you don't have, but in a scientific way that will allow you to build a proper AI model, so you don't miss the opportunity.
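
As a minimal sketch of that kind of augmentation for the small image set mentioned earlier (the ~50 images), assuming torchvision is available; the file path is a hypothetical placeholder:

```python
# Generate plausible training variants of one labeled image.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

original = Image.open("fleet_photo_001.jpg")   # hypothetical image file
# Turn one labeled image into several distinct training examples,
# each sharing the original's label.
variants = [augment(original) for _ in range(8)]
```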

The second thing is, for organizations going on the AI journey: always start small. Some organizations will hire an army of people, lots of machine learning engineers, MLOps engineers, data engineers, and they haven't yet proven one successful AI model.

So avoid going big; start small and then scale from there. Gentlemen, thank you so much for your time and your insight here today. Thank you, James, Mohammad, Husam. Alice, how about that: I give you a buck 22 left in our day today?

Well, that's exciting, because that gets us just a few more steps closer to our final networking reception. Wink, wink. All right, guys, what do you say? Give a round of applause for our panelists here today. Thank you so much, gentlemen.

Transcribed by https://otter.ai