AI Training And Certifications Midmarket IT Leaders Need In 2026: Ready.Set.Midmarket! Podcast Transcript
Adam Dennison: Hello and welcome everybody to another episode of Ready Set Midmarket, the podcast for all things midmarket IT. I am Adam Dennison. I'm joined by my co-host Samara Lynn, senior editor with MES Computing. Hi, Samara. And I want to thank our two guests this afternoon, here to talk about AI in terms of training and certifications. I want to welcome Jay Bavisi. He is the president of EC Council.
Jay Bavisi: Thank you. Thank you for having me here.
Adam Dennison: And thank you so much. And I also want to welcome Raj Dubey. He is the CTO of LT. And Raj is also one of our advisory board members with the MES brand. So thank you, Raj, for taking the time to be with us today as well.
Raj Dubey: No, thank you. It's a pleasure.
Adam Dennison: Awesome. Before we get started and dig into the topic overall, I'd like to ask, and we'll start with Jay and then go to Raj: just tell us a little bit about yourselves, your organization, the role that you play within it, what your market looks like in terms of who you serve, things of that nature.
Jay Bavisi: Sure, thank you. Thank you, Adam, for having me here on the show. So, I am the founder of an organization called EC Council. We are a certification body based here in the US, and we started out 20 years ago.
We are very well known for our premier certification, the Certified Ethical Hacker. Fast forward two decades, we're in 170 countries, and we basically teach the good guys, the IT folks, the cybersecurity folks, 26 different disciplines in cybersecurity.
And now with this new world of AI that's coming in, we realize there's a massive need in workforce development in artificial intelligence. And we've been spending a lot of time and effort to research how we can uplift the community with workforce development in artificial intelligence. And that's basically what we're doing right now, Adam.
Adam Dennison: Perfect, thank you so much. And Raj?
Raj Dubey: I'm the CTO for LT. It's one of the largest marketing agencies in the Southwest here. I'm in a very unique position. I not only manage the digital ecosystems, the IT, the security and compliance, but I also get to deal with a lot of clients. On average, we work with close to a hundred clients on our roster on a daily basis. So I get exposed to what's in our systems, but also in theirs. And we work very collaboratively to get ourselves to a better security posture and a better compliance posture, and to figure out how we can prevent anything from going bad. AI has been part of our framework; we've been on this journey for more than a couple of years. Actually, one of our original founders heads a company that's focused on AI. We work on AI on a daily basis. So it's a fun experience.
Adam Dennison: Great. Thank you, Raj. So obviously, AI is a hot topic. You can't read an article without it. You can't go to an event or a conference without it. You can't listen to a podcast without it. But I think we're definitely evolving the discussion around AI. And now we'd like to focus on: what are those kinds of skill sets? What is out there for training and certification?
Jay, you've been in it for 20 years from a security standpoint. We know there are lots of different certifications and trainings out there for IT across many, many disciplines. But what are those tops of the waves that we should be thinking about now when it comes to AI and how to train our teams and our users and our workforce? What are some of those key pillars?
Jay Bavisi: Sure, I think that's a fantastic question, Adam. Thank you for asking. To answer that question, I'll ask you a question back, right? And that will serve as a baseline.
Can you tell me how much you think humankind is going to spend in 2026 on AI? What do you think the number is?
Adam Dennison: A worldwide spend? Boy.
Jay Bavisi: Worldwide spend
Adam Dennison: 50 billion.
Jay Bavisi: Okay. Raj?
Raj Dubey: Probably more than that. I think it will be upwards of $500 billion in total that companies are spending across the channels. It could probably hit a trillion.
Jay Bavisi: You're right. It's $2 trillion. $2 trillion. Now, if you think about that, $2 trillion across the world, this is a brand new technology that is being implemented. If we look at the elements of technology, if we look at IT traditionally, we know that IT was always deterministic. You have a solution, you have a clear outcome. You put in certain inputs, you get certain outcomes.
Adam Dennison: That was a little off.
Jay Bavisi: AI is probabilistic, because every time you put something into an AI, the LLM is going to spit out something slightly different, right? It's going to try to be better. If you take a look at IT, it's been very transparent.
But if you look at AI, we're dealing in a dynamic environment. Now, think about what I just said: two trillion. You think about the opaqueness, the probabilistic nature, the dynamism. The workforce challenge we have is humongous, because every organization, especially in the midmarket, which is where your audience really lies, is using AI somewhere. I was just talking to Raj, and Raj made a good point: in IT, we always talked about shadow IT. Now we're talking about shadow AI, because there is someone in every organization using ChatGPT or Claude or Gemini, and there's some company data somewhere. So the way we think about workforce development is to think about this in four large segments. The first is really the essentials, getting everybody onto the same page, right, which is the fundamentals level.
Raj Dubey: Absolutely.
Jay Bavisi: And this is where we want to teach the workforce, just the same way that we taught the workforce operating systems and the basic applications, what AI is and what it is not. It's not just a prompt, it's not just a model, it's not just an application; it's far more than that. So I think that's the first, the baseline part. And we've got to start this at the high school level. We've got to start this at the knowledge worker level, at multiple levels and multiple facets.
EC Council is about to publish a framework. We have a pretty interesting advisory board led by the leaders of the largest companies in the world, and these are all AI leaders. We debated this thoroughly, and we really had to think about how we build workforce development when we are adopting AI.
The way we should think about workforce development applies to any organization that uses AI. Question number one: does my workforce understand the basics of AI? If no, essentials is probably the way you think about it. If you're adopting AI, well, do you have people that are trained and certified in AI program management? If the answer is no, then that's the direction you should go. If you are utilizing some advanced features of AI, for example, you might have some open LLM models that are all in-house. Are you thinking about the defensibility of AI? If not, you'd better have people trained in offensive AI. And lastly, your C-suite and your leaders should be thinking about responsible AI governance and ethics.
Raj Dubey: But to that point, I think many companies are still struggling. There's definitely a skill gap and there's a talent gap. If you talk about shadow AI, it means people are putting in data, trying to get answers, just like we used to do with Google a long time back. So it's not just about putting policies in place. I agree that training is needed, but I think we have to put guardrails around it. We originally thought that prompt engineering would become, my God, the most amazing thing on the planet, but not anymore. It's about guarding those inputs. How do you normalize them? How do you define those boundaries? That way you can guardrail it, correct?
Jay Bavisi: I think that's a great question.
Raj Dubey: Even in our organization, we let everybody loose at first, got everybody licenses to any tool that they wanted. And eventually we had to centralize it. So we made an AI council. And then we created this concept called the AI Hub. What we do is we meet on a monthly basis, decide what business areas we want to focus on for the month, and then we operationalize it and put it on our hub so it's more contained and controlled. I think that is the key element of managing AI.
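The input-guarding Raj describes, normalizing and bounding what goes into a model before it ever leaves the organization, could be sketched roughly like this. The patterns and names below are illustrative assumptions, not LT's or EC Council's actual tooling; a real deployment would use a vetted DLP library with policies tuned to the organization's data:

```python
import re

# Illustrative redaction patterns -- a real guardrail would use a
# vetted DLP library, not a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def guard_prompt(prompt: str) -> str:
    """Redact likely-sensitive tokens before the prompt leaves the org."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(guard_prompt("Summarize the deal for alice@corp.com, token sk-abcdefabcdefabcd"))
# prints: Summarize the deal for [EMAIL REDACTED], token [API_KEY REDACTED]
```

Centralizing this in one choke point, the way Raj's AI Hub does, is what makes the boundary enforceable rather than a policy on paper.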
Jay Bavisi: Yeah.
Samara Lynn: Can I just jump in for a second? I think one of the things that's so intriguing about new AI roles is that those roles aren't really clearly defined yet. If you need someone for AI governance or AI security, there are no real industry titles or compensation standards for that yet. So how are you navigating that as you're hiring, Raj, for your company's AI goals? And Jay, how do you find the companies that are coming to you for compliance and certifications?
Raj Dubey: Yes.
Samara Lynn: How do you articulate these jobs?
Jay Bavisi: Raj, do you want to go first?
Raj Dubey: Yeah, I can go first. So basically, we have two role types that are part of that council. We said governance comes all the way from the top, so our CEO is personally involved in that AI council so that they understand the implications and impact. I have data scientists that are part of that council. We have AI developers that are part of the council.
And what we look at is challenges that we can solve, and then we prioritize them. Say, for example, I set a goal for this year. I go, guys, we need to build ten agents. So we are moving to an agentic model now. I want things to be more efficient. I want things to be working while I'm sleeping. It's like those heydays of e-commerce: you make money while you're sleeping. I want things that can be automated, done through my agents that are running. But then you need data scientists, and suddenly that role type has become more valuable in terms of how you analyze that data. Think about the old days: we used to do data warehousing and business intelligence, and those used to be six-month to year-long projects. Guess what? We can do that in a week, in two days.
But then to understand that data, expose it, and optimize it for what you want to do, that's what's critical. But yeah, there are no official role types. I mean, in IT, what do you define? Do you call one IT guy a network engineer, an AI guru? No, we don't have those role types here.
Jay Bavisi: Yeah, I would like to add two parts; a lot of interesting things there, right? The example that Raj just explained, you have an AI council, I think that's great. Now, Samara, this is a developing story. And the thing about AI is that the story develops every second. By the time we finish this podcast, there's already a change happening; the industry is moving at lightning speed. But there are a couple of points that I do want to make. The example that Raj just provided, I think that's a fantastic example. Clearly,
Raj is part of an organization, LT, that's ahead. They've already built an AI council, and there's already leadership at the top that's thinking about AI implementation. However, that alone is not gonna suffice, because responsibility, as Raj said correctly, starts from the top. But the top needs to come from the board. The truth of the matter is, Samara, as AI gets implemented, whether we like it or not, there are gonna be a lot of questions coming. Today, the clients are demanding,
I want efficiency. We've already seen this in the IT world. When software started growing, everybody wanted to talk about efficiency. I want to use IT and I want to be able to cut costs and I want to improve the user experience of my clients. Very soon, it moved to governance. Wait a minute. Let me ask you a couple of questions. Where is my data stored? How is this code actually secure? What are the risks that my organization is going to be exposed to?
Jay Bavisi: The EU has got regulations that you have to deal with. The US has got regulations to deal with. You must accept that AI is going to go through the entire metamorphosis that IT has gone through. The same transformation is going to apply.
So organizations should already be thinking through governance, because regulatory frameworks have already started. Now to your question about workforce development, Samara: guess what, the DCWF [DoD Cyber Workforce Framework], a defense workforce framework, already defines AI roles and already defines the programmatic requirements of certain jobs within the Department of Defense. And obviously, EC Council, being a Department of Defense 8140-compliant organization, we mapped to all of these frameworks. So our AI programs are mapped to these frameworks. The frameworks clearly define exactly what the job role of an AI program manager is, and what the job role is of somebody who is responsible for AI governance. And this is what the free world is going to need to do. If you fast forward the organization that Raj is leading today, great, they have an AI council; I can bet within 12 months Raj is going to have someone called a Chief AI Officer.
The chief AI officer is where the buck stops and the chief AI officer is responsible for the governance, responsible for the data, responsible for the risk because Raj is going to get customers coming and say, hey, I work in the EU framework. I need compliance. Are you compliant? If you're not, goodbye. I'm going to find somebody else. And that's a fact.
Raj Dubey: Yes. And to add to your point, executives will eventually not care about how smart AI is. It's about risk and balance, correct? So what is the risk? Even this morning, I had a discussion with one of my clients here in the Valley, and they were talking about building this nice agent that can enhance their search functionality. So we go, okay, we can build that. But the challenge comes: what if somebody searches, hey, what is the paycheck for the CEO? If that data exists in any of the documents on your network, and I build an agent, it's going to find that data and it's going to expose it. So the question becomes, what is the governance? To your point, what is the risk? Because sometimes we don't think about it. We do not just build these agents and let them loose in the organization. What is the guardrail? Many times the guardrail is the most critical part; otherwise you're exposing all the data, and privacy and compliance go out the window. So I think that is going to be the next critical step when we're thinking about how we put those guardrails and governance into our AI implementations, because agents can be built overnight.
Jay Bavisi: Yeah, all of your non-disclosures will be out the window, because you signed non-disclosure agreements.
But agents are going to go find it and expose it. And now you have a legal ramification, and that's really the world we're living in.
Raj Dubey: Yes.
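The guardrail Raj is pointing at, an agent that can find a document but must never return it to someone without rights to it, comes down to enforcing the existing access controls at retrieval time, before the model ever reads the hit. A minimal sketch, with a hypothetical corpus and role model that are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    allowed_roles: set = field(default_factory=set)

# Hypothetical corpus: the compensation document is restricted to HR.
CORPUS = [
    Doc("Q3 marketing plan for the product launch", {"staff", "hr"}),
    Doc("Executive compensation: CEO paycheck details", {"hr"}),
]

def retrieve(query: str, requester_role: str) -> list:
    """Keyword search, but hits are filtered by the requester's role BEFORE
    the agent sees them -- the model can't leak what it never reads."""
    hits = [d for d in CORPUS if query.lower() in d.text.lower()]
    return [d.text for d in hits if requester_role in d.allowed_roles]

print(retrieve("paycheck", "staff"))  # prints: [] -- blocked despite the match
print(retrieve("paycheck", "hr"))     # the HR user still gets the document
```

The design choice matters: filtering after generation is too late, because by then the restricted text is already in the model's context.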
Adam Dennison: So let me ask you a little different question around the training aspect of it. I'm a business user. I'm not a data scientist. I'm not an analyst. I'm not a developer. I'm a user. We hear about the guardrails and the governance and the training, and that's all important; totally understand that. But are you both at that stage yet where you're developing programs for, or Raj, looking at, the business users?
And thinking about how we make sure that they're being productive with this? How do we make sure they are driving efficiencies, whether they're making money or saving money? Are those disconnected? Are they together? Where does the business user in a specific business department fit in here, where they can really make the most of using AI to drive your business forward? And is there training there, or is that just trial and error?
Jay Bavisi: Yeah, so look, there are two parts to this. The usability of artificial intelligence in a business case is something that exposure, marketing, and market awareness are all about, right? Every time you flip an Instagram reel, or you get a TikTok, or you do your research, or even if you use AI prompt engineering, or talk to competitors, talk to your partners, you're going to realize someone's doing better, they're implementing AI better, right? It's the same way we discover software: we have a problem, and we try to find what problems can be solved by which software, right? And that's going to carry on happening. If you look at IT, as we discovered software and found solutions, security was never the first thing on our minds. It became an afterthought.
We thought about productivity, we talked about usability, we talked about getting ahead and making sure that we're competitive. And then we realized, oops, wait a minute, we have a problem. What about security? And then we went back and said, oh, we're going to have someone in security, a certified ethical hacker, who's going to come and secure our systems. Now, in AI, it's exactly the same world. So to your point, Adam, when you're looking at an AI solution, I think that's going to happen out of your natural ecosystem anyway. But the question is, as Raj correctly pointed out, how do we make sure that our ecosystem, our workforce, understands the risks behind it, the governance behind it, and thinks it through? Because these are business use cases. So business use cases need to go beyond the business use case. You need to think about the governance use case. You've got to think about the risk use case. You've got to think about the guardrails. You've got to think about, well, if something goes wrong,
How do I roll back? And to my point, I think that's the most important thing. As Raj said, you can implement an agent very quickly, right? Just yesterday I was taking a look at our research labs and we had a system that used more than 50 agents interconnected to solve a problem. It's complex but it's not impossible to do. But how do you govern that? And I think that's where the challenge is. Raj.
Raj Dubey: And the human has to stay in the loop in this governance model. You've got to have oversight on it. And I'll give an example. To your point, Adam, I think if you can identify small areas where you can optimize, you can show ROI, like time savings and other things. I can give you two examples. We have a lot of GPTs we have built on our AI Hub, which actually help our business owners, our marketing teams, our branding teams. When they do, for example, discovery sessions, they're able to take that and get optimized data out of it. So we use it very extensively. I'll give you another use case. It's a product that we are building for medical practices, for the doctor-patient interaction. Based on the recording and the RAG model that we have built and trained, it provides information to the doctors: I've already checked all the history of this patient, these are the new things, these are some of the other tests that you need to do, these are the medicines, these are the recommendations. But it cannot prescribe. It gives the options to the doctor to quickly select: okay, this is right, this is right. So the human always remains in control. I think that's where the differentiation will be.
Jay Bavisi: The one point, especially in healthcare, that I will say is that we've got to be very concerned about hallucinations, right? Different models tend to hallucinate. You don't want a hallucination in a healthcare situation, because sure, the doctors are going to prescribe medicines, but how do you know that the process the AI used to come to a conclusion was not hallucinated, right? And I think what I'm showing you is an example of how you'll actually be thinking about putting guardrails in, how you're thinking about risk, how you're thinking about governance. So we have to be very suspicious people, I guess. And that's the whole point of the training, right? How do you train people to really think twice?
Raj Dubey: So yes, absolutely. That's going to be very critical.
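The human-in-control pattern Raj describes, suggest but never prescribe, reduces to a simple gate: the model's output is only ever a proposal, and nothing is committed without a named clinician's sign-off. A sketch under those assumptions, with invented function and record names, not the actual product:

```python
def model_suggestions(patient_history):
    """Stand-in for the RAG model's output: proposals only, never orders."""
    return [f"Consider reviewing: {item}" for item in patient_history]

def commit_to_chart(suggestion, approved_by=None):
    """Nothing reaches the patient record without a named clinician."""
    if not approved_by:
        raise PermissionError("clinician approval required before committing")
    return {"entry": suggestion, "approved_by": approved_by}

proposals = model_suggestions(["HbA1c trend", "new medication interaction"])
record = commit_to_chart(proposals[0], approved_by="Dr. Lee")
print(record["approved_by"])  # prints: Dr. Lee
```

The point of the gate is auditability as much as safety: every committed entry carries the name of the human who accepted it, which is exactly the oversight trail governance reviews ask for.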
Jay Bavisi: You look nervous there, Adam. Are you nervous yet?
Raj Dubey: Yeah
Adam Dennison: I'm nervous that the world is spending $2 trillion this year on it.
Jay Bavisi: Okay.
Samara Lynn: This is a lot of information. Thank you.
Raj Dubey: And I'll add something here. I think, Adam, this applies to a couple of our IT leadership areas. To give an idea, one of the agents that we're building is for false positives. We get so many alerts in IT; we have something called alert fatigue. If you're monitoring systems, and Jay, you must be aware of it, you're getting these gazillion inputs. Sometimes our alert system goes haywire.
So that is where, again, the training comes into place. We are building into our roadmap an agent that analyzes everything and understands user behaviors and patterns. If I log in at 2 AM, it knows I will probably log in at 2 AM, because I'm the random type. But if Adam logs in at 3 AM into my system, no offense to you, Adam, it flags it: hey, what is he doing at 3 AM? That's not a normal pattern. And then it starts the proper alerting process. So I think if we can reduce some of that noise, find those small, small wins and use cases, and have AI help you be more efficient, that is something we can take to the executives and say, hey, this is what we can do with what we build. And you can show demonstrable efficiency and ROI around it.
But even today, if you do a poll, what is the ROI on AI? You'll get a 50-50 response.
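The per-user login pattern Raj sketches, normal for one person and anomalous for another, can be captured with a tiny per-user baseline. This is an illustrative toy, not the agent LT is building; production SIEM-style systems model far more signals than the hour of day:

```python
from collections import Counter

class LoginMonitor:
    """Flag logins at hours a user has rarely or never logged in at."""

    def __init__(self, min_seen=2):
        self.history = {}        # user -> Counter of login hours observed
        self.min_seen = min_seen

    def record(self, user, hour):
        self.history.setdefault(user, Counter())[hour] += 1

    def is_anomalous(self, user, hour):
        seen = self.history.get(user, Counter())
        return seen[hour] < self.min_seen   # new or rare hour -> alert

mon = LoginMonitor()
for _ in range(30):
    mon.record("raj", 2)                 # Raj routinely logs in at 2 AM

print(mon.is_anomalous("raj", 2))        # prints: False -- normal for Raj
print(mon.is_anomalous("adam", 3))       # prints: True -- no baseline for Adam
```

The noise reduction comes from the baseline being per-user: the same event is suppressed for one identity and escalated for another, which is exactly how the 2 AM versus 3 AM example plays out.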
Jay Bavisi: So you guys want me to scare you a little bit? Want me to share with you a fun story? You know, just on what Raj was saying, right? Unfortunately, Raj, a lot of what you're trying to build, SOCs already take care of that. Most of the SIEMs out there already take care of what you're describing. But let me share with you a real-life experience, right? Without naming any clients whatsoever.
Raj Dubey: Yes.
Jay Bavisi: So as you know, EC Council's real forte is ethical hacking. I founded the Certified Ethical Hacker standard. And really, my life over the last two and a half decades has been about creating certified ethical hackers, really helping the good guys hack into their own systems. Because when you hack into your own systems, you're able to identify vulnerabilities, plug them, and become secure. That's what my life has been. So now comes AI. I'll share with you some public information. We announced a hundred-million-dollar fund to invest in next-gen AI companies that do the best things out there to help companies get secure. And our very first investment was in a company called FireCompass. This is a US-patented technology, and what it did was very interesting.
It did everything it could to erase my 25 years of history. Because what I did was build ethical hackers out there to go in and hack into systems, right? And around the world, compliance requires every organization, I'm sure you know this, Raj, to do something called a penetration test. Every year you'll call a third party, and the people who show up are more often than not ethical hackers, and they'll come in, hack your systems, and tell you, Raj, these are the vulnerabilities and you've got to plug them. This used to happen once a year, and it's a global phenomenon. If you're a large bank, you do it once a quarter. This company built an agentic AI platform, and it could hack your organization ethically, not once a month, not once a quarter, but every day if you want it. 365 days, 24-7, always on. And it solved a major problem for organizations, because guess what? Every day, your organization's IT infrastructure changes. Somebody opens a new port, someone adds applications, someone downloads certain things, there's a new update happening. And this is something human beings cannot keep up with. But here's an example of AI being used by the good guys to defend. So we went to a Fortune 50 organization, and they said, wow, this is really interesting, Jay. We want to test human hackers versus this AI platform. So they picked the best teams that they had.
We went after this Fortune 50 organization, and the CIO said, we're going to have a rule of three: three rounds, and we're going to see what happens. It was best-of-breed human teams versus the platform. Round one was run. The human hackers found two critical vulnerabilities. The AI found six. But AI is learning, it's agentic, right? So we ran round two and changed the teams. Round two, the humans found three.
The AI found 12. There were 20 in total, by the way, right? Then we ran round three, and this is over a period of three weeks. The humans, a third team, found one; the results regressed. The AI found 16.
AI is reinventing cybersecurity as we speak. FireCompass is now able to do automated pen testing using agentic AI, using algorithms, at a speed and accuracy that is really out of human reach, without false positives, in a very short period of time. This is the era we are living in, right? So if the good guys are not going to get trained on how to use AI responsibly, to improve productivity, to improve security, then you're going to be facing a very different world, one that is highly risky for you and your organization.
Adam Dennison: So how do you keep the good guys ahead?
Jay Bavisi: So I think that's a colossal problem. I mean, that's something that we are always striving to do. We are going to need to really have the hunger to train our workforce right from the start, not as a patch job. Not like, okay, we want to implement a system today, so let's train them on the system, right? We really have to have workforce development embedded inside. We have to have a culture of security more than a culture of driving profits. We have to think about governance, we have to think about risk, and this starts from the top. We have to get there, because regulations are going to help, right? Customer requirements are going to help. Customers are going to come back and say, okay, great, you're using AI, but hang on a second. Before we talk about features, talk to me about the guardrails, talk to me about your regression testing, talk to me about the implementation. It's going to happen, but hopefully we are going to be far more proactive in getting trained.
Raj Dubey: If it is AI-assisted, then obviously we'll meet the timeline. But no, to your point, I think there's another thing there, Jay: because now you're able to find the vulnerability faster, you're able to patch it and fix it faster too, instead of waiting too long. You're not talking about a 30-day or 60-day lead; you're constantly monitoring and patching things.
So it can play on both sides. I see that as an advantage as well.
Adam Dennison: So I'll ask one final question, because we do serve the midmarket, and that example is eye-opening, right? That example, Jay, gets at trying to stay ahead, trying to make sure you've got the right governance in place and the right training in place. The Fortune companies have big budgets. They have teams set up for this. The midmarket doesn't.
Jay Bavisi: Absolutely.
Adam Dennison: How far behind, how risky is it for them? Are they going to simply have to look elsewhere, whether it's organizations such as yourselves or others, to help them and bring them in? How are they going to be able to realistically afford this and be able to get to those levels where they need to be to minimize risk as best they can?
Jay Bavisi: Yeah, look, Adam, you know, there was an era when human beings used to walk, and then someone found out that, wait a minute, we can ride a horse, and then, you know, humankind decided, all right, let's all ride horses. And then someone invented something and said, wait a minute, there's this thing called an engine, and we can actually use a car. And then we said, wait a minute, we can fly, right? At every juncture when there was an innovation in technology, humans actually evolved, right? So I think AI is one of those moments where we have to just be determined that this is an evolution and we need to evolve. So, to your question:
The IT midmarket, those people sitting in an IT world today: sure, you understand cloud, you understand networking, but this is the time to get AI skills and get certified, because that's one verification method where you are actually going to see a tremendous amount of demand, right? If we take a look at jobs today, the biggest shortages happening right now, I mean, cybersecurity used to be number one, but now it's AI, right?
Organizations are looking for people who have AI literacy, who speak the AI language. You understand what RAG stands for; it's not a piece of cloth you put on your floor when you spill milk, it's slightly more than that. You understand what prompt engineering is all about. You understand what hallucinations and biases are, right? And what tokens are, what LLMs are. So I think we're moving into an era, and this is an opportunity for IT folks to actually upskill themselves. And when they do, they will differentiate themselves very quickly. So I think it's a major opportunity for those who are really keen to grow. But you have to decide: I want to get off the horse, I want to get onto an engine. That's the determination that you have to make.
Adam Dennison: Raj, Jay, really appreciate you spending the time with us today, giving us your insight into where things are headed from an AI perspective, because, as you mentioned, it's changing. It'll have changed by the time we close out this podcast. So thanks so much for spending time with us. Really appreciate it.
Jay Bavisi: Thank you. Thank you very much for having us.
Raj Dubey: Thank you.
Jay Bavisi: Take care. Thank you.
Adam Dennison: Thank you.