AI Technology Partners Builds Custom AI Solutions For The Midmarket
AITP provides customized AI solutions focused on the midmarket. Our interview with the company’s CEO.
(John Treadway, CEO, AI Technology Partners, Inc.)
MES Computing recently spoke with John Treadway, the CEO of AI Technology Partners Inc. His company provides generative AI solutions, many for midsized organizations.
In the interview, Treadway speaks about the custom AI solutions his company provides, its customer base, how it can help midmarket companies build AI policies, and more.
Can you provide some background on your company and the services it provides the midmarket?
AI Technology Partners was a kind of restart of our previous business, but, effectively, we started in May of 2023, so we're coming up on two years of business.
Our focus is on building generative AI solutions for mid-tier enterprises. That means anywhere from a few hundred people to a few thousand people in terms of the size of the company.
We are very focused on engagement with the business, but we work with the IT leadership in a lot of companies ... working with senior executives and their business-line leaders, CEOs, chief marketing officers, heads of sales, heads of operations, etc., and their teams to deliver very specific generative AI capabilities that solve business problems within those functions.
Who is your typical customer?
We're doing some work in financial services and biotechnology and then just general industries. We're working with one client right now, which is in the auto parts business. They're a spin-out, and they're owned by a private equity firm. We actually do really well with private equity-backed companies in particular, which, of course, are mostly mid-tier enterprises ... What we're building is highly secure and focused on delivering ... so you could use ChatGPT, or you could use a solution that we build where all of the data is maintained within ... the company's own infrastructure and storage, and you have security controls over it. It's not going out to OpenAI and risking the business.
Let’s say I am an owner of a mid-sized tech news site. And we wanted to build a chatbot where a user could go to our site and say, ‘I want to see all articles from years 2000-2010 on news about Microsoft.’ Is that something your company can build? And are you partnering with OpenAI or using other LLMs? Are you building the LLM for us?
There are two parts to the answer ... one is your use case, and the other is answering the direct question.
The direct question is, we don’t build models. We use the foundational models and the frontier models that are out there from OpenAI, but we can also work with Gemini [from] Google, we can work with Anthropic, we can work with ... models from Meta, which is Facebook. We are model-vendor agnostic.
We are mostly working within Microsoft Azure for our deployments. That's currently where everything we're doing is deployed. We're focused on that part of the ecosystem, and a lot of that has to do with the things we're doing ... knowledge work and connecting to SharePoint and other related internal systems that are already running in Microsoft.
So there's no network traffic that has to go across the firewall and all of that.
The use case that you talked about is specifically externally facing, and generative AI is particularly not valuable or interesting in a general sense for external usage. What I mean by that is, you should never take a chatbot, put it on the website and give it direct access to the model. The reason for that is that people can ask it questions, and you don't know what they're going to ask it. And these models know everything. Somebody could do what's called a prompt injection attack ... it's a security issue.
Now, if somebody said, 'Hey, I want to find articles. Here's the subject,' and I had a form for that ... you can do that with any search engine at this point, so there's no real value in doing that with generative AI. What you could do, though, is ... on all of your content, your articles, you could have a button that says, generate a summary of all of the articles on this topic with links to those articles. And that you could do, because there's no prompt window, there's just a button, and you're controlling what the back end of that is doing, and we can control that. So, you could have it generate [a] generative AI summary of articles.
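As a rough illustration of that "button, not prompt window" pattern, the sketch below shows one way such a back end could be wired up. It assumes an Azure OpenAI chat deployment (in line with the Azure-based deployments Treadway mentions); the topic list and archive lookup are hypothetical placeholders, not anything AITP has described.

```python
# Minimal sketch of the "button, not prompt window" pattern: the visitor only
# clicks a topic button; the prompt is assembled entirely on the server.
# The Azure endpoint, deployment name and archive lookup are placeholders.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

ALLOWED_TOPICS = {"microsoft", "cloud", "security"}  # fixed list wired to the buttons

def fetch_articles(topic: str) -> list[dict]:
    # Stand-in for the site's own archive query (database, search index, etc.).
    return [{"title": f"Example article about {topic}", "url": "https://example.com/1"}]

def summarize_topic(topic: str) -> str:
    # No free-form visitor text ever reaches the model; an allow-listed topic
    # only selects which of the site's own articles get summarized.
    if topic not in ALLOWED_TOPICS:
        raise ValueError("unknown topic")
    listing = "\n".join(f"- {a['title']} ({a['url']})" for a in fetch_articles(topic))
    response = client.chat.completions.create(
        model="gpt-4o",  # the Azure OpenAI deployment name
        messages=[
            {"role": "system",
             "content": "Summarize the articles below and include links to each one."},
            {"role": "user", "content": listing},
        ],
    )
    return response.choices[0].message.content
```

A topic button on the site would simply post its topic name to a route that calls summarize_topic; because the model only ever sees server-assembled text, visitors have no prompt window to inject into.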
What are the biggest use cases your customers are using your services for?
The first thing is, we're giving people a ChatGPT clone capability we call enterprise GPT, which is a ChatGPT-like system that runs within their own Azure environment. It's completely secure, and they control the data, so all of the data is locked down and secured with your own keys. That is a very secure internal ChatGPT clone, and it has access to all the same models and more.
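For context on what "runs within their own Azure environment" can look like in practice (a general sketch, not AITP's actual implementation), the snippet below sends a chat request to an Azure OpenAI deployment that lives inside the company's own subscription, authenticating with Microsoft Entra ID rather than a shared API key. The endpoint and deployment names are placeholders.

```python
# Sketch only: a chat completion against an Azure OpenAI deployment hosted in
# the company's own subscription, so prompts and responses stay in that tenant.
# Endpoint and deployment names are placeholders, not real resources.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI  # pip install openai azure-identity

# Keyless auth: a token from the company's own Entra ID tenant, no shared API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://example-company.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="enterprise-gpt-4o",  # the deployment name created in the tenant
    messages=[{"role": "user", "content": "Summarize our Q3 operations review."}],
)
print(response.choices[0].message.content)
```

Because the resource sits inside the company's own subscription, standard Azure controls such as role-based access and customer-managed encryption keys can be applied to it, which is the kind of lockdown Treadway describes.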
Another one is ... IT operations management ... MSPs are a perfect example: they have a lot of clients, they've got outages, they've got root-cause analysis. They have to communicate with the clients, particularly if that has end-customer impact.
So a system that you're running for your client is actually servicing their clients, and now it's down. It's causing revenue outages or other impact. I want to know what happened. I want to solve that as quickly as possible. We can do that. That's something where we can help. We don't replace the other approaches around predictive analytics and machine learning, which also have value in that same process, but it's yet another tool that you can use, and it has a lot of value.
Marketing is a big use case ... content generation, content planning, writing, editing, reviewing, summarizing, creating campaign plans, evaluating campaign outputs, all of that kind of stuff. Another great use case is in sales: generating RFP [responses].
We have a client [that's] using one of the systems that we deployed, and even without it being optimized for RFPs, they cut the time to respond to an RFP by 80 percent.
How long does it typically take you to build a custom solution for a customer? And can you give us an idea of the expense?
The expense really depends on the size of the client, how many users they have, how sophisticated it is, and what kind of integrations we're doing. So, it could be anything from, you know, $50,000 to $250,000 to build something. It could take anywhere from a couple of weeks to, you know, three to four months to deliver it, depending on that. And there's a managed service component, so there's an ongoing relationship that we build, so that once you have something in place, it expands [as] you have more needs.
Some midmarket leaders we’ve spoken with have expressed concerns with creating a solid AI policy. Is that something your company can help with?
We have a standard AI policy package: generative AI principles, policies and guidelines. We just give that to anybody who asks for it. We don't charge for that, so we can give them that, and we can help them customize it. I'm adding something now because both OpenAI and Anthropic's Claude have come out with solutions that allow the model to control your desktop, log into websites and do stuff. Huge risk, massive risk. We're updating the policies to handle that. But yes, we can help you with your policies and procedures and guidelines.