Amanda Hendley, Planet Mainframe:
I’m here today with Anthony Doro, a distinguished engineer with over three decades of experience and a proven track record of shaping the future of enterprise technology. I’m also joined by Leah Sokalov, a senior manager in product management at BMC, leading the generative AI initiative across the AMI portfolio.
Amanda Hendley, Planet Mainframe:
Leah, building on your talk: for large mainframe organizations, the challenge of scattered institutional knowledge isn’t new. But how has the widening mainframe skills gap made addressing this knowledge-expert crisis more urgent now than ever before?
Leah Sokalov:
Yeah, so the problem of scattered knowledge across the mainframe is not new. But lately, it’s become more urgent because of the widening skills gap and the shortage of new people entering the mainframe. The complexity has also grown tremendously in recent years. So even though the problem isn’t new, it’s more urgent than ever.
We’re addressing that by bringing in AI assistants, these knowledge experts, to help leaders and organizations work through this challenge.
Amanda Hendley, Planet Mainframe:
Anthony, we often hear about generative AI’s potential but also about its limitations for mainframe teams. What are the non-negotiable requirements to make GenAI truly useful for real operations? If we want to move beyond generic answers and deliver expert-level intelligence, how do specialized language models play a crucial role?
Anthony Doro:
Yeah, that’s a great question. At this point, I think we all realize language models alone are not enough for enterprise solutions. We have to connect the language models to your workflows and your processes. The LLM basically has to become an extension of your enterprise environment.
In addition, harnessing all your enterprise knowledge and augmenting it with the LLM is essential. Everything you document (playbooks, processes, workflows) must be ingested and processed and live side by side with the LLM.
We also have to connect the LLM to your systems. It needs real-time data to bring real substance to enterprise generative AI. That’s what takes us beyond generic answers. It becomes infused into your enterprise processes.
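The pattern Anthony describes, combining ingested enterprise knowledge with live system data before the LLM is called, can be sketched in a few lines. This is a minimal illustration, not any real product: the knowledge base entries, the keyword retriever standing in for a vector search, and the metrics function are all hypothetical.

```python
# Minimal retrieval-augmented prompt assembly: documented knowledge
# (playbooks, runbooks) plus real-time system data are attached to the
# user's question before it reaches the LLM. All names here are invented
# for illustration.

KNOWLEDGE_BASE = [
    "Playbook: if CICS region CPU exceeds 90%, check for looping transactions.",
    "Runbook: DB2 deadlock alerts require reviewing lock timeout settings.",
]

def retrieve(question: str, docs: list, top_k: int = 1) -> list:
    """Naive keyword-overlap retrieval standing in for a vector search."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def get_live_metrics() -> str:
    """Stand-in for a real-time feed from the monitored system."""
    return "CICS region CICSPRD1 CPU: 94%"

def build_prompt(question: str) -> str:
    """Combine retrieved knowledge, live data, and the question."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (
        f"Context from enterprise knowledge:\n{context}\n"
        f"Live system data: {get_live_metrics()}\n"
        f"Question: {question}"
    )

prompt = build_prompt("Why is CICS CPU high?")
print(prompt)
```

In a production system the retriever would be a semantic search over the ingested documents, but the shape is the same: the model never answers from its weights alone.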
Amanda Hendley, Planet Mainframe:
So shifting a bit from generative AI and how AI helps us: let’s talk about how AI can start doing things for us. Can you talk about AI agents and agentic workflows and what that means for mainframe operations?
Anthony Doro:
Sure. Absolutely. If we look at where we are today with a lot of solutions, we’re still in reactive-based models. You have a chat interface, the user sits there, types questions or prompts, and gets a response. They’re reacting to some situation, maybe responding to an email, or dealing with an alert in the AIOps space. You process the information yourself, then ask the AI for help or guidance. That’s reactive.
With AI agents, we’re moving to a proactive model. These agents work 24x7 on defined use cases. They stay on top of situations, detect problems early, give you awareness, and notify you about the situation or next steps you should take.
If we take a step back, agents are software entities that can observe their environment, make decisions, and take actions. That’s the shift: to a proactive model with agents.
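That observe/decide/act definition can be made concrete with a toy monitoring agent. The metric names, threshold, and suggested next step below are invented for illustration; a real agent would run continuously against live feeds rather than over a fixed list.

```python
# Toy proactive agent: observe a stream of readings, decide whether a
# threshold is crossed, and act by producing a notification with a
# suggested next step. All names and values are hypothetical.

def run_agent(readings, threshold=90):
    notifications = []
    for reading in readings:            # observe the environment
        if reading["cpu"] > threshold:  # decide
            notifications.append(       # act: notify with next steps
                f"{reading['system']}: CPU {reading['cpu']}% - "
                "check for looping transactions"
            )
    return notifications

alerts = run_agent([
    {"system": "SYSA", "cpu": 42},
    {"system": "SYSB", "cpu": 95},
])
print(alerts)  # only SYSB crosses the threshold
```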
Leah Sokalov:
Yeah, and that’s the real promise of agents and AI: helping accelerate productivity and offloading the cognitive load we place on our mainframe practitioners and experts.
Anthony Doro:
Right. These agents connect to the enterprise knowledge we’re collecting. They connect to other systems. They connect to real-time data. They harness all of that information to do the work they need to do.
There are two kinds of workflows we hear about: prescriptive workflows, where AI engineers map out exactly what agents must do, and true agentic workflows. In agentic workflows, the agents work together, collaborate, and decide the best next steps as they move forward.
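The contrast between the two workflow styles can be sketched in toy form. The step names (`diagnose`, `remediate`) and the state shape are invented; the point is only the control flow: in the prescriptive case the engineer fixes the order in advance, while in the agentic case the next step is chosen dynamically from the current state.

```python
# Prescriptive vs. agentic control flow, in toy form. Step names and
# state keys are hypothetical.

def diagnose(state):
    state["diagnosed"] = True
    return state

def remediate(state):
    state["fixed"] = state.get("diagnosed", False)
    return state

def prescriptive_workflow(state):
    # The engineer hard-codes the exact step order in advance.
    for step in (diagnose, remediate):
        state = step(state)
    return state

def agentic_workflow(state):
    # Each iteration, the agent picks whichever step the state calls for,
    # until the goal condition is met.
    while not state.get("fixed"):
        step = remediate if state.get("diagnosed") else diagnose
        state = step(state)
    return state

print(prescriptive_workflow({}))
print(agentic_workflow({}))
```

Both reach the same end state here; the difference shows up when conditions vary at runtime and a fixed script can no longer anticipate the right order.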
Amanda Hendley, Planet Mainframe:
For agents to operate in a high-stakes mainframe environment, we’re placing a lot of trust in their ability to make decisions. That’s going to raise security concerns. Can you talk about the considerations for security?
Anthony Doro:
Yeah. There’s a lot of hype around agents today, and that’s good; we’re promoting them in the direction we want. But at the end of the day, these agents are services with artificial intelligence built into them. As an architect, I look at them like any other service we create.
It comes down to a zero-trust architecture, least-privilege access, role-based access: everything you’d apply when mapping out any system. We follow those same patterns with AI. And then we add additional AI-related guardrails around the entire agent to make sure it’s safe and secure.
Leah Sokalov:
Yeah. I like to think of AI agents as interns. When you onboard a new person, you don’t give them everything on day one. You slowly build trust. You train them. You make sure what they’re doing complies with your guardrails and your security measures. It’s the same here.
Anthony Doro:
Exactly. We look at it as micro-access. You give agents access only to what they need to do their job, nothing more, nothing less. And you can isolate and guardrail the entire experience.
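One minimal way to picture micro-access is a per-agent tool allowlist checked before every call, denying by default. The agent and tool names below are hypothetical, not from any real product.

```python
# Per-agent allowlist enforcing least privilege: an agent can invoke only
# the tools it was explicitly granted; everything else is denied by
# default. Agent and tool names are invented for illustration.

ALLOWLISTS = {
    "monitoring-agent": {"read_metrics"},
    "ticketing-agent": {"read_metrics", "open_ticket"},
}

def invoke(agent: str, tool: str) -> str:
    """Guardrail check before any tool call: deny unless explicitly allowed."""
    if tool not in ALLOWLISTS.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    return f"{agent} called {tool}"

print(invoke("monitoring-agent", "read_metrics"))  # allowed
try:
    invoke("monitoring-agent", "open_ticket")      # denied: not granted
except PermissionError as e:
    print("denied:", e)
```

The deny-by-default lookup is the zero-trust part: an agent missing from the table, or a tool missing from its set, is refused without any special case.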
Amanda Hendley, Planet Mainframe:
Leah, looking forward, how do you see conversational AI knowledge experts and agents not just modernizing the mainframe, but reshaping the future of mainframe teams, maybe even improving resilience and turning legacy knowledge into a strategic advantage?
Leah Sokalov:
We talked about the knowledge gap and skill shortage. And we already have AI assistants that are transforming the way we interact with systems: z/OS, CICS, IMS, whatever you have. These interactions are becoming easier.
But looking forward, imagine having all your outages, tickets, incidents, and team knowledge codified and accessible. It doesn’t matter if someone has twenty years of experience or twenty days. Everyone gets access to the same vast knowledge.
That’s true resilience. Knowledge, like security, is one of the most critical assets we have for mainframe resilience moving forward. And it’s exciting.
Anthony Doro:
Yeah. It’s moving all of us beyond the reactive chat-based experiences we have today into truly agentic models. That’s going to open endless opportunities to do really creative things in this space.
Leah Sokalov:
The technology is there. The tools are there. The platforms are there. We just need to keep up and see how we combine everything for the benefit of our organizations to boost productivity and serve as actual assistants to what we do in the mainframe space. A lot of innovation is coming. It’s exciting.
Amanda Hendley, Planet Mainframe:
Well, thank you both for joining me today.
Anthony Doro:
Thanks for having us.
Leah Sokalov:
Thanks.
