Customers will be able to offload transactional workloads from the main CPU to the accelerator for machine learning, AI, or generative AI evaluation and handling, Dickens said, which makes sense both operationally and for scalability.
“In addition to code generation, this scalable mainframe AI platform (chip/card/software) would be good for a number of applications, including credit ratings, fraud detection, compliance, financial settlements, and document processing and simulation,” said Patrick Moorhead, founder, CEO and chief analyst of Moor Insights & Strategy.
“If you’re an enterprise and have a mainframe, you likely are using it for mission-critical apps that require the highest level of resilience and security. Previously in AI, enterprises would move the data off the mainframe to a GPU server, do work on it, then send it back to the mainframe,” Moorhead said. “That’s neither efficient nor fast, and it’s less secure for apps like credit ratings, fraud detection, and compliance.”
IBM’s Jacobi also talked about how code security and compliance will benefit from the new AI support.
“Many clients run tens of millions, or even hundreds of millions, of lines of code in their applications, and they are very security-conscious and sensitive about the code base,” Jacobi said. “The code base itself is sort of a codified business process of how to run an insurance company, or how to run a bank. So, of course, that is very valuable IP to them.”
“When customers do AI on those kinds of code structures, they would prefer to do that directly within the secure environment of the mainframe, rather than doing that analysis elsewhere. And now they can,” Jacobi said.