With summer winding down, it’s time for a generative AI status check.
GenAI interest remains strong, as 81% of 4,470 global business leaders polled by ServiceNow have pledged to increase spending on AI over the next year. What are they focusing on?
CEOs told Deloitte their organizations are using GenAI to increase efficiencies (57%), discover new insights (45%) and accelerate innovation (43%). Those priorities, set at the top, then cascade down through the organization.
Meanwhile, hyperscalers engaged in an AI arms race are investing in global datacenter buildouts and stockpiling GPU chips in service of LLMs, as well as the chat apps, copilots, tools, and agents that comprise current GenAI product categories.
As an IT leader, deciding which models and applications to run, as well as how and where to run them, is a critical decision. And while LLM providers are hoping you choose their platforms and applications, it’s worth asking yourself whether that is the wisest course of action as you seek to keep costs down while preserving security and governance for your data and platforms.
Beware the cloud-first playbook
Hyperscalers are scaling out on the assumption that most people will consume their LLMs and applications on their infrastructure and pay for ancillary services (private models or other sandboxes boasting security and governance).
History suggests hyperscalers, which give away basic LLMs while licensing subscriptions for more powerful models with enterprise-grade features, will find more ways to pass along the immense costs of their buildouts to businesses.
Can you blame them? This operating model served them well as they built out their cloud platforms over the last 15 years. IT leaders leaned into it and professed themselves “cloud first,” a badge of honor that cemented their legacies as innovators among their bosses and boards.
In recent years, organizations have learned the value isn’t so black and white. The public cloud offers elasticity and agility, but it can also incur significant costs for undisciplined operators. As a result, organizations migrated workloads to on-premises estates, hybrid environments, and the edge.
While hyperscalers would prefer you entrust your data to them again, the concerns about runaway costs are compounded by uncertainty about models, tools, and the risks of feeding corporate data into their black boxes. No amount of fine-tuning or RAG added to the mix will make organizations comfortable offloading their data.
All this adds up to more confusion than clarity.
Your data, your datacenter, your rules
The smart play is to place some bets that can help move your business forward.
Is your priority automating IT workstreams? LLMs can help generate code and basic programs. How about helping sales and marketing create new collateral? GenAI chat applications and copilots are perfect for this, too. Maybe you want to create avatar-based videos that communicate in multiple languages? Of course, GenAI can also help with that.
As you pursue such initiatives, you can leverage the shift toward more efficient processors and hardware, as well as smaller open-source models running on edge devices.
Business and regulatory requirements will also influence which platforms and architecture you pick. Yet you can control your own destiny by avoiding some of the same pitfalls associated with public cloud platforms.
It turns out that deploying LLMs, from small to large, on premises with open-source models can be more cost-effective, according to research from Principled Technologies and Enterprise Strategy Group. In addition to cost savings, organizations benefit from the security and governance protections afforded by running solutions in their own datacenters—essentially bringing AI to their data. Moreover, organizations can create more guardrails while reducing reputational risk.
Ultimately, you know what your business stakeholders require to meet desired outcomes; your job is to help deliver them. Even so, GenAI is new enough that you’re not going to have all the answers.
That is why Dell Technologies offers the Dell AI Factory, which brings together AI innovation, infrastructure, and a broad ecosystem of partners to help organizations achieve their desired AI outcomes. Dell’s professional services team will help organizations prepare and synthesize their data and help them identify and execute use cases.
Learn more about the Dell AI Factory.