For more AI impact, fortify four key pillars of your AI strategy: vision, value realization, risk and adoption plans.
Generative AI (GenAI) is just one type of AI that executives are suddenly eager to try in their businesses, but capturing its value and managing its risks sustainably requires a sound, holistic and achievable AI strategy.
Consider the four key elements of any AI strategy, detailed below.
Building an AI strategy inclusive of GenAI requires a rigorous approach — from developing a business-driven vision to planning which initiatives to adopt and why.
Generative AI is suddenly on everyone’s radar, but some organizations already have extensive experience and success in deploying AI techniques across multiple business units and processes. Gartner research shows these mature AI organizations represent just 10% of those currently experimenting with AI, but would-be GenAI adopters can learn a lot from them.
Generative AI has the potential to radically transform existing economic and social frameworks, as did the internet and earlier innovations such as electricity. The question for your business is how AI will support enterprise ambitions and drive stronger results.
Deployed well, GenAI will become a competitive advantage and differentiator, building on the ability of AI in general to automate repetitive and tedious tasks and generate new insights, ideas and innovations with predictive analytics, machine learning (ML) and other AI methods.
Generative AI could significantly impact shareholder value by creating new and disruptive opportunities to drive key enterprise goals.
A recent Gartner survey of more than 600 organizations that have deployed AI shows those with the widest, deepest and longest experience with AI do not measure success by project volume, tasks completed or output. Instead, they:
Focus more on business metrics than financial metrics, and follow specific attribution models and ad hoc measures tied to each use case
Benchmark both internally and externally
Identify metrics early, and measure the success of AI use cases quickly and consistently
Business metrics include those focused on:
Business growth, e.g., cross-selling potential, price increases, demand estimation, monetization of new assets
Customer success, e.g., retention measures, customer satisfaction measures, share of customer wallet
Cost-efficiency, e.g., inventory reduction, production costs, employee productivity, asset optimisation
Gartner research separately shows that organizations where the AI team is involved in defining success metrics are 50% more likely to use AI strategically than organizations where the team is not involved. When selecting metrics, the AI team should include feedback from the groups that manage data, business analysts, domain experts, risk management leaders, data scientists, and IT leaders and developers.
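As an illustration of tying attribution metrics to each use case, here is a minimal sketch in Python of a hypothetical metrics registry. The use cases, metric names, baselines and targets are invented for illustration; they are not Gartner-defined measures.

```python
from dataclasses import dataclass

@dataclass
class UseCaseMetric:
    """One business metric tied to a single AI use case."""
    name: str        # e.g., "cross-sell conversion rate"
    category: str    # "business growth", "customer success" or "cost efficiency"
    baseline: float  # value measured before the use case went live
    target: float    # value the business case commits to
    current: float   # latest measured value

    def progress(self) -> float:
        """Share of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return 0.0 if gap == 0 else (self.current - self.baseline) / gap

# Hypothetical registry: each AI use case carries its own attribution metrics.
registry = {
    "genai_product_descriptions": [
        UseCaseMetric("cross-sell conversion rate", "business growth", 0.021, 0.030, 0.026),
        UseCaseMetric("content cost per SKU", "cost efficiency", 14.0, 8.0, 10.5),
    ],
    "support_ticket_summarization": [
        UseCaseMetric("customer satisfaction (CSAT)", "customer success", 4.1, 4.5, 4.3),
    ],
}

for use_case, metrics in registry.items():
    for m in metrics:
        print(f"{use_case}: {m.name} at {m.progress():.0%} of target")
```

Reviewing such a registry early and on a fixed cadence supports the practice described above of identifying metrics early and measuring use cases quickly and consistently.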
Government regulations and frameworks around AI are starting to emerge, so be aware of specific regulations in relevant jurisdictions. As AI usage continues to trigger questions about ethics and responsibility, new regulation may come in response to shifting public sentiments about AI use. In general, though, prepare for major types of risks, including:
Regulatory. AI poses legal risks by potentially exposing organizations to lawsuits over copyrighted or protected content, information and data. Regulations are changing quickly, so track local and jurisdictional AI rules to stay compliant with governing policy. Also watch for industry-specific regulations, such as those in life sciences and financial services.
Reputational. AI can amplify bias and create a “black box” — an AI system with no user visibility into inputs and operations. Vendors that do not provide transparency on training datasets risk harmful outputs. Untested AI services can also pose risks through poor decision making and/or execution of tasks. Organizations need to build robust guardrails to prevent loss of intellectual property or customer data when building or buying generative AI services.
Competencies. AI requires a unique set of skills that must be intentionally sourced through upskilling existing talent or from academia or startups. Skills in areas such as prompt engineering and responsible AI will be in growing demand in the near term.
AI threats and compromises (malicious or benign) are continuous and constantly evolving, so set principles and policies for AI governance, trustworthiness, fairness, reliability, robustness, efficacy and privacy. Organizations that don’t are far more likely to experience negative AI outcomes and breaches: models that don’t perform as intended, security and privacy failures, financial and reputational loss, and harm to individuals.
The Gartner AI TRiSM (trust, risk and security management) framework covers solutions, techniques and processes for model interpretability and explainability, privacy, model operations and adversarial attack resistance, protecting both the enterprise and its customers. We advocate standing up a dedicated, cross-functional team or task force, including legal, compliance, security, IT and data analytics teams and business representatives, to gain the best results from every AI initiative.
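Guardrails of the kind described above can start small. The following is a minimal sketch, assuming a simple pre-screening step before any prompt is sent to an external GenAI service; the blocked patterns and the send_to_genai_service stub are hypothetical placeholders, not part of any vendor API or of the AI TRiSM framework itself.

```python
import re

# Hypothetical patterns for data that must never leave the organization.
# Real guardrails would be broader (and tested); these are illustrative only.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal project code": re.compile(r"\bPROJ-\d{4}\b"),  # invented naming convention
}

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt contains blocked data; otherwise return it unchanged."""
    findings = [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    if findings:
        raise ValueError(f"Prompt blocked by guardrail: contains {', '.join(findings)}")
    return prompt

def send_to_genai_service(prompt: str) -> str:
    """Placeholder for the organization's approved GenAI client call."""
    return f"[model response to: {prompt[:40]}...]"

def ask_external_model(prompt: str) -> str:
    safe_prompt = screen_prompt(prompt)  # guardrail runs before any data leaves
    return send_to_genai_service(safe_prompt)

if __name__ == "__main__":
    print(ask_external_model("Summarize our public press release about the product launch."))
    # ask_external_model("Email jane.doe@example.com the PROJ-1234 roadmap")  # would raise
```

In practice a check like this would sit alongside output filtering, logging and human review rather than replace them.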
When it generates new versions of content, strategies, designs and methods by learning from large repositories of original source content, generative AI can introduce risks such as:
False outputs. Generative AI can be unstable and erroneous in reasoning and fact, fail to fully comprehend context, have limited explainability and traceability, and be biased.
Security. Currently, any confidential information entered into public applications is stored and can be used to train new versions of the model. Sensitive data and intellectual property can become available to users outside the organization, including malicious actors.
Legal. Generative AI can present legal risks associated with intellectual property and privacy concerns, including copyright infringement, trade secret misappropriation, data privacy, model bias and model security.
New tools like ChatGPT have turbocharged interest in the potential of AI, but to capture their value, executives need to look more broadly at business value, risk, talent and investment priorities and prepare for the potential disruption to existing business models and strategies.
To date, AI business value has largely been generated from one-off solutions. Getting more value at scale, including from GenAI initiatives, may require deep business process changes; new skill sets, roles and organizational structures; and new ways of working. Failing to change will likely reduce your ability to capture the opportunities you identify.
Map out how your organization will transform processes and systems and upskill people as GenAI becomes integrated into daily work. Deploying AI in a mindful and future-focused way will be the difference between long-term success and potential disaster.
Gartner’s strategic assumptions include:
By 2026, over 100 million people will engage robocolleagues (synthetic virtual colleagues) to contribute to enterprise work.
By 2033, AI solutions, introduced to augment or autonomously deliver tasks, activities or jobs, will result in over half a billion net new human jobs.
Identify issues that could slow adoption of GenAI projects or impede your ability to capture their value. Map out solutions and actions and assign an executive owner to champion the organizational change required. For example, if your organization lacks the data literacy needed to drive AI projects, incorporate executives (not just employees) into data literacy training and exercises. Make the chief data and analytics officer (CDAO) responsible for driving the program and ensuring other executives attend.
In selecting use cases for AI, including those employing GenAI, line-of-business stakeholders should be able to clearly articulate the tangible business benefits they expect by asking:
What problem is the business trying to tackle?
Who is the primary consumer of the technology?
What business process will host that AI technique?
Which of the subject matter experts from the lines of business can guide the development of the solution?
How will the impact of implementing the technology be measured?
How will the value of the technology be monitored and maintained? And by whom?
Engaging in a comprehensive AI strategy without first experimenting with its component techniques puts the cart before the horse.
Follow these five steps to introduce AI techniques:
Use cases: Build a portfolio of impactful, measurable and quickly solvable use cases.
Skills: Assemble the talent pertinent to the use cases.
Data: Gather the appropriate data relevant to the selected use cases.
Technology: Select the AI techniques linked to the use cases, skills and data.
Organization: Structure the expertise and accumulated AI know-how.
This five-step formula is a tactical approach to the introduction of AI techniques, favoring a quick time-to-value perspective. It’s not a strategic, longer-term outlook.
Step 1 — identifying the most valuable use cases — should target concrete improvement projects coupled with tangible business outcomes. Feasibility is critical.
Typically, returns are higher when risk is high and feasibility is low, but projects that are impossible to accomplish with available technologies and data aren’t worth pursuing regardless of the apparent business value.
Feasibility criteria include:
Technical. How well can existing, state-of-the-art technology options deliver the stated business use case?
Internal. Considerations such as (lack of) culture, leadership, buy-in, skills and ethics.
External. Considerations such as (lack of) regulations, social acceptance and external infrastructure.
A use case that promises outstanding business value and is also easily feasible is either a breakthrough or a great opportunity the rest of the market is missing.
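To make the value-versus-feasibility trade-off explicit, the sketch below scores a small, hypothetical portfolio of use cases against business value and the technical, internal and external feasibility criteria above. The use cases, equal weights and multiplicative priority formula are assumptions for illustration, not a prescribed Gartner method.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_value: float  # 0-10: expected contribution to business outcomes
    technical: float       # 0-10: how well current technology can deliver it
    internal: float        # 0-10: culture, leadership, buy-in, skills, ethics
    external: float        # 0-10: regulation, social acceptance, infrastructure

    def feasibility(self) -> float:
        # Equal weights are an assumption; adjust them to your organization.
        return (self.technical + self.internal + self.external) / 3

    def priority(self) -> float:
        # Multiplicative on purpose: high value alone cannot rescue near-zero feasibility.
        return self.business_value * self.feasibility()

portfolio = [
    UseCase("GenAI marketing copy drafts", business_value=6, technical=8, internal=7, external=8),
    UseCase("ML demand forecasting", business_value=8, technical=7, internal=5, external=7),
    UseCase("Fully autonomous claims settlement", business_value=9, technical=3, internal=4, external=2),
]

for uc in sorted(portfolio, key=lambda u: u.priority(), reverse=True):
    print(f"{uc.name}: feasibility {uc.feasibility():.1f}, priority {uc.priority():.1f}")
```

Sorting by such a score is a starting point for the prioritization discussion, not a substitute for it.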
AI is very data-intensive, and while you can employ GenAI without integrating applications into your data stack, you won’t get the most out of AI without an enabling data strategy.
Articulating clear data management and governance requirements, such as expectations for data quality and trust, lowers cost of data acquisition and helps you find and capture the data you need to power your AI.
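One way to make data quality and trust expectations concrete is to write them down as testable checks that a dataset must pass before it feeds an AI use case. The thresholds, column names and dataset shape below are hypothetical, a sketch of the idea rather than a standard.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical quality expectations for a customer-transactions dataset.
EXPECTATIONS = {
    "required_columns": {"customer_id", "amount", "timestamp"},
    "max_null_fraction": 0.02,     # at most 2% missing values overall
    "max_age": timedelta(days=1),  # data must be refreshed at least daily
}

def check_dataset(columns: set[str], null_fraction: float, last_refreshed: datetime) -> list[str]:
    """Return the list of violated expectations; an empty list means the data is usable."""
    violations = []
    missing = EXPECTATIONS["required_columns"] - columns
    if missing:
        violations.append(f"missing columns: {sorted(missing)}")
    if null_fraction > EXPECTATIONS["max_null_fraction"]:
        violations.append(f"null fraction {null_fraction:.1%} exceeds threshold")
    if datetime.now(timezone.utc) - last_refreshed > EXPECTATIONS["max_age"]:
        violations.append("data is stale")
    return violations

# A dataset missing a column, with too many nulls and refreshed three days ago fails all checks.
print(check_dataset({"customer_id", "amount"}, 0.05,
                    datetime.now(timezone.utc) - timedelta(days=3)))
```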
Also see: “Key Success Factors in Any Data and Analytics Strategy,” “Modernize Data Management to Increase Value and Reduce Costs” and “Becoming a Data-Driven Organization.”