Addressing six key generative AI adoption priorities to set the stage for success.

In this article:

Organizations at the forefront of generative AI adoption are addressing six key priorities to set the stage for success: managing the AI risk/reward tug-of-war, aligning their new generative AI strategy with existing digital and AI strategies, thinking big while encouraging experimentation across the organization, looking strategically at productivity gains, considering impacts on workers, roles, and skills-building, and teaming up and collaborating with their ecosystems.

The excitement and anxiety generated by AI in the global business environment and individual organizations are strikingly parallel. While surging market capitalizations for early AI leaders provide financial evidence of the opportunity investors and markets see in generative AI, experts in the field are voicing existential angst about the potentially significant unintended consequences that could emerge as the reach of AI grows. In many companies, there’s a tug-of-war between executives and managers seeking to rapidly tap the potential of generative AI for competitive advantage and technical, legal, and other leaders striving to mitigate potential risks.

Achieving a healthy tension often starts with a framework for adopting AI responsibly. At Inet, we developed such an approach several years ago and continue evolving it as the nature of AI opportunities and risks changes. Practical safeguards and guidelines help organizations move forward faster and with more confidence. Open-minded, agile leadership is critical: risk-minded leaders deliver better, faster guidance as they internalize the significance of the generative AI revolution, and opportunity-seekers are well served by immersing themselves in what can go wrong so they can avoid costly mistakes.

One company recognized the need to validate, root out bias, and ensure fairness in the output of a suite of AI applications and data models designed to generate customer and market insights. However, due to the complexity and novelty of this technology and its reliance on training data, the only internal team with the expertise needed to test and validate these models was the same team that had built them. The near-term result was stasis.
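For teams standing up that kind of independent validation, even simple checks run outside the model-building team can surface problems early. The sketch below shows one such check, a disparate-impact screen over model-driven decisions; the field names, sample data, and four-fifths threshold are illustrative assumptions, not the company's actual test suite.

```python
# Minimal sketch of an independent fairness screen over model decisions.
# Field names ("segment", "approved"), the sample records, and the 80%
# threshold are illustrative assumptions only.
from collections import defaultdict

def selection_rates(records, group_key="segment", outcome_key="approved"):
    """Share of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(bool(row[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [
    {"segment": "A", "approved": 1},
    {"segment": "A", "approved": 1},
    {"segment": "A", "approved": 0},
    {"segment": "B", "approved": 1},
    {"segment": "B", "approved": 0},
    {"segment": "B", "approved": 0},
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))
if ratio < 0.8:  # common "four-fifths" screening heuristic
    print("Flag for review: outcomes differ materially across segments.")
```

A check like this does not prove fairness, but it gives a second team a concrete, repeatable way to challenge the builders' results rather than leaving validation to the people who created the models.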

Another company made more rapid progress, in no small part because of early, board-level emphasis on the need for enterprise-wide consistency, risk-appetite alignment, approvals, and transparency with respect to generative AI. This intervention led to the creation of a cross-functional leadership team tasked with thinking through what responsible AI meant for them and what it required. The result was a set of policies designed to address that gap, which included a core set of ethical AI principles; a framework and governance model for responsible AI aligned to the enterprise strategy; ethical foundations for the technical robustness, compliance, and human-centricity of AI; and governance controls and an execution road map for embedding AI into operational processes.

By addressing risk head-on rather than letting it stall them, organizations can maintain momentum and capitalize on the potential of generative AI.

The rapid improvement and growing accessibility of generative AI capabilities have significant implications for digital transformation in organizations. Generative AI’s primary output is digital data, assets, and analytic insights, which are most effective when applied to and used in combination with existing digital tools, tasks, environments, workflows, and datasets. Aligning your generative AI strategy with your overall digital approach can lead to enormous benefits, but it is also easy for experimental efforts to germinate that are disconnected from broader efforts to accelerate digital value creation.

A global consumer packaged goods company recently began crafting a strategy to deploy generative AI in its customer service operations, which led to the realization that similar generative AI models could fill out forms and provide Q&A access to data and insights in a wide range of functions. The resulting gains dwarfed those associated with customer service, and they were possible only because the company had come up for air and connected its digital strategy with its generative AI strategy.

Connecting digital strategies and AI strategies can also help mitigate disconnects between risk and legal functions, which tend to advise caution, and the more innovation-oriented parts of the business. Left unaddressed, those disconnects produce mixed messages and disputes over who has the final say in how to leverage generative AI, which frustrates everyone, erodes cross-functional relations, and slows deployment. Many of these disconnects can be avoided by involving the CHRO, CIO, and CISO in assessing new opportunities against the company's existing data, tech, and cybersecurity policies.

Experimenting with an eye for scaling is critical for companies hoping to reap the full benefits of generative AI. The diversity of potential applications often gives rise to a wide range of pilot efforts, which are important for recognizing potential value but may lead to a “the whole is less than the sum of the parts” phenomenon. Senior leadership engagement is critical for true scaling, as it often requires cross-cutting strategic and organizational perspectives.

Experimentation is valuable with generative AI because it is a highly versatile tool, akin to a digital Swiss Army knife, and can be deployed in various ways to meet multiple needs. Centralized control of generative AI application development may overlook specialized use cases that could confer significant competitive advantage. Engaging individual workers and departments in experimentation and exploration is essential for maximizing the benefits of generative AI in organizations.

The use of generative AI across industries has spawned a wide range of strategies for improving efficiency and productivity. These strategies can, however, lead to a “pilot purgatory” state in which promising glimmers generate more enthusiasm than value. A financial services company nearly fell into this trap in its HR department as it looked for ways to use AI to automate and improve job postings and employee onboarding. The CHRO’s move to involve the CIO and CISO led to policy clarity, a secure and responsible AI approach, and a realization that many HR processes share archetypes, or repeatable patterns, ripe for automation. That was the lightbulb moment: many functions beyond HR and across different businesses could adapt and scale these approaches, which in turn prompted broader dialogue with the CEO and CFO.
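To make the idea of a repeatable archetype concrete, the sketch below shows one such pattern, a templated prompt that drafts a job posting for human review. It assumes the OpenAI Python client (openai>=1.0) purely for illustration; any approved internal or hosted model endpoint could stand in, and the template and field names are hypothetical.

```python
# Minimal sketch of a reusable "archetype" for one HR pattern: drafting a job
# posting from structured inputs. The OpenAI client, model name, and template
# are illustrative assumptions, not the company's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POSTING_TEMPLATE = (
    "Draft a job posting for the role of {title} in the {department} team. "
    "Required skills: {skills}. Keep it under 200 words and use a neutral, "
    "inclusive tone."
)

def draft_job_posting(title: str, department: str, skills: list[str]) -> str:
    """Fill the template and ask the model for a first draft that a recruiter reviews."""
    prompt = POSTING_TEMPLATE.format(
        title=title, department=department, skills=", ".join(skills)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model is approved internally
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_job_posting("Data Analyst", "Finance", ["SQL", "Python", "dashboarding"]))
```

The point of the archetype is that the same template-plus-review loop can be swapped into onboarding checklists, policy FAQs, or other functions with minimal rework.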

As leaders make such moves, they need to take a hard look at themselves: What skills does the organization need to succeed at scale with AI, and to what extent do those capabilities already reside somewhere in the company? What is the plan for filling skills gaps, and on what time frame? Failure to pose questions like these can lead to problems down the road—and they are much better answered in the context of early experiments than in the abstract.

Generative AI’s ability to find relevant information, perform repetitive pattern tasks quickly, and integrate with existing digital workflows means the increased efficiency and productivity it delivers can be almost instant, both within individual departments and organization-wide. Companies can do three things with those productivity gains: reinvest them to boost the quality, volume, or speed with which goods and services are produced; keep output constant and reduce labor input to cut costs; or pursue a combination of the two. In Indonesia, Inet followed the first approach in small-scale pilots that yielded 30% time savings in systems design, 50% efficiency gains in code generation, and an 80% reduction in time spent on internal translations. When generative AI enables workers to avoid time-consuming, repetitive, and often frustrating tasks, it can also boost their job satisfaction.
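To make the reinvest-versus-cut trade-off concrete, the back-of-the-envelope calculation below applies the 30% systems-design time saving cited above to a hypothetical ten-person team; the team size and working hours are illustrative assumptions.

```python
# Back-of-the-envelope sketch of the two basic options for a productivity gain.
# The 30% figure echoes the systems-design pilot above; team size and hours
# are illustrative assumptions.
team_hours_per_week = 10 * 40   # 10 designers at 40 hours each
time_saving = 0.30              # 30% less time per design task

# Option 1: keep the same team and reinvest the freed capacity in more output.
freed_hours = team_hours_per_week * time_saving      # 120 hours/week
output_multiplier = 1 / (1 - time_saving)            # ~1.43x potential throughput

# Option 2: hold output constant and reduce labor input to cut costs.
hours_for_same_output = team_hours_per_week * (1 - time_saving)  # 280 hours/week

print(f"Freed capacity: {freed_hours:.0f} hours/week "
      f"(~{output_multiplier:.2f}x potential throughput)")
print(f"Hours needed for unchanged output: {hours_for_same_output:.0f}")
```

Either path (or a blend) can be defensible; the strategic choice is whether the freed hours become more and better output or a smaller cost base.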

The media industry is one of those most likely to be disrupted by this new technology. Some media organizations have focused on using the productivity gains of generative AI to improve their offerings, using AI tools as an aid to content creators, rather than a replacement for them. Instead of deciding that fewer required person-hours means less need for staff, media organizations can refocus their human knowledge and experience on innovation—perhaps aided by generative AI tools to help identify new ideas.

How an organization addresses the impact of generative AI on its workforce from the start can make or break the success of its initiatives. Many employees are either uncertain about or unaware of these technologies’ potential impact on them, so putting people at the core of a generative AI strategy is essential.
