Navigating a surprising pandemic side effect: AI whiplash

Amid the many business disruptions caused by covid-19, here’s one largely overlooked: artificial intelligence (AI) whiplash.

As the pandemic began to upend the world last year, businesses reached for every tool at their disposal—including AI—to solve challenges and serve customers safely and effectively. In a 2021 KPMG survey of US business executives conducted between January 3 and 16, half the respondents said their organization sped up its use of AI in response to covid-19—including 72% of industrial manufacturers, 57% of technology companies, and 53% of retailers.

Most are happy with the results. Eighty-two percent of those surveyed agree AI has been helpful to their organization during the pandemic, and a majority say it is delivering even more value than anticipated. More broadly, nearly all say wider use of AI would make their organization run more efficiently. In fact, 85% want their organization to accelerate AI adoption.

Still, sentiment isn’t entirely positive. Even as they’re looking to step on the gas, 44% of executives think their industry is moving faster on AI than it should. More startling, 74% contend the use of AI to help businesses remains more hype than reality—up sharply in key industries since our September 2019 AI survey. In both the financial services and retail sectors, for example, 75% of executives now feel AI is overhyped, up from 42% and 64%, respectively.

How to square these seemingly opposed points of view, a tension KPMG is calling AI whiplash? Based on our work helping organizations apply AI, we see several explanations for the perception of hype. One is the simple newness of the technology, which has allowed misperceptions to take hold about what it can and can't do, how long it takes to realize enterprise-scale results, and what mistakes are possible when organizations experiment with AI without the right foundation.

Even though 79% of respondents say AI is at least moderately functional at their organization, only 43% say it is fully functional at scale. It is still common to find people who think of AI as something to be purchased—like a new piece of machinery—to deliver immediate results. And while many organizations have had some success with AI—often small proofs of concept—they have learned that scaling those pilots to the enterprise level can be far more challenging. It requires access to clean and well-organized data; a robust data storage infrastructure; subject matter experts to help create labeled training data; sophisticated computer science skills; and buy-in from the business.

Of course, it also is no stretch to believe proponents of AI may have exaggerated its potential from time to time or discounted the effort required to realize its full value.

As to why executives are conflicted about the speed of AI’s adoption, we see basic human nature at play. For starters, it’s always easier to believe the grass is greener on the other side. We also suspect a lot of people worry their industry is moving too fast primarily because their own organization isn’t matching that speed. If they’ve experienced early-stage hiccups with AI—especially last year, when the world witnessed AI-enabled accomplishments like record-fast development of covid-19 vaccines—it may have been easy to succumb to those fears.

We see another factor driving mixed feelings about AI’s potential—the absence of an established legal and regulatory framework to guide its use. Many business leaders don’t have a clear view into what their organization is doing to govern AI, or what new government regulations might lie ahead. Understandably, they’re worried about the associated risks, including developing use cases today that regulators might squash tomorrow.

This uncertainty helps explain yet another seemingly contradictory finding from our survey. While business executives typically take a skeptical view of government regulation, 87% say government should play a role in regulating AI technology.

Moving on from AI whiplash

While every organization will need its own playbook to recover from AI whiplash and optimize its investment in the technology, a comprehensive plan should include five components:

  • A strategic investment in data. Data is the raw material of AI and the connective tissue of a digital organization. Organizations need clean, machine-digestible data labeled to train AI models, with the help of subject matter experts. They require a data storage infrastructure that transcends functional silos within the business and can deliver data quickly and reliably. Once the models are deployed, a strategy and approach to harvest data is needed to continuously tune and train them.
  • The right talent. Computer scientists with expertise in AI are in high demand and tough to find—but crucial to understanding the AI landscape and guiding strategy. Organizations unable to build a full team of scientists internally will need external partners who can fill in the gaps and help them sort through the ever-expanding array of AI vendors and offerings.
  • A long-term AI strategy guided by the business. Organizations get the most from AI by thinking about finding solutions to problems, not buying technology and searching for ways to use it. They let the business, not the IT department, drive the agenda. When AI investments tied to a business-led strategy go wrong, they become opportunities to fail fast and learn, not fast and burn. But even as companies iterate quickly, they need to do so in line with a long-term AI strategy, because the biggest benefits are realized over the long haul.
  • Culture and employee upskilling. Few AI agendas will gain traction without buy-in from the workforce and a culture invested in AI’s success. Winning the commitment of employees requires providing them with at least a rudimentary understanding of the technology and data, and an even deeper understanding of how it will benefit them and the enterprise. Also important is upskilling the workforce, especially where AI will take over or supplement existing responsibilities. Embracing a data-driven mindset and instilling a deeper AI literacy into an organization’s DNA will help it scale and succeed.
  • A commitment to ethical and unbiased use of AI. AI holds great promise but also the potential for harm if organizations use it in ways customers don’t like or that discriminate against some segments of the population. Every organization should develop an AI ethics policy with clear guidelines on how the technology will be deployed. The policy should mandate measures, built into the DevOps process, to check for issues and imbalances in the data, measure and quantify unintended bias in machine learning algorithms, track the provenance of data, and identify those who train the algorithms. Organizations should also continuously monitor models for bias and drift and ensure that model decisions remain explainable; a simple sketch of such checks follows this list.
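
For readers who want to see what such checks can look like in practice, here is a minimal, illustrative sketch in Python. It is not KPMG’s methodology or a production implementation; the metrics (a prediction-rate gap across groups and a mean-shift drift score), the column names, and the toy data are all assumptions chosen for brevity.

```python
# Illustrative only: two simple checks an AI ethics policy might mandate.
# Column names ("group", "approved") and data are hypothetical.
import numpy as np
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

def feature_drift(train: pd.Series, live: pd.Series) -> float:
    """Crude drift signal: shift in the live mean, in units of the training standard deviation."""
    return float(abs(live.mean() - train.mean()) / (train.std() + 1e-9))

# Toy example: approval predictions for two applicant groups.
scores = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "approved": np.r_[np.random.binomial(1, 0.7, 50), np.random.binomial(1, 0.5, 50)],
})
print(f"parity gap: {demographic_parity_gap(scores, 'group', 'approved'):.2f}")

# Compare a feature's training distribution to (simulated) live data.
train_feature = pd.Series(np.random.normal(0.0, 1.0, 500))
live_feature = pd.Series(np.random.normal(0.3, 1.0, 500))
print(f"drift score: {feature_drift(train_feature, live_feature):.2f}")
```

In practice, teams would run metrics like these against real prediction logs on a schedule and alert when values exceed thresholds agreed with the business and legal stakeholders.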

What’s next

Executives’ objectives for AI investments over the next two years vary by industry. Healthcare executives say their focus will be on telemedicine, robotic tasks, and delivery of patient care. In life sciences, they say they’ll be looking to deploy AI to identify new revenue opportunities, reduce administrative costs, and analyze patient data. And government executives say their focus will be on improving process automation and analytics capabilities, and on managing contracting and other obligations.

Expected outcomes also vary by industry. Retail executives predict the biggest impact in the areas of customer intelligence, inventory management, and customer service chatbots. Industrial manufacturers see it in product design, development, and engineering; maintenance operations; and production activities. And financial services firms are expecting to get better at fraud detection and prevention, risk management, and process automation.

Long-term, KPMG sees AI playing a vital role in reducing fraud, waste and abuse, and in helping businesses sharpen their sales, marketing, and customer service operations. Ultimately, we believe AI will help resolve fundamental human challenges in areas as diverse as disease identification and treatment, agriculture and global hunger, and climate change.

That’s a future worth working toward. We believe government and industry alike have roles to play in making it happen—in working together to formulate rules that foster the ethical evolution of AI without stifling the innovation and momentum already underway.

Read more in the KPMG “Thriving in an AI World” report.

This content was produced by KPMG. It was not written by MIT Technology Review’s editorial staff.
