The Top Five Pitfalls to Consider When Implementing Generative AI Technology

Many government agencies are thinking about how generative artificial intelligence (AI) is going to impact them, and perhaps how they may be able to use it to their advantage.

These generative AI solutions are based on large language models (LLMs) and are capable of creating remarkably sophisticated, human-like responses with very limited prompting. They’ve caught the attention of nearly every public- and private-sector leader.

For leaders exploring how generative AI might improve their operations and outcomes, we have compiled a list of five pitfalls to consider before taking the technological leap.

1. Waiting too long to implement generative AI internally

Despite what may at times feel like excessive hype, generative AI is real. Even if it’s only ten percent as disruptive and transformative as it’s being portrayed, generative AI will change the way we all live and work in significant ways.

Yet some organizations are trying to ban its use. While there are legitimate and serious concerns with generative AI that must be addressed – most notably intellectual property leaks and privacy or regulatory violations – banning AI functionality and platforms may be as harmful as it is impossible. Government employees at all levels – federal, state and local – are already using ChatGPT or one of its equivalents.

2. Thinking of generative AI in terms of “tasks” instead of “outcomes”

In our conversations with clients in the government and private sector alike, we hear that many want to explore how they can use generative AI to perform a task rather than to achieve an outcome. While applying generative AI to individual tasks can yield benefits such as automation, an outcome-based approach will do far more to realize its full potential.

We must define clear use cases and realistically attainable goals up front.

Generative AI is also not an “install it and check the box” project. Organizations using AI must continually reexamine their goals and rethink how generative AI should integrate into the end-to-end ecosystem of technologies and business processes.

Just as important, define clear metrics to help ensure goals are being met. The measures of success are an essential part of the feedback loop designed to continually improve and refine the technology, its usage and the experience people have with it.

3. Thinking that generative AI is a replacement for humans instead of an enabler for humans

Generative AI is a powerful enabler that can improve productivity by handling routine tasks, freeing workers to address more complex or sensitive issues, but it can’t eliminate people from the equation – nor should it. The recent experience some agencies have had with conversational-AI chatbots is relevant here. Those chatbots were touted as a massive productivity booster that would replace many (if not most) customer service representatives.

Many public and private sector organizations have spent millions of dollars chasing this dream, building chatbots using the much-hyped commercially available solutions only to find they didn’t come anywhere close to living up to this promise. Users were often frustrated, disillusioned or worse: misinformed.

AI-powered solutions aren’t a substitute for humans; they create fundamentally different experiences and are therefore best applied to specific use cases. The best applications deploy generative AI as an enabling tool – for example, helping front-line workers document issues, summarize lengthy content or highlight red flags.

4. Focusing on the technology instead of the experience

The key is to always be focused on the experience that generative AI creates, enables or enhances – not how that experience is powered. Once you define the business issue that you believe generative AI can help address, your execution must start with the employee and/or customer experience and work backward. You may find that generative AI isn’t the solution after all. Don’t let the technology dictate the experience.

KPMG has defined six pillars of customer experience excellence that can serve as a guide as you consider this process. Integrity and trust are the foundation of an excellent experience – and that foundation is the most difficult to achieve with a generative AI solution.

5. Ignoring legal, ethical and privacy concerns

Data quality matters, and the classic risk of “garbage in, garbage out” is very real with generative AI. You must have a proper data governance framework in place, or you open yourself up to privacy, discrimination or other regulatory violations – or, at a minimum, to embarrassment and criticism, since laws and regulations have not kept pace with such a fast-moving technology. Efforts are underway to more clearly define the rights employees and customers should have with respect to AI, but most are still at a relatively early stage.


KPMG is an early and enthusiastic advocate for the power of AI. We have broad and deep experience in generative AI technologies, finance functions and processes, and business planning.

With our suite of intelligent tools and services spanning project architecture, operating models, data and signals, and highly skilled data science and business talent, we are well positioned to help businesses and government leverage generative AI to transform operations and outcomes.

We understand both the promise of generative AI and the process and cultural changes, including the embrace of responsible AI practices, that will be required to realize its full potential. Our passion is to create value, inspire trust, and help clients deliver better experiences to workers, citizens, and communities.