CFPB Warns Banks About Chatbots

By James Sanna

Chatbots have become vital tools for banks and credit unions looking to give customers a relatively frictionless way to get answers to simple questions. But according to a new CFPB report, those same tools could put institutions at risk of violating federal consumer protection laws.

“To reduce costs, many financial institutions are integrating artificial intelligence technologies to steer people toward chatbots,” agency director Rohit Chopra said in a statement. “A poorly deployed chatbot can lead to customer frustration, reduced trust, and even violations of the law.”

The CFPB said it has received “numerous” complaints from financial institution customers frustrated by their attempts to use chatbots or to raise concerns or disputes.

Nearly 1 in 4 Americans interacted with a bank chatbot last year, the CFPB estimated, and that figure is likely to grow as banks have chatbots take on simple tasks, like retrieving account balances, looking up recent transactions and paying bills, that more expensive human workers might have handled in years past.

But, the CFPB report says, many chatbots are simple programs that follow what’s known as “decision tree logic” or that look for keywords or emojis that trigger a limited set of responses, like directing customers to a list of frequently asked questions. While some banks have built their own, more advanced chatbots, and so-called “generative AI” programs like ChatGPT hold promise for better customer interactions, the CFPB report warns that these tools may not always be a good fit for the complex nature of customers’ financial needs.
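For illustration only, the keyword-triggered, decision-tree-style logic the report describes can be as simple as the sketch below; the keywords, canned replies and function name are hypothetical and do not represent any institution’s actual chatbot.

```python
# Hypothetical sketch of the simple keyword-matching / "decision tree" logic
# the CFPB report describes; not any bank's actual chatbot implementation.
RESPONSES = {
    "balance": "Your checking balance is $1,234.56.",            # canned reply
    "transactions": "Here are your five most recent transactions...",
    "dispute": "Please see our FAQ page for dispute instructions.",
}

FALLBACK = "Sorry, I didn't understand. Please see our FAQ."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK  # no keyword matched: point the customer at the FAQ

# A customer invoking a federal right (e.g. disputing a charge) still gets
# only a canned FAQ pointer -- the kind of limitation the report flags.
print(reply("I want to dispute a charge on my account"))
```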

The report lays out a number of common risks that, it says, banks should consider when implementing a chatbot on their website:

  • Noncompliance with federal consumer financial protection laws: “Financial institutions run the risk that when chatbots ingest customer communications and provide responses, the information chatbots provide may not be accurate, the technology may fail to recognize that a consumer is invoking their federal rights, or it may fail to protect their privacy and data.”
  • Degrading customer service, and with it customer trust, by failing to connect customers to a human representative when they need one, or by failing to correctly diagnose what a customer is looking for.
  • Actively harming consumers by providing inaccurate information that could lead to bad decisions, including situations where consumers unintentionally incur fees.
