
Bots may send your liability risk soaring

Evan Schuman | Jan. 11, 2017
Judges and juries may think that a company should be better able to eliminate errors in responses with automation

Artificial intelligence bots are all the rage these days, as companies try to figure out the best ways they can be used. But using them to interact directly with customers raises some interesting questions about legal liability.

What happens when a wrong answer causes financial harm to a customer? Does it make a difference whether the answer was delivered by a human call center representative or an automated bot? In most cases, it absolutely does.

Consider a typical financial services company, such as a bank. It uses a bot to field the most commonly asked retirement fund questions, but someone has programmed a wrong answer into the system. Let’s assume that the error causes a customer to miss a key deadline, costing that customer a substantial amount of money in lost opportunity. If the matter goes to litigation and a judge or jury is deciding on an appropriate resolution, will they view it differently than if a human associate had given the same wrong answer?

Let’s say that the human associate is a 22-year-old with just one week on the job. A jury might decide that her error was deserving of some leeway. The same jury might take a completely different view if the error resulted from code that was written, reviewed and approved at multiple levels — including two people in the Legal department — over several months. 

There are parallels between this and laws dealing with host liability. If you host a big party and an intoxicated guest causes some harm, your ultimate liability could depend on whether you had an amateur serving drinks or a professional bartender. On the one hand, the bartender is likely far better at noticing the signs of intoxication and should understand his duty to cut the partygoer off. On the other hand, a jury is likely to hold a professional bartender to a much higher standard than, say, your Uncle Phil.

Just to be clear, your A.I. bot is the professional bartender, and the 22-year-old new hire is your Uncle Phil. With the bot, you have a far better shot at controlling what answers are given to customers’ questions, but if that more easily controlled method does glitch somehow, your liability is likely to be far higher.

Michael Stelly recently retired after years as the lead mobile developer for Transamerica Retirement, which had $202 billion of insurance policies in force as of Dec. 31, 2015. While noting that the year of law school he suffered through was just “enough to be dangerous,” he argues that many companies deploying bots ignore the liability differences those bots can create.

Yes, a bot has to go through many layers of approval before uttering a single word. “It has to be fully vetted by Legal before [becoming available through] Apple and Google. Financial institutions employ a legion of lawyers to eliminate any kind of fiduciary circumstance,” Stelly said. “There are any number of stopgaps where it can be shot down.”

 
