5 Ways to Avoid the AI Nightmares That Could Damage Your Business

Kofax

10 July 2020

“When AI Goes Wrong” is definitely going to end up as a documentary series this decade. Avoiding becoming one of its sad examples will require businesses to learn from the mistakes of early AI adopters and to develop the tools that will help them steer clear of the likely disaster areas.


There have already been a number of chatbot and business AI disasters that have made headlines. But, to date, there’s been no record of an AI killing someone or being held responsible for a death. Still, that scenario is coming, and as in the case of poor Robert Williams, the first person killed by an industrial robot, the repercussions for the firms responsible will be deep and wide-ranging.

Staying out of the firing line of “killer AI” headlines, or the slightly less dramatic “AI bankrupted my business”, requires careful planning to avoid customer distress, damage to business results and other ramifications.

1. Regulated industries must follow rules and best practices

Financial, health and legal services, and those offering similar advice, already fall under a wide range of existing legislation and guidelines. While AI-specific rules for such advice are still in the early stages of being formulated, there are plenty of best practices out there that every business should follow.

The key to avoiding AI issues is ensuring that every partner, developer or distributor of AI is aware of all of that legislation and guidance, and follows it to the letter across local, national and global standards. From black-box AI to privacy and data storage, every box must be ticked.

Things may still go wrong, but being able to show that the rules and guidelines were followed to the letter will limit the damage, both across the industry and internally.

Following other industry best practices will also help stop a problem from becoming a major issue, both within traditional business activities and as AI spreads further into everyday society: monitoring CCTV cameras, identifying people in crowds or simply inferring where people are from various data points.

2. Learn from where AI has gone wrong so far

The early AI disasters have been more about public mirth and moral outrage than physical or financial harm. AIs have demonstrated gender bias, and chatbots have been corrupted into using inappropriate language or expressing hate speech, but the overall trends highlight the areas to watch out for:

  • The risk of bots storing and sharing private or personally identifiable information remains high.
  • Black-box AI results that can’t justify why someone was turned down for credit or aid, or was given a different rate from someone in similar circumstances, will likely be highly restricted, if not banned.
  • Actual or perceived bias in AI hiring tools will attract instant media attention. Any such tools have to be proven to be above reproach and backed by manual safeguards to avoid even the impression of bias (one simple audit is sketched after this list).
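
As a concrete example of the manual safeguards mentioned above, the sketch below applies the widely cited four-fifths rule to a set of hiring decisions. It is a minimal, illustrative check assuming Python with pandas; the column names and figures are hypothetical, and a genuine audit would look at far more than one metric.

  # Minimal adverse-impact check on a hiring model's outcomes (four-fifths rule).
  # "group" and "selected" are hypothetical column names; in practice you would
  # load your own model's decisions rather than this toy data.
  import pandas as pd

  decisions = pd.DataFrame({
      "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
      "selected": [1,   1,   0,   1,   0,   1,   0,   0],
  })

  # Selection rate per group: the share of candidates the model advanced.
  rates = decisions.groupby("group")["selected"].mean()

  # Rule of thumb: flag any group whose selection rate falls below 80%
  # of the best-treated group's rate.
  ratio = rates / rates.max()
  flagged = ratio[ratio < 0.8]

  print(rates)
  if not flagged.empty:
      print("Potential adverse impact - review manually:", list(flagged.index))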

3. Use real training data rather than generic or hypothetical data

Perhaps the closest an AI has come to injuring someone is IBM’s Watson making incorrect assessments on cancer cases (fortunately only during its training phase). The system was trained on hypothetical data rather than real patient information, creating gaps in knowledge that generated bad advice.

Training AIs and chatbots is a key part of any AI development process, and the best-quality data will always be real data, no matter how inconvenient it is to use. Mixing real and synthetic data might give better overall training results, but with the huge volumes some AIs require (Facebook’s new Blender chatbot was built on a model with 9.4 billion parameters), finding out why a mistake happened could prove impossible.

Instead, focus on valid training data for your specific use case, crack market-specific or relatively simple problems, and ignore the hype around massively intelligent AIs.
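
One practical habit that follows from this: whatever mix of data a model is trained on, always score it against a held-out slice of real records before trusting it. The sketch below shows that habit in a generic scikit-learn workflow; make_classification merely stands in for your own real-world dataset, so the names and numbers are illustrative only.

  # Minimal sketch of the "validate against real data" habit: however a model is
  # trained, score it on real records it has never seen before trusting it.
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split
  from sklearn.metrics import accuracy_score

  # Stand-in for real, production-like records (features X, labels y).
  X_real, y_real = make_classification(n_samples=2000, n_features=20, random_state=0)

  # Hold out a slice of the real data purely for evaluation.
  X_train, X_holdout, y_train, y_holdout = train_test_split(
      X_real, y_real, test_size=0.3, random_state=0
  )

  model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

  # The number that matters: performance on real records the model never saw.
  holdout_accuracy = accuracy_score(y_holdout, model.predict(X_holdout))
  print(f"Accuracy on held-out real data: {holdout_accuracy:.2%}")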

4. Look at how AI changes your business

Perhaps the most subtle way AI can damage a company is by changing how the business operates. Some leaders are slaves to metrics, for better or worse, and AI technology can steer a company in many directions as it gathers and analyses data and suggests actions based on it.

When the time comes and AI decisions start overriding human ones, a better argument than “because the AI says so” will be necessary. Everyone at a leadership and management level needs to have confidence in any higher-level AI, and understand how it might shift power to a few people or business units.

Transparency and visibility through training, knowledge sharing, and dashboards will be key across the workforce if everyone is to believe what the AI suggests. Additionally, every decision must be backed up by human insight and analysis.
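
One way to make that backing visible is to log every AI-assisted decision together with its inputs, the suggestion made and the human sign-off, so a dashboard or an auditor can retrace the reasoning later. The sketch below shows one possible record shape in Python; every field name and value is illustrative rather than a prescribed schema.

  # One possible shape for a decision audit record, so an AI suggestion can always
  # be traced back to its inputs and to the person who signed it off.
  from dataclasses import dataclass, field, asdict
  from datetime import datetime, timezone
  import json

  @dataclass
  class DecisionRecord:
      model_name: str
      model_version: str
      inputs: dict        # the data points the model saw
      suggestion: str     # what the AI recommended
      reviewer: str       # the person who accepted or overrode it
      rationale: str      # the human reasoning, in plain language
      timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

  record = DecisionRecord(
      model_name="credit-risk-scorer",   # hypothetical model name
      model_version="2020.07",
      inputs={"income_band": "C", "tenure_months": 18},
      suggestion="decline",
      reviewer="j.smith",
      rationale="Declined per policy; thin credit file, offered a secured product instead.",
  )

  # Serialise for the dashboard or audit log.
  print(json.dumps(asdict(record), indent=2))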

5. Have your PR prepared in advance

Scenario planning is usually a fun game for public relations departments, until reality bites. But when faced with a range of AI disaster scenarios, PR professionals need to understand the underlying issues of AI technology, how the public and media will react, and why their traditional responses (ignore it, blame the technology, blame a third party) will not work.

The PR team needs a thorough briefing from the technology team whenever an AI is rolled out, explaining what the AI does, what could go wrong and any outlier possibilities. Together, they can produce a set of early technical responses in plain language, as well as the human story, allowing them to keep on top of events and manage the wider narrative by providing regular updates on:

  • What we have learned
  • What went wrong and how we made things right
  • How we will avoid any repetition in future

Clarity and openness might not undo all of the damage, but they can minimize the long-term impact.

Kofax

Kofax intelligent automation solutions help organizations transform information-intensive business processes, reduce manual work and errors, minimize cost, and improve customer engagement. We combine RPA, cognitive capture, mobility & engagement, process orchestration, analytics capabilities and professional services in one solution. This makes it easy to implement and scale for dramatic, immediate results that mitigate compliance risk and increase competitiveness, growth and profitability.
