Failure is Good: How ‘Black Box Thinking’ Will Change the Way We Learn About AI
In a brave new world, it’s not just the brave who must be held accountable if we are to stop repeating historical errors. We live in a morass of political cover-ups, data breaches and capitalist control: now, more than ever, is the time for radical transparency. The regulatory mindset must change, so we’re applying Matthew Syed’s theory of ‘Black Box thinking’, and its logic of failure, to artificial intelligence.
Too often, in a social or business hierarchy, we feel unable to challenge enduring practices and behaviours. We abide by rules and regulations we might know to be outdated and inefficient; we might witness dangerous negligence yet feel unable to challenge authority figures. Negative loops perpetuate when people do not investigate errors, especially when they suspect they may have made a mistake themselves. But when insight can prevent future mistakes, why withhold it? The only way to learn from failure is to change our perspective on it: to understand that failure isn’t necessarily bad, and to make room for its positive consequences.
In medicine, there are numerous reasons why a human or system might fail during surgery or patient care. In the past, mistakes have been silenced for fear of recrimination, and vital opportunities to learn were discarded. Last year, the NHS spent £2.6 billion on litigation for medical errors and negligence, funds that could have been far better spent elsewhere. Mistakes needn’t be a waste of valuable resources; studied properly, they are a way of safeguarding them. Speaking up about today’s failures can help us avoid tomorrow’s catastrophic ones. To create a transparent environment in which we can progress from error, we need to move from a blame culture to a learning culture: to study the environment and systems in which mistakes happen, to understand what went wrong, and to share the lessons learned.
In ‘Black Box Thinking: The Surprising Truth About Success (and Why Some People Never Learn from Their Mistakes)’, Matthew Syed calls for a new future of transparency and a change to the mindset of failure. According to Syed, these principles are about “the willingness and tenacity to investigate the lessons that often exist when we fail, but which we rarely exploit. It is about creating systems and cultures that enable organisations to learn from errors, rather than being threatened by them.” By changing your relationship with failure to a positive one, you’ll learn to stop avoiding it.
“All paths to success lead through failure and what you can do to change your perspective on it. Admit your mistakes and build your own Black Box to consistently learn and improve from the feedback failure gives you.”
- Matthew Syed, ‘Black Box Thinking’
The AI Black Box
Let’s apply this ‘logic of failure’ to Artificial Intelligence as an alternative approach to regulation, grounded in transparency and learning, and modelled on ‘Black Box thinking’ and the aviation industry whose flight recorders inspired it.
In contrast to hard-line AI ethicists, who may take a fatalistic view of punishment when things go wrong, a ‘Black Box thinking’ approach allows us to be honest in how we react to and deal with issues: how we work to solve them, and how we translate that to the rest of the industry so that others might learn too.
In any industry, applying intelligent systems to our challenges will likely result in unintended consequences. It’s not always obvious how to identify hazards or ask ourselves the right questions, and there is always the chance that something will go wrong; no business, therefore, should be placed on a pedestal. We need to collect data, spot meaningful patterns, and learn from them, taking into account both the information we can see and the information we can’t. Using ‘deliberate practice’, we can consistently measure our margins of error and readjust them each time; this applies to every part of human learning. How can we progress and innovate if we cannot learn? How can we learn if we can’t admit to our mistakes?
We can respond to those unintended consequences with transparency, accountability and proactivity: by being trusted to do the right thing, by challenging industry standards, and by consistently working to improve them. We must not build an industry on silence for fear of being vilified. Instead of extreme punishment, we should create the space and processes to learn and share knowledge, using root-cause analysis so that issues are not repeated elsewhere. We must gather the best perspectives and opinions, with experts coalescing to challenge and debate industry standards. By doing this, AI will advance more effectively and safely, and society will reap the rewards.
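To make the metaphor concrete, here is a minimal sketch, in Python, of what an ‘AI black box’ might look like: an append-only flight recorder for a model’s decisions, kept so that failures can be replayed and root-caused rather than buried. Everything here, the BlackBoxRecorder class, its fields, and the JSON-lines log format, is an illustrative assumption, not an established tool or anything Syed prescribes.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Any, Optional

@dataclass
class FlightRecord:
    """One black-box entry: enough context to reconstruct a decision later."""
    timestamp: float
    model_version: str
    inputs: Any
    prediction: Any
    expected: Optional[Any]  # ground truth, once it becomes known
    error: bool

class BlackBoxRecorder:
    """An append-only log of model decisions, kept for root-cause analysis."""

    def __init__(self, path: str):
        self.path = path

    def record(self, model_version: str, inputs: Any, prediction: Any,
               expected: Any = None) -> FlightRecord:
        entry = FlightRecord(
            timestamp=time.time(),
            model_version=model_version,
            inputs=inputs,
            prediction=prediction,
            expected=expected,
            error=(expected is not None and prediction != expected),
        )
        # One JSON object per line, appended immediately, so the record
        # survives even if the surrounding system crashes mid-run.
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")
        return entry

# Usage: record every decision, then review the failures, not just the wins.
recorder = BlackBoxRecorder("decisions.jsonl")
recorder.record("credit-model-v3", {"income": 42000},
                prediction="deny", expected="approve")
```

The value isn’t in the code itself but in the habit it encodes: every decision leaves a trace, and every error becomes data to study rather than evidence to hide.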
Artificial Intelligence is not a new industry; it is a new age. To live successfully in this brave new world, we must readjust our thinking and be just that: brave. In good hands, technology and artificial intelligence can turbo-charge the power of learning; we’d get to a better place faster if we held people accountable and resolved issues in public. We must have the courage to face the future with openness and honesty, to not be afraid of failure, and to admit to it for the sake of learning.