These days everyone and their dog is working on an artificial intelligence (AI) solution. Especially their dog. More specifically, the hopes are mostly pinned on machine learning and its ability to uncover patterns in large data sets.
Machine learning methods are often based on neural networks, which can essentially be seen as black boxes that turn input into output. Not being able to access the knowledge inside the machine is a constant headache for developers, and often for users as well. There is a growing need for machines capable of providing an explanation to back their answers.
Neural networks come in many flavors called “models” that can be evaluated and assigned a score, which helps gauge the quality of the answers they are expected to provide. A model can be improved by giving it either more or better data until the score becomes acceptable; normally a point is reached beyond which the results stop improving.
Some other practices give an idea of a model’s “fitness”, so that there is an expectation of how risky its answers could be, or how many mistakes it would normally make. Unfortunately, it’s not always possible to tweak the box to achieve a certain effect, e.g. “I want my model to answer only when it knows it knows”.
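A common partial workaround for “answer only when it knows it knows” is to wrap a model and abstain whenever its confidence falls below a threshold. The sketch below is purely illustrative: `predict_or_abstain` and `toy_model` are hypothetical names, and the stand-in model simply returns class probabilities the way many real classifiers do.

```python
# Minimal sketch of confidence-based abstention (illustrative, not a real model).

def predict_or_abstain(predict_proba, x, threshold=0.9):
    """Return the top class, or None when the model isn't confident enough."""
    probs = predict_proba(x)  # mapping of label -> probability
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else None

# Toy stand-in model: confident about one input, unsure about the other.
def toy_model(x):
    if x == "meow":
        return {"cat": 0.95, "dog": 0.05}
    return {"cat": 0.55, "dog": 0.45}

print(predict_or_abstain(toy_model, "meow"))  # cat
print(predict_or_abstain(toy_model, "woof"))  # None (abstains)
```

Note that this only approximates the goal: the threshold is only as trustworthy as the model’s probability estimates, which is precisely the kind of tweak that black boxes make hard to get right.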
Other machine learning approaches are more “readable” than neural networks, such as decision trees: an initial question is connected to multiple subsequent questions, one per answer, and so on until a final answer is reached, much like a flowchart.
Going from the initial question to the final answer yields a series of (normally) human-interpretable questions that works as an explanation. The issue with decision trees is that they only cover a certain class of problems, and even then they might not be flexible enough to yield a good model; in other cases, the model is good yet too complex, and therefore unreadable.
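The question-by-question walk described above can be sketched in a few lines: each node asks a question, and the path from root to leaf doubles as the explanation. Everything here is a toy illustration, with hypothetical home-valuation questions, not anyone’s production model.

```python
# Minimal sketch of a decision tree whose answer comes with an explanation.

class Node:
    """A question node: routes to a child based on the answer."""
    def __init__(self, question, children):
        self.question = question  # the question asked at this node
        self.children = children  # mapping of answer -> Node or Leaf

class Leaf:
    """A final answer."""
    def __init__(self, answer):
        self.answer = answer

def decide(node, facts):
    """Walk the tree, collecting question/answer pairs as the explanation."""
    explanation = []
    while isinstance(node, Node):
        answer = facts[node.question]
        explanation.append(f"{node.question}? {answer}")
        node = node.children[answer]
    return node.answer, explanation

# A toy valuation tree (hypothetical questions and outcomes).
tree = Node("school nearby", {
    "yes": Node("forest nearby", {
        "yes": Leaf("above market average"),
        "no": Leaf("market average"),
    }),
    "no": Leaf("below market average"),
})

price, why = decide(tree, {"school nearby": "yes", "forest nearby": "no"})
print(price)           # market average
print("; ".join(why))  # school nearby? yes; forest nearby? no
```

The explanation falls out of the structure for free, which is exactly what the black boxes lack; the trade-off, as noted above, is that real trees often grow too large to stay readable.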
The difficulty in dealing with intelligent black boxes is that while some domains tolerate imperfect models (games, weather, step counters), others do not (medical, political, real estate) and as such are left out of bounds for AI. Scenarios with no tolerance for faulty answers normally have to stop at “just give me some insights, please”, but doing so will certainly not advance the field.
The challenge lies in having an AI capable of giving a recommendation, perhaps with a pinch of salt, and certainly with the rationale behind the answer, in a human-readable format. How can we crack the black box open so that explanations become the norm?
By no means do everyday neural networks need to be ditched, but a more open strain of machine intelligence is necessary if we are to move forward. Way too much time is spent refining models “by instruments”, and way too much time is spent creating (even brilliant) solutions that cannot be trusted because their application domain is too sensitive. See-through smart boxes would be hugely beneficial, and moreover they would bring a much-needed element of trust. We don’t trust humans who don’t explain themselves, so why should we go easier on machines? So far, AI solutions are not very good at making themselves trustworthy.
At Kodit.io we are building a residential real estate data platform that makes it easier to buy and/or sell your home. One of our primary concerns is getting home valuations right. This, however, turned out to be a largely psychological juggling act, since in the end a price is whatever someone is willing to pay for the thing (“Toilet seat in the living room? Yay!”). So besides that one number, we are also working on providing context: the rationale behind it. A new school around the corner, a supermarket about to close, a soon-to-disappear forest patch: these are all pieces of information that could easily be missed, even by professionals. It’s humanly difficult to match the exhaustiveness of a machine.
Disclosing the winding road leading to an answer not only makes the machine more trustworthy but also opens up a unique opportunity to ask for feedback. If the user doesn’t like the answer, they won’t like the explanation either; most likely there’s a specific part of it that made the whole thought process go haywire. Giving human users the chance to glimpse into the rationale, and letting them point out its flaws, would be invaluable. Furthermore, it means we get to tell off the machine and make it listen!
As digital services become more and more artificially intelligent, and increasingly pervasive, this new wave of explanation-powered recommendation tools will make us better users. No longer blind to the machinations and even capable of impacting them, users will finally start to embrace the machine as a trustworthy assistant.