Published on 11/08/2022
Last updated on 06/18/2024
The Tacit Knowledge Blog Series 4/6
Blackbox Interpretability and Tacit Knowledge
There is no accepted definition of interpretability or explainability, despite the many methods proposed to explain or interpret how an opaque AI system works. Sometimes the two terms are used interchangeably in the broad, general sense of understandability. Some researchers prefer interpretability; for others, the term holds no agreed meaning. Still others posit that interpretability alone is insufficient to trust black-box methods and that we need explainability. Even the EU makes a circular case for explainable AI, identifying why some form of interpretability in AI systems might be desirable.
The Blackbox Problem
In this blog, I prefer to follow the computer scientist Cynthia Rudin and make a clear distinction between the two terms:
- An interpretable machine learning model is domain-specific and constrained in model form so that it is either helpful to someone or obeys structural knowledge of the domain or physical constraints that come from domain knowledge.
- An explainable machine learning model is a (typically black-box) model whose behavior is explained after the fact, giving some understanding of how the model works.
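To make the distinction concrete, below is a minimal sketch in Python (my own illustration on a scikit-learn toy dataset, not an example from Rudin's paper or this series): a depth-limited decision tree whose rules can be read directly, versus a random forest whose behavior we can only characterize post hoc, for example with permutation importance.

```python
# Minimal sketch contrasting an interpretable model with a black box + post-hoc analysis.
# Illustration only; dataset and hyperparameters are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: the constraint (max_depth=3) keeps the decision logic
# small enough to read and check against domain knowledge.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(interpretable, feature_names=list(X.columns)))

# Black-box model: accurate, but its internal logic is not meant to be read.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Post-hoc "explanation": a separate analysis of the black box's behavior
# (permutation importance), not a view of its internal workings.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

The contrast is the point: the first model's constraint is part of its form, while the second model's "explanation" is a separate analysis bolted on afterwards.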
Hinton’s position
Humans cannot explain their subjective tacit knowledge to other humans. Geoff Hinton famously expressed this concept in an interview: "People can't explain how they work, for most of the things they do" (Wired 2018). It is therefore unrealistic to expect an AI system to provide explanations of its internal logic. How can we expect to find explanations in a machine's internal representation (its tacit knowledge)? By looking at the blueprints? At the internal neural network connections? That is equivalent to asking a neurologist to take fMRI images of a subject who is, for example, watching a cat, and to treat the recording as the subject's inner mental experience: "the facts remain that to see a cat differs sharply from the knowledge of the mechanism of seeing a cat. They are knowledge of quite different things" (Polanyi 1968). Sometimes, providing deep explanations of an AI system's internal algorithmic logic, although technically correct, produces the opposite effect on a less skilled audience, leading to a lack of trust and poor social acceptance of the technology.
The only possible way out of this puzzle is to convert tacit knowledge into explicit knowledge in a form that another human peer can interpret, for example, by asking the right questions. A professional algorithm auditor asks the right questions of an AI system but never looks at the blueprints of the DL model. The auditor's job is to interrogate algorithms to ensure they comply with pre-set standards without inspecting the model's internals. The same approach underlies counterfactual explanations for people who want to understand the decisions made by an AI system without opening the black box: "counterfactual explanations do not attempt to clarify how decisions are made internally. Instead, they provide insight into which external facts could be different to arrive at a desired outcome" (Wachter 2018). It should therefore not be surprising that a DL model can be interpretable without being explainable to any great extent. We should stop asking a machine what we never ask a human; doing so is a cultural bias, a double standard.
"I'm sorry, my responses are limited. You must ask the right question."
– Dr. Lanning's Hologram – I, Robot (2004)
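For illustration only, here is a minimal sketch of such a behavioral audit in Python. The Applicant schema, the toy loan model, and the invariance check are hypothetical assumptions of mine, not taken from any real auditing standard; the point is that the probe never touches the model's internals.

```python
# A minimal sketch, assuming a hypothetical loan-approval model exposed only
# through a predict() function. The auditor probes behavior against a pre-set
# standard (here, invariance to a protected attribute) without ever inspecting
# the model's internals.
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class Applicant:          # hypothetical feature schema, for illustration only
    income: float
    debt: float
    gender: str

def audit_gender_invariance(predict: Callable[[Applicant], bool],
                            applicants: list[Applicant]) -> float:
    """Return the fraction of applicants whose decision flips when only
    the protected attribute is changed (0.0 means the standard is met)."""
    flips = 0
    for a in applicants:
        flipped = replace(a, gender="F" if a.gender == "M" else "M")
        if predict(a) != predict(flipped):
            flips += 1
    return flips / len(applicants)

# Example usage with a deliberately non-compliant toy model:
toy_model = lambda a: a.income - a.debt > 20_000 or a.gender == "M"
sample = [Applicant(50_000, 40_000, "F"), Applicant(30_000, 5_000, "M")]
print(audit_gender_invariance(toy_model, sample))  # > 0.0, so the standard is violated
```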
Wittgenstein’s position
The DL black box problem is a misplaced source of criticism that can be mitigated by considering the interplay of tacit knowledge (non-articulated) and explicit knowledge (articulated). I posit that a DL system is inexplicable in the same sense that a human brain is epistemically inexplicable.
One source of this misconception is identifying human knowledge with internal brain processes and, likewise, conflating machine (tacit) knowledge with (explicit) algorithms. A robot can be explicitly programmed with the engineers' explicit knowledge (the algorithm) to perform a specific task, e.g., riding a bike. Still, the action is performed with the robot's internal knowledge, i.e., its inaccessible tacit knowledge, not with the engineer's explicit knowledge. The robot does not know the algorithm, but it knows how to run it and knows how to ride a bike [3]. Even if the robot could explain how it works, we humans could not understand it. Ludwig Wittgenstein, the great philosopher of mind, alluded to this fact when he remarked, "if a lion could talk, we could not understand him." Wittgenstein's point is that the meaning of words is not carried by the words alone. Lions perceive the world differently; they have different experiences, motivations, and feelings than we do. We may grasp a first level of what a lion says and make some sense of its words, but we will never comprehend (verstehen) the lion as a unique individual with frames of reference we do not share at all. By analogy, there is little that we can share with a machine. Even if the machine in question explained how it works in perfect English or any other human language, we could grasp only that first level, without fundamental understanding. Alternatively, we can do better: supported by sociological research on tacit knowledge, we can employ counterfactual explanations to explain the predictions of individual instances. That is a great way to distill knowledge from data without opening the black box, while gaining trust.
"If a lion could talk, we could not understand him."
– Ludwig Wittgenstein – Phil. Inv. (1953)
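As a rough sketch of the idea (my own illustration; the toy loan rule, the candidate grid, and the brute-force search are assumptions, not the optimization procedure of Wachter et al.), a counterfactual for a single instance can be found purely through the model's prediction function:

```python
# Search, using only the black box's predict() function, for a nearby input
# that would have produced the desired outcome. Toy model and grid are
# illustrative assumptions only.
import numpy as np

def nearest_counterfactual(predict, x, desired, candidates):
    """Among candidate inputs that the black box maps to `desired`,
    return the one closest (L2 distance) to the original instance x."""
    hits = [c for c in candidates if predict(c) == desired]
    if not hits:
        return None
    return min(hits, key=lambda c: np.linalg.norm(np.asarray(c) - np.asarray(x)))

# Hypothetical black box: approves a loan when 0.6*income - 0.4*debt > 10 (in k$).
predict = lambda x: int(0.6 * x[0] - 0.4 * x[1] > 10)

x = (30.0, 40.0)                       # rejected applicant: income 30k, debt 40k
grid = [(i, d) for i in np.arange(20, 61, 1.0) for d in np.arange(0, 41, 1.0)]
cf = nearest_counterfactual(predict, x, desired=1, candidates=grid)
print(cf)  # the smallest (L2) change to (income, debt) that flips the decision
```

The explanation reads like the Wachter quote above: it says nothing about the model's internals, only which external facts would have to differ for the desired outcome.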
What’s next?
In the next blog, we will see how to capture the tacit knowledge of experts.
References
- Polanyi, M. (1966). The Logic of Tacit Inference. Philosophy, 41(155), 1–18. https://doi.org/10.1017/S0031819100066110
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1, 206–215. https://doi.org/10.1038/s42256-019-0048-x
- Heder, M., & Paksi, D. (2012). Autonomous robots and tacit knowledge. Appraisal, 9(2), 8+. Gale Academic OneFile.