Explainability

I’ve been following the Stanford CS 520 seminar on knowledge graphs for the last few months. There have been some great talks about cutting-edge work, but the one that impressed me most was Robert Hoffman’s talk on explainable artificial intelligence, an area I’ve been working in recently. Hoffman makes many sensible points about the need for user-centered design in AI-driven systems. Too often the user’s point of view is treated as an afterthought compared to what interests engineers and researchers (“designer-centered design”).

However, what I most appreciated about Hoffman’s talk was an unrelated statement he made on an introductory slide:

“I advocate for more training in the history of the disciplines and in the philosophy of science.”

I also feel strongly about this. One of my long-term career goals is to integrate more historical awareness into computer science, especially software engineering. I tell anyone who will listen that we need to learn more from the past and spend less time reinventing what we already know. Software engineering is one of the few engineering disciplines that gets away with not having a canon of established principles and practices. Imagine if architecture or chemical engineering reinvented a significant fraction of itself every 5-10 years.

Students of computer science should be educated in its history, and that education should be more than a sidebar in textbooks. There is already a large body of literature about the history of the field, but it is almost exclusively written by historians of science and technology, for consumption by other historians. That work has minimal impact on computer science or the IT industry. I believe it’s going to take insiders from the CS/IT community to change the status quo.