Transparent AI. What’s the Plan?

I listened to Joanna Bryson (a scholar in computer science and ethics) speak at the Fantastic Futures conference (AI4LAM) in December last year on Information Professionals and Intelligent Machines: Can we Save the Librarians?. Bryson draws together the threads of societal change, technology and ethics when she speaks, in a way that I find accessible, compelling and wide-ranging. She also gave another talk last year, Intelligence by Design, Systems Engineering for the Coming Era of AI Regulation, at a Codiax conference. Near the end of that talk she mentions “POSH” and “behaviour trees”, and for some reason those words caught my attention. These aspects of systems engineering (very roughly speaking) appear to be methods for setting system priorities and selecting actions. It took some more digging around to find out what POSH actually stands for: Parallel-rooted, Ordered, Slip-stack Hierarchical, and what a behaviour tree is (a “mathematical model of plan execution used in computer science”). Phew!

Without any real ability to go into detail on any of that, the idea that systems have plans and behaviours that are designed and can be inspected, evaluated and modified was somehow very reassuring. It reminded me a little of the layers of detail in requirements gathering and technical analysis, the dual focus needed when capturing technical and non-technical system requirements, and responding to or evaluating RFPs for technology solutions or developments. Delving a little further, though, it appears that for each system or AI model employed (e.g. a behaviour tree) there may be different inspection methods that can be applied to aid improvement and enable the transparency of artificial intelligence.
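To make the idea concrete, here is a minimal sketch of a behaviour tree in Python. All the names here are illustrative inventions, not taken from Bryson's POSH work or any particular library: a Selector tries its children in priority order, a Sequence requires each step to succeed in turn, and because the whole plan is an explicit data structure it can be printed out and inspected before (or without) ever running it.

```python
from typing import Callable, List

SUCCESS, FAILURE = "success", "failure"


class Node:
    """Base class: every node has a name and can be 'ticked'."""
    def __init__(self, name: str):
        self.name = name

    def tick(self) -> str:
        raise NotImplementedError


class Action(Node):
    """Leaf node: runs a function that returns SUCCESS or FAILURE."""
    def __init__(self, name: str, fn: Callable[[], str]):
        super().__init__(name)
        self.fn = fn

    def tick(self) -> str:
        return self.fn()


class Sequence(Node):
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, name: str, children: List[Node]):
        super().__init__(name)
        self.children = children

    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS


class Selector(Node):
    """Tries children in priority order; succeeds on the first success."""
    def __init__(self, name: str, children: List[Node]):
        super().__init__(name)
        self.children = children

    def tick(self) -> str:
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE


def describe(node: Node, depth: int = 0) -> List[str]:
    """Walk the tree and list its plan, indented by depth —
    the 'inspectable' part, separate from execution."""
    lines = ["  " * depth + f"{type(node).__name__}: {node.name}"]
    for child in getattr(node, "children", []):
        lines += describe(child, depth + 1)
    return lines


# A toy plan: greet a visitor if one is present, otherwise idle.
tree = Selector("root", [
    Sequence("greet", [
        Action("check visitor", lambda: SUCCESS),
        Action("say hello", lambda: SUCCESS),
    ]),
    Action("idle", lambda: SUCCESS),
])
```

Calling `tree.tick()` runs one round of action selection, while `"\n".join(describe(tree))` prints the plan itself for review, which is the property that struck me as reassuring: the behaviour is a designed artefact you can audit, not only an execution trace.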

[Image: Woven runner]

There is a lot of new terminology and computer science knowledge to wade into when examining FATE (Fairness, Accountability, Transparency, Ethics) in artificial intelligence.  Other words are appearing in the literature as well, like explainability, interpretability and understandability.  There is some acronym fatigue ahead for those navigating computer science methodologies and trying to understand, in lay terms, what AI is and does.  With this in mind, two resources surfaced on Twitter feeds last week that are accessible and seem useful to share.

  • A set of definitions for machine learning published in a post from Data Science Central as a data science cheat sheet.
  • A set of assessment questions clustered around the seven requirements that have been set by the European Commission to evaluate the trustworthiness of AI (to aid in its development). 👇         

To ensure that European values are at the heart of creating the right environment of trust for the successful development and use of AI, the Commission highlights the key requirements for trustworthy AI in its communication:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

From: A European Approach to Artificial Intelligence