
As A.I. becomes ever more utilised in everyday decision-making, the concerns become ever more complicated. Long gone are the days when the main worry was why Google returned pictures of dogs when you really wanted cats.
A recent article in the New York Times highlights some of the problems associated with A.I. bias, but the issues invariably distil into three primary concerns:
The What: Do you understand what data is being fed into a model to train it? Is it truly representative and balanced for the question being asked? Does it withstand a causality test, rather than merely exhibiting strong correlation?
The How: Do you understand how the results are related to the data that the model ingests? Can the model be explained? For example: why did you reject someone’s loan application?
The Why: How stable is the model to changes in the data? Put simply, why does your cat-finding model sometimes mistake a dog for a cat? (A brief sketch of such checks follows below.)
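To make these three concerns concrete, here is a minimal, illustrative sketch in Python. It is not Preavisum GALEN, and the dataset, model and noise level are hypothetical placeholders; it simply shows the kind of quick sanity checks the questions above imply: class balance in the training data (the What), a crude look at which features drive decisions (the How), and prediction stability under small input perturbations (the Why).

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data with a deliberate 90/10 class imbalance.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)

# "The What": inspect class balance before training.
counts = Counter(y)
print("class balance:", {label: round(n / len(y), 2) for label, n in counts.items()})
# e.g. {0: 0.9, 1: 0.1} -- a 90/10 split may not represent the real population.

model = RandomForestClassifier(random_state=0).fit(X, y)

# "The How": a crude look at which features most influence the model's decisions.
top = np.argsort(model.feature_importances_)[::-1][:3]
print("most influential features (by index):", top.tolist())

# "The Why": perturb the inputs slightly and measure how often predictions flip.
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(scale=0.05, size=X.shape)
flip_rate = np.mean(model.predict(X) != model.predict(X_noisy))
print(f"predictions changed by small noise: {flip_rate:.1%}")
```

A high flip rate on modest noise, or a heavily skewed class balance, does not prove a model is biased, but it is exactly the kind of signal that should prompt the questions above before the model is trusted with real decisions.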
We developed Preavisum GALEN to help navigate these issues. It shows what data a process consumes and provides provenance for what that process produces. If you are unsure how your data is driving your business, contact us.