New York, May 27 (IANS): A new method developed by two Indian-origin engineers can provide important insights into how exactly a machine-learning algorithm decides to accept or reject your loan application, a decision process that usually remains a mystery.
"Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realise the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms," said Anupam Datta from Carnegie Mellon University.
"Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited," he said.
Datta's team developed a measure called Quantitative Input Influence (QII), which quantifies the degree of influence of each factor a system considers; these measurements could then be used to generate transparency reports.
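To make the idea concrete, here is a minimal sketch of the intervention-style influence measurement that QII is built around: randomize one input feature while holding the others fixed, and see how often the model's decision flips. This is an illustrative simplification, not the researchers' actual implementation, and all dataset and feature names below are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_influence(model, X, feature_idx, n_samples=50, rng=None):
    """Estimate the influence of one input feature on a model's decisions,
    in the spirit of QII: intervene by replacing the feature with random
    draws from its marginal distribution (here, by shuffling the column)
    and measure how often the predicted outcome changes."""
    rng = np.random.default_rng(rng)
    baseline = model.predict(X)
    flip_rate = 0.0
    for _ in range(n_samples):
        X_intervened = X.copy()
        # Shuffling breaks the link between this feature and the outcome
        # while preserving the feature's marginal distribution.
        X_intervened[:, feature_idx] = rng.permutation(X_intervened[:, feature_idx])
        flip_rate += np.mean(model.predict(X_intervened) != baseline)
    return flip_rate / n_samples

# Toy usage on synthetic "loan" data (all names hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # income, debt, age, zip-code proxy
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approval driven by income minus debt
model = RandomForestClassifier(random_state=0).fit(X, y)
for i, name in enumerate(["income", "debt", "age", "zip"]):
    print(name, round(feature_influence(model, X, i, rng=1), 3))
```

In this toy setup, the influence scores for "income" and "debt" should dwarf those of the irrelevant features, which is the kind of signal a transparency report would surface.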
These reports might be generated in response to a particular incident: why an individual's loan application was rejected, why police targeted an individual for scrutiny, or what prompted a particular medical diagnosis or treatment.
Or they might be used proactively by an organisation to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people.
Datta, along with researchers Shayak Sen and Yair Zick, recently presented their paper on QII at the IEEE Symposium on Security and Privacy in San Jose, California.
A distinctive feature of QII reports is that they can explain decisions of a large class of existing machine-learning systems.