Researchers at Carnegie Mellon University have been studying machine-learning algorithms because it is often unclear how these systems reach their decisions. They are examining how algorithms make decisions about “credit, medical diagnoses, personalized recommendations, advertising and job opportunities.”
“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” said Anupam Datta, associate professor of computer science and electrical and computer engineering.
To see machine learning in practice, head to Google Images and search for ‘professional hair’, then observe the results. Now search for ‘unprofessional hair’ and compare. The difference reflects what Google Images’ algorithm has learned from websites that categorized those images as unprofessional in the past. In other words, the results for ‘unprofessional hair’ expose the bias of the internet as a whole, not what Google itself thinks.
Ultimately, Carnegie Mellon wants to understand why machine-learning systems make the decisions they do. To that end, the researchers are measuring what they call Quantitative Input Influence (QII). “Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited. Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports,” Datta said.
The researchers say that the dataset initially used to train a machine-learning system factors into the QII measurements. Even so, the measures should be useful for describing how a wide range of machine-learning systems behave, because those systems share a common set of principles.
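To make the idea concrete, here is a minimal Python sketch of one way such a per-input influence measure could be estimated for a black-box classifier: randomize a single input according to its distribution in the dataset and count how often the decision flips. The toy dataset, the `feature_influence` helper and the hiring rule below are all invented for illustration and are not the researchers’ implementation.

```python
import numpy as np

def feature_influence(model, X, feature, n_samples=1000, seed=0):
    """Estimate how strongly one input influences a black-box model's decisions:
    randomize that input according to its distribution in the dataset and
    count how often the decision flips."""
    rng = np.random.default_rng(seed)
    rows = X[rng.integers(0, len(X), size=n_samples)]
    intervened = rows.copy()
    # Replace only the chosen column with values drawn from its marginal
    # distribution; everything else about each individual stays fixed.
    intervened[:, feature] = rng.choice(X[:, feature], size=n_samples)
    return float(np.mean(model(rows) != model(intervened)))

# Toy "hiring" data: column 0 = gender (0/1), column 1 = lifting ability in kg.
rng = np.random.default_rng(1)
X = np.column_stack([rng.integers(0, 2, 5000), rng.normal(60, 15, 5000)])

# A deliberately simple model that looks only at lifting ability.
model = lambda rows: rows[:, 1] > 70

print("gender:", feature_influence(model, X, feature=0))
print("lifting ability:", feature_influence(model, X, feature=1))
# Gender's influence comes out near zero; lifting ability's does not.
```

Because the sketch only queries the model as a black box, the same measurement could, in principle, be run against very different kinds of systems, which is the sense in which the approach generalizes.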
So what can be done with QII measures? One application is hiring decisions within a company. The researchers use the example of a moving company whose hiring model considers only two inputs: gender and weight-lifting ability. But wouldn’t that just mean the strongest man would be chosen? What about a woman who can actually lift more?
Shayak Sen, a Ph.D. student in computer science at Carnegie Mellon, explains: “That’s why we incorporate ideas for causal measurement in defining QII. Roughly, to measure the influence of gender for a specific individual in the example above, we keep the weight-lifting ability fixed, vary gender and check whether there is a difference in the decision.”
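A rough sketch of that individual-level check, with a hypothetical moving-company classifier standing in for a real one (the `hires` function and the field names are made up for illustration):

```python
def gender_influence_for_individual(model, person, genders=(0, 1)):
    """Hold the individual's other attributes fixed, swap in each possible
    gender, and report whether the decision changes at all."""
    decisions = {model({**person, "gender": g}) for g in genders}
    return len(decisions) > 1  # True only if gender alone can flip the outcome

# Hypothetical moving-company rule, used purely for illustration:
def hires(person):
    return person["lifting_kg"] >= 70

applicant = {"gender": 1, "lifting_kg": 82}
print(gender_influence_for_individual(hires, applicant))  # False: no influence here
```

In this toy case the decision depends only on lifting ability, so varying gender never changes the outcome, which is exactly what a low influence score for gender would indicate.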
The research matters for the future of how computers learn from the algorithms they are programmed with, so that they learn without triggering a robopocalypse or making harmful, incorrect decisions. We need to move away from the racial profiling and Western bias reflected by the majority of internet users, because if machines learn from that data, the future of robot-human relations won’t be a pleasant one.
Source: EurekAlert!