A recent survey found that over 70 percent of Americans are wary of autonomous machines, so it is no surprise that artificial intelligence (AI) research is turning toward transparency. In February, Accenture launched a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate it. Microsoft followed with its own solution in May, and now Google is doing the same.
On September 11, the Mountain View company launched the What-If Tool, a new bias-detecting feature of TensorBoard, the web dashboard for its TensorFlow machine learning framework. Given a trained model and a dataset, users can produce visualizations that reveal where the algorithm may need adjustment.
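For readers who want a sense of what "a trained model and a dataset" means in practice, here is a minimal sketch of launching the tool, assuming the companion `witwidget` notebook package (the notebook counterpart to the TensorBoard plugin). The names `census_df`, `classifier`, and `feature_spec` are placeholders for your own data, trained Estimator, and feature spec, not anything defined in Google's post.

```python
# Hedged sketch: launching the What-If Tool on a trained TF Estimator.
# `census_df`, `classifier`, and `feature_spec` are placeholders.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def df_to_examples(df):
    """Convert a pandas DataFrame into a list of tf.Example protos."""
    examples = []
    for _, row in df.iterrows():
        example = tf.train.Example()
        for col, value in row.items():
            if isinstance(value, str):
                example.features.feature[col].bytes_list.value.append(value.encode('utf-8'))
            else:
                example.features.feature[col].float_list.value.append(float(value))
        examples.append(example)
    return examples

test_examples = df_to_examples(census_df.sample(500))      # the raw data set
config_builder = (WitConfigBuilder(test_examples)           # trained model + data
                  .set_estimator_and_feature_spec(classifier, feature_spec)
                  .set_label_vocab(['<=50K', '>50K']))      # placeholder labels
WitWidget(config_builder, height=800)                       # renders the UI inline
```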
“Probing ‘what if’ scenarios [in AI] often means writing custom, one-off code to analyze a specific model,” James Wexler, a software engineer at Google AI, wrote in a blog post. “Not only is this process inefficient, it makes it hard for non-programmers to participate in the process of shaping and improving ML models.”
Within TensorBoard, users can manually edit examples from their dataset and watch in real time how the model's predictions change. They can also generate plots showing how the model's predictions vary with respect to any single attribute, as sketched below.
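Under the hood, this per-attribute view amounts to varying one feature while holding the rest fixed and re-scoring the example. A rough, framework-agnostic sketch of the idea (not the tool's actual implementation), where `predict_fn` is assumed to be any function that scores a batch of examples:

```python
# Illustrative sketch of a single-feature sweep; `model` and `X_test`
# are placeholders for your own classifier and feature matrix.
import numpy as np

def single_feature_sweep(predict_fn, x, feature_index, values):
    """Vary one feature of a single example and record the model's score."""
    scores = []
    for v in values:
        x_mod = np.array(x, dtype=float)
        x_mod[feature_index] = v
        scores.append(predict_fn(x_mod.reshape(1, -1))[0])
    return list(zip(values, scores))

# Usage (placeholders): sweep feature 2 of the first test example.
# curve = single_feature_sweep(model.predict_proba, X_test[0], 2,
#                              np.linspace(X_test[:, 2].min(), X_test[:, 2].max(), 20))
```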
Counterfactual analysis and fairness evaluation are central to the tool. With a single button click, the What-If Tool compares the model's prediction at one data point with its prediction at the nearest data point that produces a different outcome. Another click shows the effects of applying different classification thresholds, and a third automatically applies the constraints needed to optimize for fairness.
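Conceptually, the counterfactual lookup is a nearest-neighbour search restricted to points the model classifies differently. The following is a simplified sketch of that idea over numeric features, not the tool's exact distance metric; `X` and `preds` stand for a feature matrix and the model's predictions:

```python
# Simplified counterfactual search: closest point with a different prediction.
import numpy as np

def nearest_counterfactual(x, X, preds, current_pred):
    """Return the point in X closest to x (L1 distance) whose prediction differs."""
    mask = preds != current_pred                     # keep only differently-classified points
    candidates = X[mask]
    distances = np.abs(candidates - x).sum(axis=1)   # L1 distance over numeric features
    return candidates[np.argmin(distances)]

# Usage (placeholders): find the nearest counterfactual for the first test example.
# cf = nearest_counterfactual(X_test[0], X_test, model.predict(X_test),
#                             model.predict(X_test[:1])[0])
```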
Wexler said the What-If Tool has already been used internally at Google to identify previously overlooked attributes of input data and to uncover output patterns that led to improved models.
The What-If Tool is open source and has been available to everyone since its release. Alongside it, Google published demos on three pre-trained models to showcase the tool's capabilities.
“One focus … is making it easier for a broad set of people to examine, evaluate, and debug ML systems,” Wexler wrote. “We look forward to people inside and outside of Google using this tool to better understand ML models and to begin assessing fairness.”
One does not have to look far for examples of harmful AI systems.
In July, the American Civil Liberties Union reported that Amazon's facial recognition system, Rekognition, mistakenly matched 28 sitting members of Congress to mugshots of arrested individuals, with the errors falling disproportionately on people of color.
A recent Washington Post investigation also found that popular Amazon and Google smart speakers were about 30 percent less accurate at understanding non-American accents than native ones.