Authored by Max Albert via HackerNoon.com.
The latest congressional technology hearing was as cringeworthy as you would expect.
There were politicians who thought Google was the same company as Apple.
There were politicians who wondered why Google was censoring hate speech.
There were politicians who thought Sundar Pichai’s salary, plus some aggressive alpha-male shouting, would get him to reveal the answer to the age-old mystery of “is Google tracking our every step?”
Confused? So am I.
Through it all, Pichai remained calm and collected. He provided insight to a group of politicians who clearly lacked expertise. That is difficult to do, and I give him credit. For 99% of the hearing, Sundar Pichai was on fire.
But there’s one crucial question that Pichai botched. It was about the ethics of algorithms.
Consider this question from Rep. Zoe Lofgren (D-CA):
“Right now, if you google the word ‘idiot’ under images, a picture of Donald Trump comes up. I just did that,” she said. “How would that happen?”
This is Pichai’s response:
Any time you type in a keyword, as Google we have gone out and crawled and stored copies of billions of [websites’] pages in our index. And we take the keyword and match it against their pages and rank them based on over 200 signals — things like relevance, freshness, popularity, how other people are using it. And based on that, at any given time, we try to rank and find the best search results for that query. And then we evaluate them with external raters, and they evaluate it to objective guidelines. And that’s how we make sure the process is working.
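Stripped of scale, what Pichai describes is a scoring function: every indexed page gets a score from many weighted signals, and results are ranked by that score. Here is a minimal sketch in Python — the signal names, weights, and pages are invented for illustration and are not Google’s actual system:

```python
# Toy ranker: score pages by a weighted sum of signals, then sort.
# Signal names and weights are made up for illustration only.
WEIGHTS = {"relevance": 0.5, "freshness": 0.2, "popularity": 0.3}

def score(page):
    """Weighted sum of a page's signal values."""
    return sum(WEIGHTS[s] * page["signals"][s] for s in WEIGHTS)

def rank(pages):
    """Return pages ordered best-first by score."""
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "a.example", "signals": {"relevance": 0.9, "freshness": 0.2, "popularity": 0.4}},
    {"url": "b.example", "signals": {"relevance": 0.6, "freshness": 0.9, "popularity": 0.9}},
]
print([p["url"] for p in rank(pages)])  # ['b.example', 'a.example']
```

The point of the sketch: even with hundreds of signals instead of three, someone chose the signals and their weights. The ranking is only as neutral as those choices.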
Representative Zoe Lofgren later concludes that she looks forward to working with Pichai on serious issues and that,
It’s pretty obvious that bias against conservative voices is not one of them [Google’s priorities].
Pichai’s response was neither wrong nor nefarious. He did an excellent job of explaining the technical side of how Google handles queries in layman’s terms.
However, the exchange as a whole may mislead the public. It lends itself to a common, dangerous misconception: that sophisticated algorithms are always unbiased.
With this exchange, Rep. Lofgren and Pichai establish a defensive narrative: Google takes hundreds, thousands, even billions of data points into consideration before listing a website at the top; its algorithm is evaluated by ‘external raters’ against an unfathomable number of ‘objective guidelines’; and, last and most important, an algorithm this sophisticated is too sophisticated to be biased.
Of course Pichai knows this narrative is not true. But does Rep. Lofgren know? Do the other members of Congress? Does the public?
The fact remains that algorithms are built by people. People have agendas. When people get to define what counts as success and what counts as failure, there will always be at least some inherent bias.
Just because a solution was discovered by an algorithm doesn’t make the solution unbiased. Algorithms can make biased decisions, and the amount of ‘data’ and number of ‘guidelines’ an algorithm has access to do not make it more credible.
For instance, there are criminal justice risk-assessment algorithms that label African Americans as ‘high risk’ (and thus ineligible for parole) more often than Caucasians. These algorithms have access to a wide array of ‘data’ and ‘objective guidelines,’ yet they still make biased decisions. Why? Because the court system is biased, so all of the data the algorithm learns from is biased.
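The mechanism is worth making concrete. In this hypothetical sketch, a “risk model” is fit to historical decisions whose labels are themselves biased — every group name, number, and rate below is invented for illustration:

```python
# Hypothetical historical parole records: (group, labeled_high_risk).
# The labels encode past bias: group B was labeled high-risk far more often.
history = [("A", False)] * 80 + [("A", True)] * 20 + \
          [("B", False)] * 40 + [("B", True)] * 60

def fit_rate_model(records):
    """'Learn' each group's historical high-risk labeling rate."""
    rates = {}
    for group in {g for g, _ in records}:
        labels = [hr for g, hr in records if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

def predict_high_risk(rates, group, threshold=0.5):
    """Flag a group as high risk if its learned rate crosses the threshold."""
    return rates[group] >= threshold

rates = fit_rate_model(history)
print(rates["A"], rates["B"])          # 0.2 0.6
print(predict_high_risk(rates, "B"))   # True
```

No step here is malicious: the model accurately summarizes its data. The bias arrives pre-installed in the labels, and the model faithfully reproduces it.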
Additionally, there is the infamously biased overbooking algorithm that selected Dr. Dao for removal from a United Airlines flight, an incident captured in a traumatic viral video.
That was another extremely sophisticated algorithm that failed to deliver bias-free judgment. So to suggest that Google’s search algorithm is unbiased simply because it is sophisticated is false. Algorithms can be incredibly prejudiced if their builders are not careful.
The fact of the matter is that Google’s search algorithm comes as close to unbiased as it does only because of meticulous evaluation and constant reevaluation by the team behind it.
To my knowledge, the only way to validate an algorithm’s credibility is to have a third party consistently reevaluate its results. But even then, the term ‘bias’ is subjective, so this evaluation process is more like a short-answer question than a true-or-false question.
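One concrete form such third-party reevaluation can take is a disparity audit: compare the algorithm’s outcome rates across groups and flag any gap beyond a chosen tolerance. A minimal sketch — the metric, tolerance, groups, and numbers are arbitrary choices of mine, which is exactly why ‘bias’ remains a judgment call:

```python
# Toy audit: flag a decision system whose positive-outcome rates
# differ across groups by more than a chosen tolerance.
def audit(decisions, tolerance=0.1):
    """decisions: list of (group, outcome_bool). Returns (rates, flagged)."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [o for g, o in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > tolerance

# Invented audit sample: group A gets the favorable outcome 70% of the
# time, group B only 45% of the time.
decisions = [("A", True)] * 70 + [("A", False)] * 30 + \
            [("B", True)] * 45 + [("B", False)] * 55
rates, flagged = audit(decisions)
print(rates["A"], rates["B"], flagged)  # 0.7 0.45 True
```

Even this tiny auditor embeds choices — which fairness metric to use (here, a demographic-parity-style rate gap) and which tolerance counts as unfair — so the audit itself is a ‘short-answer question,’ not a true-or-false one.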
Pichai’s answer to the question of how searching ‘idiot’ reveals a picture of Donald Trump was technically true but culturally disappointing.
Instead, consider what would’ve happened if Pichai had answered Rep. Lofgren’s question with, “We have policies in place so that humans cannot directly manipulate search results to make Donald Trump appear in a search for ‘idiot.’ We’ve shown through independent parties that Google search does not exhibit political bias, and that this particular query result could happen to a Democratic president under the same conditions. Furthermore, we are always reevaluating how the search engine could improve.”
This answer may not instill the same confidence as Pichai’s original, but it is the more honest and complete answer in the context of bias.
As we move into an era where algorithms have more decision-making power, the general public will need to learn what makes an algorithm credible and what makes an algorithm biased.