In light of #InternationalWomensDay and #WomensHistoryMonth, for today's thoughts, let's once again celebrate an amazing woman of color...in science!
Today I invited IAPWE writer Haley Kynefin (published in the Journal of Neuroscience and an amazing woman in her own right) to share the research of Timnit Gebru, a recent presenter at the NIPS (Neural Information Processing Systems) conference and founder of Black in AI, a group helping to increase diversity in the field.

Artificial Intelligence Research Specialist Timnit Gebru
by Haley Kynefin
Timnit Gebru is best known for her groundbreaking doctoral work at Stanford, analyzing over fifty million Google Street View images to predict demographics and voting habits.
But as a pioneering Black woman at the forefront of her field, she is also an inspiring leader in the fight for technological fairness and diversity. Originally from Ethiopia, she arrived in the US at the age of 16. After her studies at Stanford, she began working with Microsoft as part of its FATE team (Fairness, Accountability, Transparency and Ethics in AI). She is also a founder of the group Black in AI, which works to increase connectivity among, and visibility for, Black researchers in the field of artificial intelligence.

"I went to NIPS (the Conference on Neural Information Processing Systems)," she says in an interview with Technology Review, "and someone was saying there were an estimated 8,500 people. I counted six black people. I was literally panicking [...] that is almost zero percent. I was like 'We have to do something now.' [...] Because it is an emergency."

Discrimination in the field of AI comes in multiple forms. For one thing, as Gebru noticed that year at NIPS, the employment landscape in AI research lacks diversity. "It's important for me to interface with someone that has [a certain type of] domain knowledge," she says of bringing a diversity of backgrounds to research questions, "in order to know about [the] biases [in that domain]."

For another, the training datasets behind machine learning algorithms are usually biased. A well-known ProPublica investigation, for example, analyzed a machine learning algorithm called COMPAS, which aims to predict crime recidivism rates, and found that it was biased against Black defendants. This is an obvious problem.

Gebru's current research aims to uncover some of these algorithmic biases in commercial APIs (application programming interfaces), which provide the routines and protocols through which software components interact. She envisions that such systems could be sold to customers with "datasheets" included, outlining some of the pitfalls inherent in their datasets. This kind of work is called "unbiasing," "debiasing," or "bias mitigation." In her latest paper, written with Joy Buolamwini, she analyzed three commercial gender classification APIs and found that they performed worst on dark-skinned female faces and best on light-skinned male faces.

Critically, though, Gebru recognizes that technical algorithmic bias and field diversity are not separate problems; they are one and the same. "These issues of bias and diversity go hand in hand," she comments in an episode of the Google Cloud Platform Podcast. "Sometimes I get kind of frustrated when we only talk about the technical aspects [...] if you have an all-male panel on AI for Ethics or AI for Social Good or something like this, I have very little faith that this is actually AI for social good [...] it's not just like, [...] creating the next coolest fairness algorithm."

You can find Gebru's research by following Black in AI on Facebook or Twitter, or join the discussion through their Google Group by signing up here.
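To give a rough sense of the kind of audit described above, here is a minimal, hypothetical sketch of a disaggregated error analysis: comparing a classifier's mistakes across intersectional subgroups. This is not code from Gebru and Buolamwini's paper; the data, group labels, and function name are invented purely for illustration.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each (skin tone, gender) subgroup.

    Each record is a dict with keys:
      'skin'      - e.g. 'darker' or 'lighter' (hypothetical labels)
      'gender'    - the ground-truth label
      'predicted' - the classifier's output
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = (r["skin"], r["gender"])
        totals[group] += 1
        if r["predicted"] != r["gender"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Tiny invented sample, only to show the shape of the analysis.
sample = [
    {"skin": "darker",  "gender": "female", "predicted": "male"},
    {"skin": "darker",  "gender": "female", "predicted": "female"},
    {"skin": "darker",  "gender": "male",   "predicted": "male"},
    {"skin": "lighter", "gender": "female", "predicted": "female"},
    {"skin": "lighter", "gender": "male",   "predicted": "male"},
    {"skin": "lighter", "gender": "male",   "predicted": "male"},
]

for group, rate in sorted(error_rates_by_group(sample).items()):
    print(f"{group}: {rate:.0%} error rate")
```

Reporting results broken out by subgroup in this way, rather than as a single overall accuracy number, is what reveals the gaps the piece describes.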
THOUGHTWARDS
Thoughtwards is a blog celebrating forward thoughts and the diverse thinkers who think them.
M. Lachi is an award-winning recording and performing artist and composer, a published author, and a proponent of forward thinking. Having studied Management at UNC and Music at NYU, M. Lachi employs both savvies in her creative endeavors. For more on M. Lachi's music, click here.