A resume-grading algorithm created by Amazon, trained largely on men’s resumes, taught itself to prefer male candidates and to penalize resumes that included the word “female.”
A major hospital’s algorithm, when asked to assign risk scores to patients, gave white patients scores similar to those of Black patients who were significantly sicker.
“If a movie recommendation is flawed, that’s not the end of the world. But if you’re on the receiving end of a decision [that] is being made by AI, that can be disastrous,” said Huma Abidi, senior director of AI software products and engineering at Intel, during a session on AI bias and diversity at VentureBeat’s Transform 2021 virtual conference. Abidi was joined by Yakaira Nuñez, senior director of research and insights at Salesforce, and Fahmida Y. Rashid, executive editor of VentureBeat.
Changing the human variable
To produce fair algorithms, the data used to train AI must be free from bias. For each dataset, you need to ask where the data came from, whether it is inclusive, whether it has been kept up to date, and so on. You should also use model cards, checklists, and risk management strategies at every step of the development process.
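One concrete way to put that advice into practice is a small audit step run before training. The sketch below is a minimal, hypothetical example (the field names, records, and 30% threshold are illustrative assumptions, not anything prescribed by the speakers): it measures how much of a dataset each demographic group accounts for and flags groups that fall below a chosen share.

```python
# Minimal sketch of a pre-training dataset audit (hypothetical schema and threshold):
# flag demographic groups that make up too small a share of the training data.
from collections import Counter

def representation_report(records, group_key, min_share=0.3):
    """Return each group's share of the dataset and whether it falls below min_share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "share": round(share, 2),
            "underrepresented": share < min_share,
        }
    return report

# Illustrative toy data: one resume labeled "female" out of five.
records = [
    {"gender": "female", "outcome": 1},
    {"gender": "male", "outcome": 1},
    {"gender": "male", "outcome": 0},
    {"gender": "male", "outcome": 1},
    {"gender": "male", "outcome": 0},
]

print(representation_report(records, "gender"))
```

A check like this would not have fixed Amazon’s resume model on its own, but it surfaces the skew (here, 20% female vs. 80% male) early enough to rebalance or rethink the data, which is the kind of "where did the data come from, is it inclusive" question the panelists recommend asking.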
“The best possible framework is that we were able to manage that risk from the beginning: we had all the actors in place so we could make sure the process was inclusive, bringing the right people into the room at the right time who were representative of the level of diversity that we wanted to see, and the context. So risk management strategies are my favorites. I think … for us to really mitigate bias, it will be about risk mitigation and risk management,” Nuñez said.
Make sure diversity is more than just a buzzword and that your leadership teams and speaker panels reflect the people you want to attract to your company, Nuñez said.
Bias causes harm
When you think about diversity, equity, and inclusion work, or bias and racism, the biggest impact tends to be in the areas where people are most at risk, Nuñez said. Health care, finance, legal situations, anything that involves the police, and child welfare are the sectors where bias causes “the most damage” when it appears. So when people are building AI initiatives in these spaces to increase productivity or efficiency, it is even more critical that they deliberately think about bias and the potential for harm. Each person is responsible and accountable for handling that bias.
Nuñez spoke about how the responsibility of a research and insights leader is to curate data so that executives can make informed decisions about product direction. Nuñez thinks not only about the people who gather the data, but also about the people who may not be in the target market, to surface insights about people Salesforce would not have known about otherwise.
Nuñez regularly asks the team to think about bias and whether it is present in the data, such as asking whether a project’s panel of people is diverse. If the feedback does not come from an environment that is representative of the target ecosystem, then that feedback is less helpful.
Those questions “are the little things I can do on a day-to-day basis to try to move the needle a little bit at Salesforce,” Nuñez said.
Changes at the company level
Research has shown that minorities often have to “whiten” their resumes to get callbacks and interviews. Businesses and organizations can weave diversity and inclusion into their stated values to address this issue.
“If it’s not already part of your core mission statement, it’s really important to add those things … diversity, inclusion, fairness. Just doing that alone will help a lot,” Abidi said.
It’s important to integrate these values into corporate culture because of the interdisciplinary nature of AI: “It’s not just about engineers; we work with ethicists, we have lawyers, we have legislators. And we all come together to solve this problem,” Abidi said.
Additionally, company commitments to help correct gender and minority imbalances give recruiting teams an end goal: Intel wants women in 40% of technical roles by 2030, and Salesforce aims for 50% of its U.S. workforce to be made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.