Amidst the conversation about what our future workplaces will look like and whether robots will take all our jobs, relatively little attention has been paid to one critical point: artificial intelligence is no one-size-fits-all solution for bias and discrimination in the workplace.
Unless we actively work to de-bias our tech tools, humans will program our biases right into the operating systems that power the next generation of our economy. It is well documented that women, and especially women of color, face overt discrimination and unconscious bias in the workplace. In a 2012 experiment, for example, scientists reviewing identical resumes—one bearing the name John, the other Jennifer—offered “John,” the male applicant for a lab manager position, a salary nearly $4,000 higher than they offered “Jennifer.” And in an experiment run by the Harvard Kennedy School, researchers concluded that when an organization faces a failure or setback, Black women are more likely than either Black men or White women to be unfairly judged or harshly criticized for it. These are just two of the studies documenting the discrimination that women of all races experience at work.
As machine learning increasingly guides hiring and other major employment decisions, it is essential that advocates and activists remain vigilant to the danger of importing these and other human biases into our computerized systems.
This month, Amazon reminded us of that potential for bias when it abandoned its experimental artificial intelligence recruiting tool after finding that it discriminated against women applying for software developer jobs and other technical posts. Amazon began testing the program in 2014, reportedly to “mechanize the search for top talent.” The tool was trained on ten years of submitted resumes, learning patterns from past applicants and scoring new ones from one to five stars. But because the majority of those past applicants were male, Amazon’s system taught itself that male candidates were preferable. It “penalized resumes that included the word ‘women’s,’” and it downgraded resumes from graduates of two all-women’s colleges. Although Amazon tried to course-correct, its engineers could not guarantee that the bias would stop, so, thankfully, the company scrapped the program.
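Amazon’s tool is proprietary, but the failure mode it exhibited is easy to reproduce. The sketch below uses entirely hypothetical data and a deliberately simplistic word-scoring model—nothing here reflects Amazon’s actual system—to show how a model trained on skewed historical hiring decisions ends up assigning a negative weight to a token like “women’s”:

```python
from collections import Counter

# Hypothetical "historical" hiring data: in this skewed sample, resumes
# mentioning "women's" were mostly rejected, mirroring past human bias.
history = [
    ("python java leadership", 1),           # 1 = hired
    ("java c++ chess club captain", 1),
    ("python women's chess club captain", 0),  # 0 = rejected
    ("java women's coding society", 0),
    ("python c++ leadership", 1),
]

def train(history):
    """Score each token by how often it appears in hired vs. rejected resumes."""
    hired, rejected = Counter(), Counter()
    for resume, label in history:
        (hired if label else rejected).update(resume.split())
    # Positive weight: token associated with past hires; negative: rejections.
    return {t: hired[t] - rejected[t] for t in set(hired) | set(rejected)}

def score(resume, weights):
    """Rate a new resume by summing the learned token weights."""
    return sum(weights.get(t, 0) for t in resume.split())

weights = train(history)
# The model "learns" to penalize "women's" purely because the historical
# decisions it was trained on were biased.
assert weights["women's"] < 0
```

The point of the sketch is that no one wrote a rule penalizing women: the bias enters entirely through the training data, which is why simply deleting an offending rule cannot fix such a system.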
In another example of how tech can replicate bias, the ACLU, Outten & Golden LLP, and the Communications Workers of America (CWA) recently filed an Equal Employment Opportunity Commission (“EEOC”) complaint against Facebook and 10 other employers for “unlawfully discriminating on the basis of gender by targeting their job ads on Facebook to male Facebook users only, excluding all women and non-binary users from receiving the ads.” The charges were filed on behalf of three female workers, CWA and the hundreds of thousands of female workers it represents, and a class of millions of women allegedly denied information about job opportunities because of their gender. They allege that Facebook’s targeted ad platform promoted the named employers’ job advertisements for roles in male-dominated fields, including mechanics and roofing, to male Facebook users only—conduct that is illegal under federal, state, and local civil rights laws, including Title VII of the Civil Rights Act of 1964. Women still make up only about 3% of workers in construction and trades jobs. Rather than helping to raise that percentage through widespread recruiting, Facebook’s tools made it possible for employers to continue to discriminate.
Although the days of separate male and female help-wanted ads in the newspaper are long gone, the tech-enhanced tools used by companies like Facebook and Amazon threaten to resurrect illegal and outdated recruitment and hiring practices. The trend of using technology to supplement the work of human resources professionals is growing: 55 percent of U.S. human resources managers said they will use artificial intelligence regularly in their recruiting and hiring efforts within the next five years. And a number of companies, including Amazon, Goldman Sachs, and Target, have used targeted Facebook ad buys to reach core audiences of potential employees when recruiting. It is therefore absolutely critical that programmers actively work to de-bias their products or, even better, affirmatively build tech products that promote gender parity in the workplace.
For instance, LinkedIn recently released new software that aims to help companies improve gender diversity in their recruiting and hiring. The update changes how LinkedIn displays candidate search results for recruiters – each page of results is now composed to reflect the gender breakdown of the talent pool in that industry. It also lets companies see the gender representation of their overall workforce so they can set goals for gender diversity. And LinkedIn has committed to helping companies make the language of their recruitment messages more inclusive. By highlighting gender imbalances, these changes are meant to help employers and industries course-correct. While we don’t yet know the outcome, this is a promising attempt to tackle entrenched gender bias and discrimination in the workplace.
Gender bias is just one of many biases that play out in recruitment, hiring, and workplace culture and must be recognized, confronted, and eradicated. Many workers face intersecting forms of bias, and resulting discrimination, based not just on their gender but also on their race, disability status, age, sexual orientation – the list goes on. LinkedIn has acknowledged that “gender is just the beginning” and wants to use its software to improve racial and disability diversity within organizations. We hope it does not stop there, and that other companies similarly recognize the need to use their tech tools to increase access to jobs in workplaces free of discrimination for workers who hold many different identities.
In our work at NWLC, we know that irrespective of whether artificial intelligence is intentionally promoting gender diversity in the workplace or unintentionally fostering gender discrimination, there will still be discrimination at work if we do not work to change cultural attitudes while at the same time implementing and enforcing strong legal protections. Quite simply, when it comes to eliminating bias, there is no holy grail. We need to put in the hard work to change laws, policies and attitudes – and ensure those changes make their way into every line of code. The future of work – and the future of workers – depends on it.