How to Fight Back Against Racist Robots
Welcome to the future! We may not have flying cars, but you can book a helicopter ride through Uber, and many people have small robots that vacuum their living rooms. However, one thing The Jetsons did not quite prepare us for is the rise of racist robots.
A recent news story by India's WION (World Is One News) highlighted a United States study led by researchers at Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington. The study found that robots programmed with commonly used artificial intelligence algorithms are prone to racist and sexist biases. Andrew Hundt, study author and postdoctoral fellow at the Georgia Institute of Technology, stated, “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”
In 2021, the Federal Trade Commission (FTC) issued guidelines to help businesses reduce the use of biased algorithms. The FTC urges companies to test their algorithms for bias, ensure datasets include a broad and diverse range of population data, and increase the cultural and racial diversity of their staff and technology developers.
The FTC also states that businesses that use algorithms are required to inform users about what type of data is being collected, how it’s being used, and how the algorithm makes decisions based on the information provided. Several companies, including Facebook, have already received fines for data privacy violations.
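What does “testing an algorithm for bias” actually look like? As a rough illustration (the data, group labels, and helper names below are hypothetical, not taken from the FTC guidance), here is one widely used check, the “four-fifths rule” from U.S. employment law: compare how often each group is selected, and flag the system if the lowest group's rate falls below 80% of the highest:

```python
# A minimal sketch of one common bias audit: the "four-fifths rule".
# All data and function names below are hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group's rate divided by the highest; below 0.8 flags possible bias."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-screen results for two demographic groups:
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(round(disparate_impact_ratio(rates), 2))  # 0.33 -- well under 0.8
```

A real audit would go much further (larger samples, statistical significance, intersectional groups), but even this simple ratio makes a disparity visible instead of hiding it inside the model.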
Popular YouTube gamer CoryxKenshin lamented to his 14.3 million subscribers about his own experiences navigating algorithmic bias on the platform. On August 24, 2022, Cory uploaded a video titled “YouTube: Racism and Favoritism”. His complaint stems from an age restriction placed on one of his gaming videos. In contrast, a white creator uploaded a video that featured the same video game scene and similar content but was not flagged for an age restriction. Cory details a multi-step process that required reaching out to YouTube’s corporate offices to remove the biased content restriction.
Cory is not alone in his concerns, as other content creators have complained of colorism and ableism in the algorithms used by social media and hiring platforms. Algorithms that target common phrases from cultural lexicons can leave a disproportionate number of Creators of Color and LGBTQ-identifying creators facing shadowbans, demonetization, and outright bans.
So, what can we do about it? Well, here are a few things:
Learn Your Rights
In 2021, the U.S. Equal Employment Opportunity Commission (EEOC) launched the Artificial Intelligence and Algorithmic Fairness Initiative, an agency-wide program to ensure that software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions, complies with the federal civil rights laws that the EEOC enforces. Visit the EEOC website to learn more about the rights of applicants, employees, and employers. The EEOC offers these critical tips for workers facing disability discrimination:
If your disability makes it hard or impossible for you to take the computerized test for a job, here are some things you can do:
- Reach out to the employer’s human resources department. Explain that you are trying to take the test. Explain why the format is hard for you to use.
- You may have to describe your disability. The employer may ask for proof or additional information. Learn what the employer can ask and how your privacy is protected at The ADA: Your Employment Rights as an Individual With a Disability.
- Ask to be evaluated in a way that shows your ability to do the job. You can use the legal words and ask for a “reasonable accommodation,” but you do not have to.
- If the employer says no:
  - You can tell the employer about the EEOC’s Q&A on the ADA and Algorithms.
  - You can reach out to the EEOC. The EEOC can help you decide on next steps.
Join an Advocacy Organization
Groups like the Algorithmic Justice League (AJL) are fighting back against algorithmic discrimination by leading cutting-edge research and sounding the alarm by testifying on Capitol Hill. You can join the research by confidentially sharing your experiences with algorithmic bias at the link listed below:
https://www.ajl.org/connect/expose-bias-in-ai
Push for Policy Change
The U.S. Congress is currently debating the American Data Privacy and Protection Act. If passed, the bipartisan legislation would create the first comprehensive federal data-protection requirements. Mackenzie Holland, a news writer at TechTarget, notes that the bill would impose stronger regulations on algorithms and require businesses to know how third-party companies collect and use shared data.
Civil rights organizations, including the Lawyers’ Committee for Civil Rights Under Law (LCCR), support the passage of the bill. In August, LCCR and a coalition of organizations sent a letter to U.S. Speaker of the House Nancy Pelosi (D-California) urging a House vote on the legislation. The coalition argued that although the bill was a compromise, it is “comprehensive federal privacy and civil rights legislation that will, for the first time, create real and lasting protections for the personal data of hundreds of millions of consumers in America. It will also significantly expand equal opportunity online through strong anti-discrimination provisions, algorithmic bias assessments, and heightened protections for data that reveal sensitive information about a person.” The letter’s signatories included a diverse range of civil rights, union, privacy, and consumer advocacy organizations, such as the Center for Digital Democracy, the Communications Workers of America, and the Autistic Self Advocacy Network.
Nevertheless, some organizations argue the bill does not go far enough and may weaken existing state privacy protections. The Electronic Frontier Foundation (EFF) noted that the “preemption” clause in the bill “doesn’t only steamroll state data privacy statutes, such as California’s Consumer Privacy Rights Act. It also apparently rolls back protections in a number of other areas, even rights to privacy that states have seen fit to enshrine in their state constitutions.” Advocates are urging that the preemption clause be removed to protect existing state privacy laws. The EFF also argues that recent amendments to the bill would classify private data companies as government service providers, increasing the likelihood of privacy violations and data breaches.
Increase Your Digital Self-Defense
Back to the racist robots. Season 4, episode 5 of the Netflix anthology series Black Mirror offered a dystopian look at a world ravaged by robotic dogs. The episode, titled “Metalhead”, featured machines modeled on real robots designed by Boston Dynamics, an American engineering and robotics design company.
In response to the sheer horror of the episode, OneZero, a publication on Medium.com, released a tutorial on how to disable the robotic bloodhound.
Twitter user @lenkusov offered these handy visuals:
Digital self-defense also means adding additional protections to your smart technology. CBS This Morning offered these helpful tips:
Multiple studies have found that algorithmic bias is a symptom of human biases and prejudices. These biases can lead to increased scrutiny and digital restrictions for organizations and advocates from marginalized backgrounds. The Movement for Black Lives offers an informative guide for organizers on how to protect movements from online infiltration and disinformation.
Okay y’all, it’s time to fight for our digital civil rights! Checktheweather.org will continue to follow the growing fight against Bot Supremacy, but for the latest global news on this issue, visit Stopkillerrobots.org (yes, it’s a real campaign) and any of the links listed above.