On October 18th, Open MIC joined 22 organizations and individuals in supporting comments filed by the Center for Democracy & Technology (CDT) opposing an attempt by the U.S. Department of Housing and Urban Development (HUD) to weaken protections against housing discrimination.
For decades, HUD and federal courts have recognized disparate-impact liability under the Fair Housing Act (FHA). This means that people are protected not only from intentional housing discrimination, but also from practices that appear neutral yet still result in discriminatory effects, such as:
Reverse redlining practices that result in people of color paying disproportionately higher interest rates on mortgage loans;
Policies that effectively keep people with children out of a neighborhood by limiting the number of people who can live in a single home; and
Buildings that only allow tenants with full-time jobs, disproportionately excluding people with disabilities.
HUD’s proposed rule would seriously weaken these protections by making it much harder for people to have their disparate-impact cases heard in court.
“HUD’s proposed rule would make it practically impossible to challenge housing discrimination that results from the use of an algorithmic model. We need more enforcement against housing discrimination in the digital age, not less,” said CDT Policy Analyst Natasha Duarte.
HUD’s proposed rule explicitly creates several defenses that actors relying on discriminatory algorithms could use to avoid liability:
That none of the model’s inputs are close proxies for a protected class (the sketch after this list illustrates why this check proves little);
That a third party, not the defendant, created the model; or
That a neutral third party determined that the model is empirically derived and statistically sound.
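To see why the first of these defenses proves little, consider a minimal, hypothetical sketch, not drawn from the comments; the feature names, coefficients, and thresholds are invented for illustration. Each input correlates only weakly with a protected class, so none would plausibly count as a "close proxy," yet a model that simply combines them still produces sharply different outcomes across groups:

```python
# Hypothetical illustration: no single input is a close proxy for the
# protected class, but the inputs jointly encode it. All feature names,
# coefficients, and thresholds are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Protected class membership (1 = member); the model never sees this.
group = rng.integers(0, 2, size=n)

# Three facially neutral inputs, each only weakly correlated with the group
# (think: a census-tract statistic, distance to a transit hub, tenure at address).
x1 = 0.5 * group + rng.normal(0, 1, size=n)
x2 = 0.5 * group + rng.normal(0, 1, size=n)
x3 = 0.5 * group + rng.normal(0, 1, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    print(f"corr({name}, protected class) = {corr(x, group):.2f}")  # roughly 0.2-0.25 each

# A model that simply sums the inputs never uses the protected class,
# yet its score tracks it more closely than any single input does.
score = x1 + x2 + x3
print(f"corr(model score, protected class) = {corr(score, group):.2f}")

# If housing decisions cut on this score, outcomes diverge by group.
approved = score > np.median(score)
print(f"approval rate, protected group: {approved[group == 1].mean():.0%}")
print(f"approval rate, everyone else:   {approved[group == 0].mean():.0%}")
```

In this toy setup, each input's correlation with the protected class is modest, but the two groups' approval rates still differ by a wide margin; that is the gap the "close proxy" defense would leave open.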
“The changes that HUD is proposing to the standard for proving discrimination in housing are unprecedented and have no basis in computer or data science,” added Duarte. “They would do nothing to prove that a model is nondiscriminatory, and would undermine the ability of HUD and the courts to enforce against housing discrimination.”
The authors of the comments possess deep expertise in computer and data science, as well as civil rights. As the comments explain:
Algorithms are being used to make decisions that impact the availability and cost of housing. These decisions include screening rental applicants, underwriting mortgages, determining the cost of insurance, and targeting online housing offers. These models are seldom designed to take protected characteristics into account, yet they still have the capacity for protected-class discrimination. The datasets and correlations on which they rely can reflect societal bias in non-obvious ways that models may reproduce or reinforce. Models may also create or mask discrimination and bias without regard to societal bias in data. This is precisely the type of discrimination that disparate-impact liability is supposed to address.
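As a concrete, and purely hypothetical, sketch of that mechanism, assuming invented feature names and synthetic data rather than anything from the comments: a simple model trained on historically biased decisions, using only facially neutral inputs, reproduces the historical disparity because one input, a neighborhood measure, happens to correlate with the protected class.

```python
# Hypothetical illustration of bias in training data flowing through a
# facially neutral input. All names and parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 30_000

# Protected class membership; never provided to the model.
group = rng.integers(0, 2, size=n)

# A neighborhood measure that reflects residential segregation (correlated
# with group) and an ability-to-pay measure that does not depend on group.
neighborhood = 1.2 * group + rng.normal(0, 1, size=n)
ability = rng.normal(0, 1, size=n)

# Historical approvals embodied societal bias: members of the protected
# group were effectively held to a stricter standard.
hist_approved = (ability - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# Fit a plain logistic regression on the two facially neutral inputs only.
X = np.column_stack([np.ones(n), neighborhood, ability])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hist_approved) / n  # gradient step on the log loss

# Use the trained model to make new decisions.
approve = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
rate_protected = approve[group == 1].mean()
rate_other = approve[group == 0].mean()
print(f"model approval rate, protected group: {rate_protected:.0%}")
print(f"model approval rate, everyone else:   {rate_other:.0%}")
print(f"ratio of the two rates: {rate_protected / rate_other:.2f}")
```

The protected characteristic never enters the model, yet the approval rates diverge because the neighborhood input carries the historical bias forward; that gap between groups is exactly what a disparate-impact analysis is designed to surface.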
“This proposed rulemaking shows a grave misunderstanding of how algorithmic discrimination works,” Duarte said. “HUD should not depart from decades of disparate-impact precedent and should focus on enforcing its existing rules to protect people from housing discrimination.”
CDT’s full comments are available here. You can also read more about CDT’s work on algorithmic fairness here.