Kelley said that Twitter had tested for bias before shipping the current algorithm but “didn’t find evidence” of it at the time. She added that Twitter would open source its work so that others could “review and replicate” the findings.
There’s no guarantee that Twitter can fix the problem. The experiment does, however, illustrate the very real dangers of algorithmic bias, regardless of intent: it can push people out of the limelight even when they’re central to a social media post or a linked news article. It may be a long while before issues like this are exceptionally rare.
thanks to everyone who raised this. we tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we’ll open source our work so others can review and replicate. https://t.co/E6sZV3xboH
— liz kelley (@lizkelley) September 20, 2020