Twitter has announced it is retiring the algorithm that automatically crops images after finding evidence of bias in how it works.
Twitter began hearing feedback in October 2020 that the algorithm was not treating everyone equitably. The company investigated and confirmed there were issues.
Testing showed an 8% difference from demographic parity in favor of women over men. Likewise, there was a 4% difference in favor of white individuals over Black individuals, a 7% difference in favor of white women over Black women, and a 2% difference in favor of white men over Black men.
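Demographic parity here means the algorithm should favor images from each group at equal rates. As a rough illustration only (not Twitter's actual methodology, and with hypothetical counts), the disparity can be expressed as how far one group's "favored" rate deviates from the 50% expected under parity:

```python
# Illustrative sketch (not Twitter's code): demographic parity difference
# measured from pairwise crop comparisons, where each trial records which
# group's image the algorithm chose to keep in the crop.

def parity_difference(favored_a: int, favored_b: int) -> float:
    """Signed deviation of group A's favored rate from the 50%
    expected under demographic parity."""
    total = favored_a + favored_b
    return favored_a / total - 0.5

# Hypothetical counts: group A favored in 540 of 1000 trials.
print(parity_difference(540, 460))  # roughly 0.04, i.e. a 4% difference
```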
One area where the algorithm did not appear biased was in the realm of the “male gaze.” As the company explained:

“We also tested for the ‘male gaze’ by randomly selecting 100 male- and female-presenting images that had more than one area in the image identified by the algorithm as salient and observing how our model chose to crop the image. We didn’t find evidence of objectification bias — in other words, our algorithm did not crop images of men or women on areas other than their faces at a significant rate.”
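The test described above amounts to checking, for each image with multiple salient regions, whether the crop the model picks lands on the face. A minimal sketch of that check, with hypothetical data and helper names (this is not Twitter's implementation):

```python
# Hypothetical sketch of the objectification-bias check: for each image,
# test whether the model's chosen salient point falls inside the face box,
# then report the rate of crops landing on areas other than the face.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def crop_on_face(salient_point: Tuple[int, int], face: Box) -> bool:
    """True if the chosen salient point falls within the face box."""
    x, y = salient_point
    x0, y0, x1, y1 = face
    return x0 <= x <= x1 and y0 <= y <= y1

def non_face_crop_rate(samples: List[Tuple[Tuple[int, int], Box]]) -> float:
    """Fraction of images whose crop centers on a non-face area."""
    misses = sum(not crop_on_face(point, face) for point, face in samples)
    return misses / len(samples)

# Hypothetical data: 3 of 4 crops land inside the face box.
data = [((50, 40), (30, 20, 70, 60)),
        ((10, 10), (30, 20, 70, 60)),
        ((45, 30), (30, 20, 70, 60)),
        ((60, 55), (30, 20, 70, 60))]
print(non_face_crop_rate(data))  # 0.25
```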
Ultimately, however, the biases were enough to make Twitter reevaluate its use of the algorithm:

“We considered the tradeoffs between the speed and consistency of automated cropping with the potential risks we saw in this research. One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people.”