People are accusing Twitter’s algorithm of being racist, and bias like this is a recurring problem with algorithms based on Machine Learning.
If you don’t train the algorithm on a representative data set, it will fail on whoever is missing from that data. For example, if your data set only has pictures of men, the algorithm will have problems when it is then tested on pictures of women.
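Here’s a minimal sketch of how you might spot that kind of skew: evaluate the model’s accuracy separately for each group instead of in aggregate. The `model.predict` interface and the group labels are hypothetical stand-ins, not any real API.

```python
# Minimal sketch: measure accuracy per demographic group to spot the
# kind of bias a skewed training set produces. "model" is any object
# with a predict(image) method; the group labels are hypothetical.
from collections import defaultdict

def accuracy_by_group(model, samples):
    """samples: iterable of (image, label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, label, group in samples:
        total[group] += 1
        if model.predict(image) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# A big gap between groups (say, "male" vs "female") is a sign that
# one of them was under-represented in the training data.
```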
This sure is an interesting thread.
Originally, Dantley, who works at Twitter, suggests the background is swaying the algorithm to choose the white guy. Graham then follows up with a set of tests.
In his reply there are 4 pictures. If you click each one, you can see the original image. Each image is very tall, which means Twitter has to crop it to display it in the tweet, and it seems to crop around whatever it finds most interesting in the picture. So it should detect 2 people and then has to choose one to display in the preview. In each of the 4 examples, it chooses the white man.
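Twitter hasn’t spelled out the exact model here, but this behaviour sounds like saliency-based cropping: score every pixel for how “interesting” it is, then crop a window around the highest-scoring spot. Here’s a rough sketch using OpenCV’s spectral-residual saliency detector (from the opencv-contrib-python package); this is not Twitter’s actual model, and the fixed preview height is my assumption.

```python
# Rough sketch of saliency-based cropping: find the most "interesting"
# point in a tall image and centre a fixed-height preview window on it.
# Requires opencv-contrib-python for the saliency module.
import cv2
import numpy as np

def crop_preview(path, preview_height=300):
    image = cv2.imread(path)
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    # Row/column of the single most salient pixel.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Centre the preview window on that point, clamped to the image bounds.
    top = min(max(y - preview_height // 2, 0),
              max(image.shape[0] - preview_height, 0))
    return image[top:top + preview_height]

# With two faces in one tall image, whichever face scores higher
# saliency wins the crop -- which is exactly where bias can creep in.
```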
Dantley follows it up with an experiment of his own, placing both men in the same outfit and also removing their hands (probably because that was easier to edit than swapping the clothes and adding the hands back in). This time the black guy is chosen.
I wonder what the outcome of this is going to be. Some people, including Dantley, suggest it shouldn’t crop the image at all, but I don’t want a massively tall image on my timeline. Maybe there is some other way of handling it that I’m not thinking of.