Discrimination in ML algorithms exists, and no, these are not liberal fairy tales

The human brain, as we all know, is full of prejudices. So a question arises: if machine learning "lives" by imitating our brain so closely, why shouldn't its algorithms be just as biased and just as unfair? Unfortunately, they often are.





Let's tell you exactly how.





Machine learning (ML) has long since left the lab. Today ML models (and the systems built on them) make decisions that directly affect people: who is approved for a loan, who is invited to a job interview, who is released on parole. The stakes are real.





An algorithm, it would seem, should be the very model of objectivity: it has no emotions, no upbringing, no political agenda. In practice, however, a model is only as fair as the data it learned from.





A model does not "invent" discrimination on its own. It learns it from historical data, and that data records decisions made by biased people. As a result, the algorithm faithfully reproduces, and sometimes amplifies, the very injustice it was supposed to eliminate.





How does bias manifest itself?

  • Hiring algorithms have discriminated by gender. A widely reported case is Amazon's experimental recruiting tool, which was trained on a decade of mostly male resumes and began downgrading resumes that mentioned the word "women's" (as in "women's chess club"). Amazon eventually scrapped the project.





  • Natural language processing (NLP) models fare no better: widely used hate-speech classifiers flag tweets written in African-American English as offensive (that is, produce a false positive) about 1.5 times more often than similar tweets written in standard English.





  • Risk-assessment systems in criminal justice discriminate too. The best-known example is COMPAS, an algorithm used by US courts to estimate a defendant's likelihood of reoffending. A ProPublica investigation found that black defendants were nearly twice as likely as white defendants to be falsely flagged as high risk, while white defendants were more often falsely rated low risk.
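Disparities like these are usually quantified as a gap in false positive rates between groups. A minimal sketch in Python, with made-up labels and predictions rather than real COMPAS or Twitter data:

```python
# A sketch of measuring disparate false-positive rates: for each group,
# count the fraction of truly negative cases the model flagged as positive.
# All labels and predictions below are invented for illustration.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): share of actual negatives predicted positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Hypothetical outcomes: 1 = "high risk", 0 = "low risk".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [0,   0,   0,   1,   0,   0,   0,   1]
y_pred = [1,   1,   0,   1,   1,   0,   0,   1]

for g in sorted(set(groups)):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    fpr = false_positive_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    print(f"group {g}: FPR = {fpr:.2f}")
```

With these toy numbers group "a" is wrongly flagged twice as often as group "b", which is exactly the kind of gap the audits above report.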





Where does bias come from?

Not from the algorithm itself: a model has no opinions, only parameters. Bias enters through the data, and it does so in several typical ways (the list is far from complete):





  • The data describe a biased world. If the historical decisions recorded in a dataset were discriminatory, a model trained on that dataset will learn to discriminate as well, simply by reproducing the patterns it finds.





  • Bias hides in seemingly neutral representations. Even if sensitive attributes are removed, a model can pick up their proxies, for example through word embeddings trained on large text corpora.





  • The sample is unrepresentative. Groups that are rare in the training data get worse predictions. In a well-known study of three commercial facial-recognition systems, error rates reached 35% for darker-skinned women, while for light-skinned men they stayed below 1%. The underrepresented group simply had too few examples.
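The effect of underrepresentation is easy to reproduce on synthetic data: when one group dominates the training set, a model tuned for overall accuracy ends up tuned for the majority. A sketch with made-up one-dimensional "scores" (all distributions and numbers are illustrative):

```python
import random

random.seed(0)

def sample(group, label):
    # Hypothetical 1-D score; the two groups have shifted distributions,
    # so the best decision threshold differs between them.
    base = 0.0 if group == "majority" else 1.0
    return base + (2.0 if label == 1 else 0.0) + random.gauss(0, 0.7)

# 95% of training examples come from the majority group.
data = []
for _ in range(2000):
    g = "majority" if random.random() < 0.95 else "minority"
    y = random.randint(0, 1)
    data.append((g, sample(g, y), y))

def acc(rows, t):
    """Accuracy of the rule 'predict 1 when score > t'."""
    return sum((x > t) == bool(y) for _, x, y in rows) / len(rows)

# "Training": pick the single global threshold with the best overall accuracy.
thresholds = [i / 10 for i in range(-10, 41)]
best_t = max(thresholds, key=lambda t: acc(data, t))

for g in ("majority", "minority"):
    rows = [r for r in data if r[0] == g]
    print(g, round(acc(rows, best_t), 2))
```

The chosen threshold sits near the majority group's optimum, so the minority group's accuracy comes out noticeably lower, even though nothing in the code mentions groups at training time.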





Who is to blame?

The people behind the model, not the model itself: those who collect the data, choose the features, and ship the system without checking it for bias. An algorithm merely optimizes the objective it is given; it cannot tell a fair pattern from an unfair one. So responsibility for a discriminatory model lies with its developers and the organizations that deploy it.





What can be done?

Eliminating bias entirely is probably impossible, but it can be measured and substantially reduced. Among other things, researchers suggest:





  • Audit the data before training: check how different groups are represented in the sample, and whether the historical decisions recorded in it were themselves discriminatory.





  • Debias the representations themselves. For word embeddings there are algorithms that remove the stereotyped component from the vectors, turning analogies like (man = programmer, woman = homemaker) into neutral ones like (man = programmer, woman = programmer).
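The projection idea behind embedding debiasing (removing the component of each word vector that lies along a bias direction, as in the "hard debiasing" of Bolukbasi et al.) can be sketched with toy two-dimensional vectors; all values here are invented for illustration:

```python
# Toy sketch of projection-based debiasing: subtract from each word
# vector its component along a "bias direction" (here, he - she).

def sub(u, v):   return [a - b for a, b in zip(u, v)]
def dot(u, v):   return sum(a * b for a, b in zip(u, v))
def scale(u, k): return [a * k for a in u]

def debias(word_vec, bias_dir):
    """Project out the bias direction from a word vector."""
    coef = dot(word_vec, bias_dir) / dot(bias_dir, bias_dir)
    return sub(word_vec, scale(bias_dir, coef))

he, she = [1.0, 1.0], [-1.0, 1.0]
bias_dir = sub(he, she)              # the "gender axis", here [2.0, 0.0]

programmer = [0.8, 0.5]              # leans toward `he` along that axis
print(debias(programmer, bias_dir))  # -> [0.0, 0.5]
```

After the projection, the vector is orthogonal to the gender axis, so `programmer` is equally close to `he` and `she` along that direction; real debiasing works the same way, just in hundreds of dimensions and with the bias direction estimated from many word pairs.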





Discrimination in ML is not a liberal fairy tale but a measurable engineering problem. And like any engineering problem, it can be solved, at least in part, if we are willing to look for it.







