Monday, February 13, 2017

- There Is (Probably) No Bias in Big Data

In the WaPo today there is a silly and deeply misleading op-ed on how big data may be ‘racially biased’ when used in the criminal justice system. The authors demand that the algorithms used be subject to ‘peer review’, itself a fairly misleading term in this context, so that any possibility of ‘racial bias’ can be purged from its relentless logic.

What they really mean is that, regardless of the differing rates at which people of different races commit crimes, and their differing recidivism rates, the algo should be modified to produce results that recommend the same outcomes for all criminals regardless of race. They want to subject the system to ‘peer review’ to make sure a bias is added to it in order to effect this change. It is essentially applying disparate-impact logic to probability math, as the sketch below illustrates.
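To make that concrete, here is a minimal sketch in Python of the arithmetic at issue. All the numbers are hypothetical, chosen only to illustrate the point: if two groups have different base rates of reoffending, a score calibrated to those rates will flag the groups at different rates, and forcing the flag rates to be equal requires a group-dependent adjustment, which is to say, an added bias.

```python
import random

random.seed(0)

def simulate_flag_rate(mean_risk, n=100_000, threshold=0.40, spread=0.15):
    """Share of a group flagged 'high risk' when individual risk
    estimates are spread around the group's (hypothetical) base rate."""
    flagged = sum(
        1 for _ in range(n)
        if min(max(random.gauss(mean_risk, spread), 0.0), 1.0) >= threshold
    )
    return flagged / n

# Hypothetical base rates, purely for illustration: group A reoffends
# at 30%, group B at 50%. A single calibrated threshold of 40% then
# flags the higher-base-rate group far more often.
rate_a = simulate_flag_rate(0.30)
rate_b = simulate_flag_rate(0.50)
print(f"group A flagged: {rate_a:.1%}")   # roughly 25%
print(f"group B flagged: {rate_b:.1%}")   # roughly 75%
```

Equalizing those two flag rates means moving the threshold (or shifting the scores) for one group but not the other, which is precisely the modification described above.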

It’s true that the data show many more blacks and Latinos stopped and arrested on a percentage basis compared to whites, and the advocates of adding bias to the system claim this is ‘racism’. But it isn’t, and there is already more than adequate proof of that.

When Jared Taylor and AmRen produced ‘The Color of Crime’, they compensated for this by using a survey of victims, in which the race of the perpetrator is identified by the victim of the crime. If you’re assaulted on the street, you want the perpetrator caught, so you are unlikely to report the race of the blue-eyed, blond-haired kid who assaulted you as black or Latino just to make a political point, regardless of your own race.

And the data they gleaned from this source showed that although blacks and Latinos are stopped and arrested more frequently on a percentage basis, the rates at which they are arrested correspond closely to the rates predicted by the victim survey. Ergo, no racism, no bias, and no unfairness. The sketch below shows the form of that comparison.
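As a rough illustration, here is a minimal sketch in Python of that consistency check. The shares below are hypothetical placeholders, not figures from the report; the point is only the shape of the test: each group's share of arrests is compared against its share of offenders as identified by victims, and ratios near 1.0 mean arrests track victim reports.

```python
# Hypothetical placeholder shares (NOT the actual figures from the
# report), used only to show the structure of the comparison.
arrest_share = {"white": 0.30, "black": 0.45, "latino": 0.25}
victim_report_share = {"white": 0.31, "black": 0.44, "latino": 0.25}

for group, arrested in arrest_share.items():
    reported = victim_report_share[group]
    ratio = arrested / reported  # ~1.0 means arrests match victim reports
    print(f"{group}: arrest share {arrested:.0%}, "
          f"victim-reported share {reported:.0%}, ratio {ratio:.2f}")
```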

I’m not saying that the algos are perfectly correct in every case. But they are probably a lot more honest than the ‘peers’ that activists would like to subject them to. To inject ‘disparate impact’ logic into the process would be to systematically encode an anti-white and anti-Asian bias into all future work, and though it would produce the result the leftist radicals want, it would do so only by ignoring reality in the same way our ‘human-driven’ decision making always has.
