There are various reasons why we would like machine learning models to support decision making: judges are human, and humans have biases, as shown in this book. On the other hand, there are reasons why we should be careful when considering the support of machine learning systems: machine learning models inherit human bias, mostly through data. Technology is not value-neutral; it is created by people who express a set of values in the things they create, and this often has unintended consequences, such as discrimination.
Each data point in a dataset represents a criminal case characterized by a set of features. Machine learning models learn from these features to distinguish recidivists from non-recidivists.
Age at the main crime, sex, nationality, sentence, number of previous crimes, year of the crime, whether probation was granted, etc. These are features related to demographics and criminal history, and we represent them in red.
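As an illustration only, a single case could be encoded roughly as follows; the column names and values are hypothetical placeholders, not the actual coding scheme of the dataset:

```python
# Minimal sketch: one case as a row of static "red" features plus the label
# that the models learn to predict. All names and values are hypothetical.
import pandas as pd

cases = pd.DataFrame([
    {
        "age_at_main_crime": 16,      # demographics
        "sex": "male",
        "nationality": "Spanish",
        "sentence": "probation",      # outcome of the main case
        "n_previous_crimes": 2,       # criminal history
        "year_of_crime": 2009,
        "probation_granted": True,
        "recidivist": 1,              # ground truth: reoffended during follow-up
    },
])
print(cases.head())
```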
The literature on fair algorithms mainly derives its fairness concepts from a legal context. Generally, a process or decision is considered fair if it does not discriminate against people on the basis of their membership in a protected group, such as sex or race. In this case we tested for discrimination against foreigners and on the basis of sex. We detect discrimination in a decision-making process (independent of whether it has a human or algorithmic origin) by testing the decisions against a “ground truth” (e.g. whether recidivism occurred within two years after the release date). Processes that are disproportionately correct or wrong for a particular protected group indicate discrimination.
Risk assessment tools
Tools which assist decision makers are nowadays used in criminal justice, medicine, finance, etc. One of the best-known tools for recidivism prediction is COMPAS, which stirred some controversy a while ago because it arguably discriminates against African-American people.
SAVRY stands for Structured Assessment of Violence Risk in Youth, and it is used in many countries across the world to assess the risk of violent recidivism in young people. It has mainly been designed to work well for violent crimes and male juveniles. In contrast to COMPAS, SAVRY is a transparent list of features/terms, and the final risk evaluation remains at the discretion of the human expert.
SAVRY is based on a questionnaire which collects information on: early violence, self-harm, violence in the home, poor school achievement, stress and poor coping mechanisms, substance abuse, and a criminal parent/caregiver. However, SAVRY is expensive, as it requires expertise and time to gather all that data. We wanted to see whether machine learning models are better at predicting recidivism, and whether they exhibit any discrimination, in comparison to SAVRY.
Experiments
In terms of input features we test the following three sets:
Static features (demographics and criminal history features) - red features
SAVRY features - blue features
Static + SAVRY - red+blue features
In terms of machine learning methods, we tested logistic regression, multi-layer perceptron, SVM, k-NN, random forest, and naive Bayes to report predictive performance (area under the ROC curve, AUC), but we kept the two best-performing ones, logistic regression and multi-layer perceptron, for the fairness evaluation.
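The following is a simplified sketch of this setup, not the exact experimental code; the DataFrame `df` and the column names are hypothetical placeholders for the static (red) and SAVRY (blue) features described above:

```python
# Sketch: train classifiers on each feature set and compare cross-validated AUC.
# `df` and all column names are assumed/hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

static_features = ["age_at_main_crime", "sex", "nationality",
                   "n_previous_crimes", "year_of_crime", "probation_granted"]
savry_features = ["early_violence", "self_harm", "home_violence",
                  "poor_school_achievement", "poor_coping", "substance_abuse",
                  "criminal_caregiver"]
feature_sets = {
    "static": static_features,                      # red
    "savry": savry_features,                        # blue
    "static+savry": static_features + savry_features,
}
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "mlp": MLPClassifier(max_iter=1000),
}

y = df["recidivist"]
for set_name, cols in feature_sets.items():
    X = pd.get_dummies(df[cols])                    # one-hot encode categoricals
    for model_name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{set_name:>14} | {model_name:>20} | AUC = {auc:.2f}")
```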
The dataset we used in our experiments comprises 855 juvenile offenders aged 12-17 in Catalonia, with crimes committed between 2002 and 2010. The release took place in 2010, and recidivism status was followed up in 2013 and 2015.
In our experiments, the SAVRY Sum (AUC 0.64) and the Expert evaluation (AUC 0.66) have lower predictive power than the ML models.
However, when looking at the classification errors made with respect to foreigners (false positive rates for foreigners compared to false positive rates for Spaniards), the ML models are generally more unfair.
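As an illustration, here is a minimal sketch of this group comparison, assuming arrays `y_true`, `y_pred`, and a boolean mask `is_foreigner` aligned with the test set:

```python
# Sketch of the group fairness check: compare false positive rates between
# foreigners and Spaniards. `y_true`, `y_pred`, `is_foreigner` are assumed
# NumPy arrays of equal length.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Fraction of actual non-recidivists wrongly predicted as recidivists."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

fpr_foreign = false_positive_rate(y_true[is_foreigner], y_pred[is_foreigner])
fpr_spanish = false_positive_rate(y_true[~is_foreigner], y_pred[~is_foreigner])
print(f"FPR foreigners: {fpr_foreign:.2f}, FPR Spaniards: {fpr_spanish:.2f}, "
      f"disparity: {fpr_foreign - fpr_spanish:+.2f}")
```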
In fact, the more non-SAVRY (red) features you introduce, the more you increase the disparity between Spaniards and foreigners. Even without the non-SAVRY features there is still some disparity (first two bars in the first column), which was expected: many state-of-the-art papers report discrimination even in the absence of problematic features.
The differences between the three feature sets led us to analyze feature importance using machine learning interpretability. While some models are interpretable by design, black-box models require post-hoc methods that provide an approximate interpretation. One of these frameworks, possibly the most widely used, is LIME.
For logistic regression, the importance is given by the weights learned by the model:
For the multi-layer perceptron, LIME yields the following global importance:
Note that when you combine non-SAVRY (red) and SAVRY (blue) features, the model relies more on the non-SAVRY features, i.e. on demographics and criminal history.
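For reference, below is a minimal sketch of how such importances could be obtained, assuming NumPy arrays `X_train` and `X_test`, a list `feature_names`, and the fitted `logreg` and `mlp` models from the setup above. The LIME aggregation (averaging absolute per-instance weights) is one simple way to approximate a global view and is not necessarily the exact procedure used in our analysis:

```python
# Sketch of post-hoc interpretation. For logistic regression the learned
# coefficients can be read off directly; for the black-box MLP we average
# the absolute LIME weights over the test instances.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

# Interpretable-by-design model: coefficients as importances.
logreg_importance = dict(zip(feature_names, logreg.coef_[0]))

# Black-box model: per-instance LIME explanations, aggregated into a global view.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["non-recidivist", "recidivist"], discretize_continuous=True)

weights = np.zeros(len(feature_names))
for row in X_test:
    exp = explainer.explain_instance(row, mlp.predict_proba,
                                     num_features=len(feature_names))
    for feature_idx, weight in exp.as_map()[1]:
        weights[feature_idx] += abs(weight)
mlp_importance = dict(zip(feature_names, weights / len(X_test)))
```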
As we mentioned, even not using the red features led to discrimination. This made us think that the causes of discrimination do not lie in the features but in the data itself. It is established in the literature that one of the most important sources of discrimination is the data itself. A good indicator of this is the difference in the prevalence of recidivism, also termed “base rates”, between Spaniards and foreigners. Basically, if 46% of the foreigners in your dataset are recidivists compared to only 32% of the Spaniards, then your model will learn that features correlated with being a foreigner are important. Such a model will yield more false alarms for foreigners.
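Continuing the hypothetical setup from above, checking the base rates per group is a short computation:

```python
# Sketch of the base-rate check: prevalence of recidivism per group, using the
# hypothetical `df` with `nationality` and `recidivist` columns from above.
base_rates = (df.assign(foreigner=df["nationality"] != "Spanish")
                .groupby("foreigner")["recidivist"].mean())
print(base_rates)  # a large gap between groups is a warning sign
```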
Therefore, we conclude that algorithms that predict recidivism in criminal justice should only be used with an awareness of such fairness issues.