Imagine you are using AI to judge whether someone should get parole, based on whether they are likely to commit another crime while on parole. The algorithm returns “likely” if it predicts that the person will commit a crime while on parole.

1. Who is affected by this algorithm?
2. What is a “false positive” in this case? Which of the people affected by the algorithm might care about the false positive rate? Why?
3. What is a “false negative” in this case? Which of the people affected by the algorithm might care about the false negative rate? Why?
4. Under which circumstances (if any) would you recommend use of this algorithm?
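
To make the terms concrete before answering, here is a minimal sketch, using entirely made-up counts for a hypothetical group of 100 parole candidates, of how the false positive rate and false negative rate of such a classifier would be computed:

```python
# Hypothetical outcomes for 100 parole candidates (made-up numbers).
# "Positive" = the algorithm returned "likely" (to commit a crime on parole).
true_positives = 20   # flagged "likely" and did commit a crime
false_positives = 15  # flagged "likely" but did not commit a crime
true_negatives = 55   # flagged "unlikely" and did not commit a crime
false_negatives = 10  # flagged "unlikely" but did commit a crime

# False positive rate: of the people who would NOT have committed a crime,
# what fraction were wrongly labeled "likely"?
fpr = false_positives / (false_positives + true_negatives)

# False negative rate: of the people who DID commit a crime,
# what fraction were wrongly labeled "unlikely"?
fnr = false_negatives / (false_negatives + true_positives)

print(f"False positive rate: {fpr:.2f}")  # errors that deny parole unfairly
print(f"False negative rate: {fnr:.2f}")  # errors that release someone who reoffends
```

Note that the two rates have different denominators and harm different groups, which is why different stakeholders (candidates denied parole versus the public) may care about different rates.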