Background: Child welfare agencies in many countries are increasingly using predictive analytics to influence decisions about the allocation of resources and services, risk, and intervention.
Analysis: The speed with which predictive analytics is being introduced into child welfare services is problematic. Research on this issue raises significant concerns about inequality, transparency, public accountability, and oversight.
Conclusion and implications: These systems are being introduced before adequate review and the necessary public debate about whether they should be used in areas of social care. For such debate to occur, there needs to be: a) more information about where and how these systems are being implemented; b) greater effort to generate wider public deliberation about their use; and c) more investigation of their impact on practitioners and families.
Local authorities in the United Kingdom and administrations in other countries are introducing predictive analytics into child welfare services. In the United Kingdom, local authorities are doing so partly in response to austerity policies that have led to drastic cuts in funds and services. Local authorities in England have had their funding from central government cut by nearly a third in real terms since the Conservatives launched their austerity program in 2010 (Innes & Tetlow, 2015). The poorest councils, such as those in the metropolitan areas of the north of England and in east London boroughs, have been hit the hardest and have been unable to raise revenue in the way that wealthier councils in west London and the south of England can (Gray & Barford, 2018; Petrie, Ayrton, & Tinson, 2018). The corollary is that the councils with the greatest need are the least able to respond, since the cuts have come at the same time as demand has increased. To offset this loss, some councils have adopted predictive analytics systems with the aim of making better use of scarce and shrinking resources: identifying those in need faster, gaining a better understanding of the issues, and facilitating resource allocation. This article is part of the work being conducted at the Data Justice Lab at Cardiff University to investigate the uses of predictive analytics and scoring systems in the public sector. It presents a literature review highlighting the risks that come with the use of predictive analytics and automation in child welfare.
The use of predictive analytics systems for the administration of child welfare services is an example of the kinds of algorithmic tools and techniques being introduced in social services sectors more broadly. The councils of Thurrock, Newham, Tower Hamlets, Hackney, and Bristol in the U.K. are, at the time of writing, trialing or using predictive analytics in the administration of their child welfare systems. Some jurisdictions are acquiring technologies from private sector vendors, such as Xantura’s Children’s Safety Profiling Model (London Borough of Hackney, 2017; Xantura, 2018). Other councils, such as Bristol, are developing their own in-house systems. These new decision support systems are being introduced into the administration of child welfare services with good intentions, and as a result of scarcity. However, they can have unintended negative consequences for those whose personal information is included, often without consent. Very little is known about these systems, as vendors such as Xantura argue that they cannot release details about the technologies because doing so may prejudice potential interventions and compromise their commercial interests (London Borough of Hackney, 2017). This article aims to demystify these systems somewhat. It identifies a number of risks and concerns and questions whether new data-driven decision support systems, such as child welfare predictive analytics, should be implemented at all. The article raises concerns about the closed and insular manner in which these systems are introduced and calls for greater transparency, the implementation of accountability measures, and public engagement.
Predictive analytics can be defined as a system that combines data, algorithms, machine learning, and statistical techniques to predict what may happen in the future. These systems are being used across sectors, including retail, where they are used to make decisions about how to target consumers (Turow, 2014, 2017); health research and services (Powles & Hodson, 2017; Prainsack, 2017); and government, where they are used to try to improve productivity and services (Redden, 2018). In child welfare, predictive analytics is being used to estimate the risk of a child being abused and the likelihood that harmful events will recur; it is also being used to understand how different aspects of a child welfare and social system interact in the administration of a child’s welfare and to learn more about agency operations (Teixeira & Boyas, 2017). The results of this literature review suggest that the goal for these systems is often to identify children and families in need and to intervene early. The argument for the application of predictive analytics in child welfare is often that authorities should take advantage of all the tools necessary to prevent harm to a child and to help families before they are in crisis. The developers of predictive analytics tools argue that these are meant to aid social workers in making case-management decisions, not to override them (Dencik, Hintz, Redden, & Warne, 2018; Eubanks, 2018). Vendors and administrators stress that predictive analytics should be one tool among many for social workers. In addition, it is argued that these systems can help resource-poor public administrations do more with less. This logic often overshadows how the uses of predictive analytics can result in unintentional harm.
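To make concrete what such a system does computationally, the sketch below shows, in Python, how a handful of administrative variables might be combined into a single risk score through a weighted logistic function. The variable names, weights, and case values are hypothetical and invented purely for illustration; they do not describe Xantura’s model or any other deployed system.

```python
# A minimal, purely illustrative sketch of how a predictive risk-scoring model
# turns administrative variables into a single "risk score". The variable
# names, weights, and case values are hypothetical, not taken from any
# deployed child welfare system.
import math

def risk_score(features, weights, bias):
    """Combine weighted inputs through a logistic function into a 0-1 score."""
    linear = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-linear))

# Hypothetical case record drawn from linked administrative datasets.
case = {
    "prior_referrals": 2.0,      # number of previous referrals to the agency
    "months_on_benefits": 18.0,  # a proxy for poverty, as critics point out
    "single_parent": 1.0,        # 1 = yes, 0 = no
}
weights = {"prior_referrals": 0.6, "months_on_benefits": 0.03, "single_parent": 0.4}

print(f"risk score: {risk_score(case, weights, bias=-2.5):.2f}")  # roughly 0.41
```

Even this toy version makes visible the design choices (which variables to include and how heavily to weight them) that the remainder of this article treats as sites of risk.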
Predictive analytics is also being adopted by child welfare agencies in the United States, and other countries, such as New Zealand and Canada, are considering the use of these systems. For example, the Saskatoon Police Service is currently working with other levels of government to consider their application (Stoneham, Stockdale, & Gossner, 2017). It is difficult to know precisely where and how predictive analytics and other types of algorithmically driven systems are being introduced in child welfare and public services more broadly, since few governments list the types of administrative systems they procure (Treasury Board Secretariat of Canada, 2017). Further, it has been argued that, given the risks that come with algorithmically informed decision-making, the use of these systems should be made public and a list of where and how such systems are in operation should be provided (Dencik, Hintz, Redden, & Warne, 2018; Science and Technology Committee, 2018).
The use of predictive analytics in child welfare is part of a longer history of the computerization, automation, and rationalization of social work, in which a range of tools has been introduced over the past three decades to support decision-making processes. These include guidelines, checklists, risk-assessment tools, new information systems, and computerized protocols intended to modernize systems and procedures (Alfandari, 2017; Gillingham & Graham, 2017). Unfortunately, the introduction of what was claimed to be innovation has often undermined the practice of social work by impeding decision-making instead of enhancing it, and by directing attention away from the relational aspects of social work (Gillingham, 2018; Munro, 2010; White, Broadhurst, Wastell, Peckover, Hall, & Pithouse, 2009). Given the cost of introducing new information systems, both in monetary terms and in terms of the risks to the people these systems are intended to support, it is clearly more ethical and just for governments introducing such changes to appreciate the complexity involved from the start. In the case of using predictive analytics in the administration of child welfare services, this would involve identifying the full range of risks the new systems present, investigating long-term unintended consequences, and enabling greater debate about the changes taking place. The following sections present the results of a literature review that identifies five broad groups of issues and risks associated with the deployment of predictive analytics in child welfare systems, namely: 1) lack of transparency, 2) bias, 3) accuracy and reliability, 4) stigmatization, and 5) the limits of the data.
It is often difficult for the users of social welfare predictive analytics tools to understand how these systems work and how to interrogate them, especially since this is not a requisite skill set in this line of work. This is also a common problem that has been identified more broadly in the application of algorithmic and artificial intelligence systems in other sectors (Pasquale, 2015). A range of factors can make it difficult for people to interrogate these systems or their outputs. For example, these systems are often black boxed or appear to be a form of “technomagic,” whereby intellectual property rules preclude the ability to assess how they actually work; even if they are open, often their construction involves multiple makers and iterations, which makes it difficult even for specialists to unpack them (Gillingham, 2018; Pasquale, 2015). Also, many of these systems go unquestioned; there is a general belief that these new data systems are objective and neutral, which makes it “normal” not to interrogate them (Kitchin, 2017).
The recent European General Data Protection Regulation (EU, 2016), which came into force in May 2018, was meant as a corrective measure to address the lack of transparency surrounding data practices more generally. Under the GDPR, individuals now have the right to request an explanation of how an automated system makes decisions about them. While this is useful, in the case of child welfare, those enlisted in predictive analytics systems do not necessarily know that their data are being used or that decisions about them are being made through the use of these systems (Dencik, Redden, Hintz, & Warne, 2019). This raises several questions related to transparency. For example, do the data subjects of child welfare predictive analytics know they are being subjected to automated risk assessment? How and by what means should people be informed about automated decision-making? Does the fact that social workers know about their use imply that notice has been given? Finally, if a person is notified, given the uneven knowledge and power structures in family investigations, how likely is it that they will be able and willing to challenge the outcome of an automated decision (Keddell, 2018)?
The use of predictive analytics in child welfare, as with applications in other areas such as policing (Shapiro, 2017), sentencing (Angwin, Larson, Mattu, & Kirchner, 2016), performance scoring (O’Neil, 2016), and credit scoring (Hurley & Adebayo, 2016), can reinforce and exacerbate discrimination. In these contexts, predictive analytics systems, like other optimizing systems, can become co-creators and shapers of the environments they analyze (McQuillan, 2017; Overdorf, Kulynych, Balsa, Troncoso, & Gürses, 2018). For example, the production of risk scores and automated assessments about families can influence those who work with them. The systems may generate negative feedback loops because biases are embedded in the data systems themselves: data inputs are not necessarily correct, objective, or neutral. As Emily Keddell (2018) notes, child welfare data are not a record of the truth, nor of all incidents of child abuse, since they are a record of the “administrative recording of factors—such as reports to child protection services or legal orders” (par. 2). If those who report have class and racial biases about those they are investigating, these biases will be reproduced in the reporting, the data, and the outputs of the predictive system. There is, therefore, the potential in these circumstances for the model to reinforce longstanding racist and class biases (Eubanks, 2018; O’Neil, 2016).
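To illustrate the feedback-loop dynamic described above, the toy simulation below assumes, purely for the sake of argument, that a risk score is driven partly by prior recorded contact with services and that high scores trigger investigations that create new records. The groups, numbers, and scoring rule are invented; the point is only to show how an initial recording disparity can compound over time.

```python
# A toy simulation of the feedback loop described above. It assumes, purely for
# illustration, that a family's risk score rises with the number of recorded
# contacts with services, and that a high score triggers an investigation that
# itself creates a new record. All groups and numbers are invented.
import random

random.seed(0)

# Two groups with identical underlying need, but group A starts with more
# recorded contacts (e.g., because its members rely on public services).
population = ([{"group": "A", "records": 2} for _ in range(500)] +
              [{"group": "B", "records": 0} for _ in range(500)])

def score(person):
    # The score tracks recorded contact with services, not underlying need.
    return min(1.0, 0.1 + 0.15 * person["records"])

for year in range(5):
    for person in population:
        if random.random() < score(person):  # high score -> more likely investigated
            person["records"] += 1           # investigation -> another record

for group in ("A", "B"):
    average = sum(p["records"] for p in population if p["group"] == group) / 500
    print(f"group {group}: average recorded contacts after 5 rounds = {average:.1f}")
```

In this invented scenario the group that starts out more visible to services accumulates records fastest, even though the two groups were constructed to have identical underlying need.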
Previous investigations provide some insight into how bias can enter predictive analytics systems in child welfare. Both Allegheny County in the United States and the New Zealand government enabled investigators to study their systems. In New Zealand, concerns raised by this research led to further study before the decision was made not to implement the predictive system investigated. In the United States, Virginia Eubanks (2018) raised concerns about how bias can enter a system through the over-representation of a particular population in the datasets being used. She questioned the choice of datasets, the assumptions embedded in the types of variables identified as significant, and how these variables were weighted. For example, she discovered the use of variables such as the length of time a parent had received social benefits and whether or not they were a single parent. Another problem identified by Eubanks (2018) was that the dataset used to generate the scoring system comprised only people receiving benefits; those accessing private services did not have their data included. The resulting risk scores were biased because they were based only on those using public services rather than the entire population. Eubanks found that a quarter of the variables used in the Allegheny model were “direct measures of poverty” (2018, p. 156). As she notes, this is particularly significant because, in the United States, 75 percent of child abuse investigations concern neglect rather than physical or sexual abuse, and the definition of neglect is subjective (Eubanks, 2018). Philip Gillingham and Timothy Graham’s (2017) study of the New Zealand model similarly noted that the system punished the poor, as the most heavily weighted variables were proxies for poverty. Because the public assistance database was used to train the algorithm, the length of time a parent had been on benefits emerged as a heavily weighted and significant variable. In both cases, assumptions about risk were reflected in the choice of datasets and in the weighting of variables, ultimately affecting the output guiding decision-making.
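The sketch below illustrates, with entirely synthetic data, the mechanism Eubanks and Gillingham and Graham describe: when a model is trained on records of who gets reported and investigated, rather than on who actually causes harm, a poverty proxy such as time on benefits can acquire predictive weight simply because benefit receipt makes families more visible to reporters. The features, rates, and model here are assumptions for illustration, not a reconstruction of the Allegheny or New Zealand models.

```python
# Synthetic illustration of how selection and reporting bias become model bias.
# In this toy world, underlying harm is unrelated to benefit receipt, but harm
# is more likely to be *reported and recorded* the more visible a family is to
# public services. A model trained on the recorded outcome then assigns weight
# to the poverty proxy. All rates and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

months_on_benefits = rng.integers(0, 60, size=n)   # poverty proxy
true_risk_factor = rng.normal(size=n)              # the factor actually related to harm

# Underlying harm depends only on the true risk factor...
harm = rng.random(n) < 1 / (1 + np.exp(-(true_risk_factor - 2)))

# ...but the chance of harm being reported rises with visibility to services.
reported = harm & (rng.random(n) < 0.2 + 0.01 * months_on_benefits)

X = np.column_stack([months_on_benefits, true_risk_factor])
model = LogisticRegression().fit(X, reported)
print("weight on months_on_benefits:", round(model.coef_[0][0], 3))
print("weight on true risk factor:  ", round(model.coef_[0][1], 3))
# The poverty proxy receives a positive weight purely through differential
# reporting: bias in the data becomes bias in the model.
```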
Accuracy is a significant issue in risk scoring. Those developing algorithmic systems to identify risk must negotiate between false positives and false negatives: the objective is to accurately identify those at high risk without too often misidentifying people as high risk (Williams, 2017), while also ensuring that the system does not miss people it should have flagged. Given that predictive systems will always wrongly flag some people as a risk, should such systems be used when they can influence the lives of families? Further, how inaccuracy is dealt with matters. What opportunities are people offered to challenge or remove themselves from these systems when a high-risk score is wrong? What support will be provided to those misidentified? How can they challenge these systems, and who will help them do so?
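This trade-off can be made concrete with synthetic numbers: in the sketch below, a hypothetical score distribution is thresholded at different cut-offs, and every choice of threshold shifts harm between families wrongly flagged and high-risk cases missed. The base rate, score distributions, and thresholds are assumed for illustration only.

```python
# Synthetic illustration of the false positive / false negative trade-off.
# Moving the decision threshold on a risk score shifts harm between families
# wrongly flagged and genuinely high-risk cases missed. The base rate and
# score distributions below are assumptions, not figures from any real system.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
actual_harm = rng.random(n) < 0.05  # rare outcome, as in practice

# Imperfect scores: truly high-risk cases tend to score higher, with overlap.
scores = np.where(actual_harm,
                  rng.normal(0.65, 0.15, n),
                  rng.normal(0.35, 0.15, n))

for threshold in (0.4, 0.5, 0.6, 0.7):
    flagged = scores >= threshold
    false_positives = int(np.sum(flagged & ~actual_harm))
    false_negatives = int(np.sum(~flagged & actual_harm))
    print(f"threshold {threshold:.1f}: {false_positives} families wrongly flagged, "
          f"{false_negatives} high-risk cases missed")
```

There is no threshold in this toy example that eliminates both kinds of error; the choice of where to set it is a value judgment about whose harm matters, not a purely technical calibration.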
Transparency about how accurate and reliable a system is should be a basic reporting requirement for the agencies that use these systems and the vendors that promote them. The number of people a predictive analytics system correctly identifies is not the only measure of its effectiveness; assessment should also account for how many people are incorrectly identified. Keddell (2018) argues that when it comes to predictive analytics, “the devil is in the detail.” Errors can be common. Christopher Church and Amanda Fairchild’s (2017) assessment of a predictive model used in Los Angeles’ children’s services found that a system promoted as highly effective in practice “produced a false alarm 96 percent of the time” (p. 71). Is the accuracy rate of a predictive analytics model good enough to warrant the risk of its use? Accuracy and reliability are key ethical considerations that should not be left solely to the makers of predictive analytics systems in child welfare or social services.
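High false-alarm rates of this kind are unsurprising once base rates are taken into account. The back-of-envelope calculation below, using assumed figures rather than numbers from the Los Angeles model, shows how a classifier that looks strong on paper can still mean that most flags are false alarms when the outcome being predicted is rare.

```python
# Back-of-envelope arithmetic behind high false-alarm rates. The sensitivity,
# specificity, and base rate below are assumed for illustration; they are not
# figures from the Los Angeles model discussed above.
population = 100_000
base_rate = 0.02       # assume 2% of screened cases are genuinely high risk
sensitivity = 0.90     # share of true high-risk cases the model flags
specificity = 0.90     # share of low-risk cases the model correctly clears

true_positives = population * base_rate * sensitivity
false_positives = population * (1 - base_rate) * (1 - specificity)
false_alarm_share = false_positives / (true_positives + false_positives)
print(f"share of flags that are false alarms: {false_alarm_share:.0%}")  # ~84%
```

Under these assumed figures, roughly five out of six flags point at families who are not high risk, which is why reporting accuracy without reporting the base rate and the false-alarm share can be misleading.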
We know too little about how these risk scores are being used in practice by social workers. Eubanks’ (2018) study of social workers in Allegheny County demonstrated that the outputs of predictive systems can influence and bias those using them, even when they know that accuracy and reliability are problems. Case study interviews with those implementing data systems in the public sector in the U.K. show that little attention is being devoted to investigating how the use of these tools may be changing social work or the lives of those whose data are caught up in these systems (Dencik, Redden, Hintz, & Warne, 2019). Research in the United States has found that caseworkers are not getting the training needed to understand and interpret risk scores. In a context where few resources are available for those identified as at risk, agencies are raising concerns that caseworkers who do not understand the limits of risk scores may try to avert risk by placing children in care because the resources to intervene are not available. The fear is that this will put further pressure on the system by increasing the number of children in care, leading to increased caseloads and more separated families (Teixeira & Boyas, 2017).
What are the long-term implications of being wrongfully labelled as at high risk of abusing your children? Little attention is being devoted to investigating and recording the kind of harm that a false flag can cause. Amnesty International’s (2018) research on the Gangs Matrix, a U.K. database of suspected gang members, demonstrated how surveillance and secret labels can stigmatize young people throughout their interactions with government, affecting their opportunities and prospects as they look for work, seek housing, and go through school. This reinforces previous research on the effects of labelling young offenders as “high risk” (Restivo & Lanier, 2015). A label is a symbolic marker that can follow people throughout their interactions with the state, particularly when such markers are digitized and shared (Murphy, Fuleihan, Richards, & Jones, 2011). If someone turns their life around, to what extent are those holding their electronic records able to account for this change? Do those who have been labelled have the opportunity to change how they have been tagged? Do people even know they have been labelled in a system?
The data held by social welfare agencies are limited. This is primarily because data systems and privacy regulations often determine and limit the kind of information that can be recorded. Also, as noted by Gillingham and Graham (2017):
[T]he data that exist on human service information systems are created by people making subjective decisions about what to record and exclude. … [T]he introduction of information systems has reconfigured the kinds of information that social workers use to make decisions about intervention, this has consequences for the kind of big data that are available to be mined—in-depth social explanations of the complex problems faced by service users have been replaced by informational surface descriptions that rely on the codification of characteristics in order to predict the risk of adverse effects. Narrative accounts of the circumstances of service users have been lost in databases with information geared more to operationality than meaning. (p. 139)
In response to this limitation, Gillingham and Graham (2017) argue for “reflexive data science” (p. 143). One way of doing this is to ensure that notes can be used to document the subjective decisions made about data recording and transformations. A further challenge is that systems such as these are limited by the datasets they use and the kind of information they collect, which biases what can be “known” and what kind of information is treated as valuable. For example, a predictive analytics system that looks for correlations and patterns in order to identify at-risk children tells us little about the kinds of factors that contribute to healthier families: such data do not capture the positive effects of supportive programs such as “Sure Start” in the U.K. or after-school clubs, or the negative effects of cuts to these types of programs. Another limit is that these systems, in their emphasis on correlation over causation, can individualize social problems by directing attention away from their structural causes (Keddell, 2015). Finally, the risk scores produced by predictive systems must themselves be understood as limited, since a score is often not accompanied by an explanation of how it was created, how it should be interpreted, and why a person was flagged. For these reasons, Church and Fairchild (2017) argue that such systems must provide “contextual reasoning for why certain cases are being flagged” (p. 78).
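As a rough sketch of what such contextual reasoning could look like in the simplest case, the snippet below breaks a linear risk score down into per-variable contributions so that a worker can see what drove a flag. It reuses the hypothetical weights and case values from the earlier illustration and does not describe any vendor’s explanation facility.

```python
# A rough sketch of "contextual reasoning" for a flagged case: alongside the
# score, report how much each input contributed to it. The weights and case
# values are the same hypothetical ones used in the earlier illustration.
weights = {"prior_referrals": 0.6, "months_on_benefits": 0.03, "single_parent": 0.4}
case = {"prior_referrals": 2.0, "months_on_benefits": 18.0, "single_parent": 1.0}

contributions = {name: weights[name] * case[name] for name in weights}
for name, value in sorted(contributions.items(), key=lambda item: -item[1]):
    print(f"{name:>20}: contributes {value:+.2f} to the raw score")
```

Even so minimal a breakdown makes the model’s assumptions visible and contestable in a way a bare score does not, which is precisely the point of the call for contextual reasoning.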
The issues and risks identified in this article demonstrate the need for greater public deliberation about the use of predictive analytics in child welfare, particularly now that these systems are being introduced. This should happen sooner rather than later, before government bodies become technologically locked in to private companies through contractual arrangements and technological momentum makes it difficult to change a system (McQuillan, 2017). This article has summarized a number of concerns and risks identified through child welfare case studies and a literature review conducted as part of Data Justice Lab research. These concerns and risks relate to: 1) lack of transparency; 2) bias; 3) accuracy and reliability; 4) stigmatization; and 5) the limits of the data. When predictive analytics systems are being considered for social services, there needs to be greater public consultation before implementation, including the option of the public deciding on “no-go” areas. Since the identified risks have the potential to disproportionately harm already marginalized communities and may further reinforce inequality, particular effort is needed to engage those who stand to be most affected. Often these systems are well intentioned and implemented by those trying to do more with less. Irrespective of good intentions, there is a danger that the risks and unintentional harms these systems can cause are being dismissed or ignored as more and more public bodies introduce them. This is not in anyone’s interest. The potential problems and risks embedded in these systems, as identified in this article, make clear that critique and dissent, particularly from those affected by these systems as well as from practitioners, must be encouraged. Public administrators need to be more transparent about the range of risks that come with these systems. Ethical considerations should be publicly deliberated. Input from those affected by these systems, including practitioners such as social workers, should be part of systems of oversight.
At a societal level, it is necessary to be cognizant of the kind of society and future being created through the use of new data systems in public services. Public bodies are building vast, interlinked datasets about the people who rely on public services and then subjecting those whose data are held in these systems to algorithmically informed decision-making (Eubanks, 2018). The over-representation of poor populations in these systems suggests we are moving toward a society in which the poor are always under suspicion. Discourses and belief systems holding that there can be simple technocratic solutions to complex social problems can be highly influential (Gillingham, 2018; Morozov, 2013). Such techno-solutionism can direct attention away from the larger political and economic forces leading to family breakdown.
The fact that all of this is already happening puts societies at risk of normalizing datafied practices before there has been a chance for debate (McQuillan, 2017). As Dan McQuillan argues, addressing these imbalances “requires something radically democratic” (2017, p. 7). Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker (2018) recommend a range of due-process infrastructures to govern predictive analytics systems. Cathy O’Neil (2016) argues that accountability should extend across the life cycle of projects and involve auditing the integrity of the data; the terms being used and the definition of success; the accuracy of models, with particular attention to whom they fail; the long-term effects of the algorithms being used; and, finally, the feedback loops being created through new big data applications. Others suggest changes to governance structures, such as the implementation of people’s councils (McQuillan, 2017) and national algorithm safety boards (Shneiderman, 2016). Common to all of these suggested approaches is the recognition that the solution must be more than technical: challenging the risks that come with datafied systems must be tied to efforts to advance social justice (Dencik, Hintz, & Cable, 2016).
This research is part of a larger Data Justice Lab co-led project investigating the uses of data systems in the U.K. public sector. The project is funded by the Open Society Foundations. Special thanks to my Data Justice Lab colleagues Lina Dencik, Arne Hintz, Emiliano Treré, Harry Warne, and Jessica Brand.
Alfandari, Ravit. (2017). Systemic barriers to effective utilization of decision-making tools in child protection practice. Child Abuse & Neglect, 67, 207–215.
Amnesty International. (2018). Met Police using “racially discriminatory” Gangs Matrix database. URL: https://www.amnesty.org.uk/press-releases/met-police-using-racially-discriminatory-gangs-matrix-database [September 30, 2018].
Angwin, Julia, Larson, Jeff, Mattu, Surya, & Kirchner, Lauren. (2016, May 23). Machine bias. ProPublica. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [September 2, 2016].
Church, Christopher E., & Fairchild, Amanda J. (2017). In search of a silver bullet: Child welfare’s embrace of predictive analytics. Juvenile & Family Court Journal, 68(1), 67–81.
Council of the European Union/European Parliament. (2016, April 27). Regulation of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (General Data Protection Regulation). URL: https://op.europa.eu/en/publication-detail/-/publication/3e485e15-11bd-11e6-ba9a-01aa75ed71a1/language-en [December 15, 2019].
Dencik, Lina, Hintz, Arne, & Cable, Jonathan. (2016). Towards data justice? The ambiguity of anti-surveillance resistance in political activism. Big Data & Society, July–December, 1–12. doi: 10.1177/2053951716679678
Dencik, Lina, Hintz, Arne, Redden, Joanna, & Warne, Harry. (2018). Data scores as governance: Investigating uses of citizen scoring in public services [Project report]. Cardiff, UK: Data Justice Lab. URL: https://datajustice.files.wordpress.com/2018/12/data-scores-as-governance-project-report2.pdf [March 1, 2019].
Dencik, Lina, Redden, Joanna, Hintz, Arne, & Warne, Harry. (2019). The ‘golden view’: Data-driven governance in the scoring society. Internet Policy Review, 8(2). doi: 10.14763/2019.2.1413
Eubanks, Virginia. (2018). Automating inequality. New York, NY: Macmillan.
Gillingham, Philip. (2018). From bureaucracy to technocracy in a social welfare agency: A cautionary tale. Asia Pacific Journal of Social Work and Development. doi: 10.1080/02185385.2018.1523023
Gillingham, Philip, & Graham, Timothy. (2017). Big data in social welfare: The development of a critical perspective on social work’s latest “electronic turn.” Australian Social Work, 70(2), 135–147.
Gray, Mia, & Barford, Anna. (2018). The depths of the cuts: The uneven geography of local government austerity. Cambridge Journal of Regions, Economy and Society, 11(3), 541–563.
Hurley, Mikella, & Adebayo, Julius. (2016). Credit scoring in the era of big data. Yale Journal of Law and Technology, 18(1), 1–69. URL: http://digitalcommons.law.yale.edu/yjolt/vol18/iss1/5 [July 2, 2019].
Innes, David, & Tetlow, Gemma. (2015, September). Delivering fiscal squeeze by cutting local government spending. Fiscal Studies, 36(3), 303–325. URL: https://www.ifs.org.uk/publications/8033 [October 5, 2018].
Keddell, Emily. (2018, April 6). Risk prediction tools in child welfare contexts: The devil is in the detail. husITa. URL: http://www.husita.org/risk-prediction-tools-in-child-welfare-contexts-the-devil-in-the-detail/ [August 7, 2018].
Keddell, Emily. (2015). The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy, 35(1), 69–88.
Kitchin, Rob. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29.
London Borough of Hackney. (2017, April 17). FOI response to request by Mr. Knuutila. URL: https://www.whatdotheyknow.com/request/documents_relating_to_the_childr#incoming-1143765 [September 10, 2018].
McQuillan, Dan. (2018). People’s councils for ethical machine learning. Social Media + Society, April–June, 1–10.
Morozov, Evgeny. (2013). To save everything, click here: Technology, solutionism and the urge to fix problems that don’t exist. New York, NY: Allen Lane.
Munro, Eileen. (2010, October 1). The Munro review of child protection. Part one: A systems analysis. London, UK: Department for Education. URL: https://www.gov.uk/government/publications/munro-review-of-child-protection-part-1-a-systems-analysis [December 10, 2016].
Murphy, Daniel S., Fuleihan, Brian, Richards, Stephan C., & Jones, Richard S. (2011). The electronic “scarlet letter”: Criminal backgrounding and a perpetual spoiled identity. Journal of Offender Rehabilitation, 50(3), 101–118.
O’Neil, Cathy. (2016). Weapons of math destruction. New York, NY: Crown.
O’Neil, Cathy. (2017). The era of blind faith in big data must end. TED Talks. URL: https://www.ted.com/talks/cathy_o_neil_the_era_of_blind_faith_in_big_data_must_ed [October 10, 2017].
Overdorf, Rebekkah, Kulynych, Bogdan, Balsa, Ero, Troncoso, Carmela, & Gürses, Seda. (2018). POTs: Protective optimization technologies. Computers and Society. URL: https://arxiv.org/pdf/1806.02711.pdf [October 3, 2018].
Pasquale, Frank. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.
Petrie, Issy, Ayrton, Carla, & Tinson, Adam. (2018, September 11). A quiet crisis: Changes in local government spending on disadvantage. London, UK: Lloyds Bank Foundation for England and Wales. URL: https://www.npi.org.uk/publications/local-government/quiet-crisis/ [October 5, 2018].
Powles, Julia, & Hodson, Hal. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4), 351–367.
Prainsack, Barbara. (2017). Research for personalized medicine: Time for solidarity. Medicine and Law, 36(1), 87–98.
Redden, Joanna. (2018). Democratic governance in an age of datafication: Lessons from mapping government discourses and practices. Big Data & Society. doi: 10.1177/2053951718809145
Reisman, Dillon, Schultz, Jason, Crawford, Kate, & Whittaker, Meredith. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now, 1–22. URL: https://ainowinstitute.org/aiareport2018.pdf [July 10, 2018].
Restivo, Emily, & Lanier, Mark M. (2015). Measuring the contextual effects and mitigating factors of labelling theory. Justice Quarterly, 32(1), 116–141.
Science and Technology Committee. (2018). Algorithms in decision-making inquiry. London, UK: Parliament. URL: https://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/inquiries/parliament-2017/algorithms-in-decision-making-17-19/ [January 10, 2019].
Shapiro, Aaron. (2017). Reform predictive policing. Nature, 541(7638), 458–460. URL: https://www.nature.com/news/reform-predictive-policing-1.21338 [July 2, 2019].
Shneiderman, Ben. (2016). Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. PNAS, 113(48), 13538–13540.
Stoneham, Jennifer, Stockdale, Keira, & Gossner, Delphine. (2017, October 5). Emerging developments in evidence-based practices in child welfare: The role of proactive, data-driven, community safety interventions. Prevention Matters Conference. URL: https://skprevention.ca/wp-content/uploads/2017/09/October-5-1050-1150-Keira-Stockdale-Emerging-Developments.pdf [October 5, 2018].
Teixeira, Christopher, & Boyas, Matthew. (2017, June). Predictive analytics in child welfare. U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation. URL: https://aspe.hhs.gov/system/files/pdf/257846/PACWAnIntroductionAdministratorsPolicyMakers.pdf [October 5, 2018].
Treasury Board Secretariat of Canada. (2017). Guidelines on the proactive disclosure of contracts. Treasury Board Secretariat. URL: https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=14676 [February 10, 2019].
Turow, Joseph. (2014). The daily you. London, UK: Yale University Press.
Turow, Joseph. (2017). The aisles have eyes: How retailers track your shopping, strip your privacy, and define your power. London, UK: Yale University Press.
White, Sue, Broadhurst, Karen, Wastell, David, Peckover, Sue, Hall, Chris, & Pithouse, Andy. (2009). Whither practice-near research in the modernization programme? Policy blunders in children’s services. Journal of Social Work Practice, 23(4), 401–411.
Williams, Simon. (2017, February 17). Errors in Australia’s Centrelink debt recovery system were inevitable as in all complex systems. The Register. URL: https://www.theregister.co.uk/2017/02/17/errors_in_centrelinks_debt_recovery_system_were_inevitable_as_in_all_complex_systems/ [March 11, 2019].
Xantura. (2018). Children’s safeguarding. Xantura. URL: https://www.xantura.com/focus-areas/childrens-safeguarding [March 3, 2019].