Volume 44, Issue 2, June 2019, pp. PP-43–PP-50

This article summarizes two initiatives for artificial intelligence (AI) underway in the Canadian public service: public consultation and collaboration in compiling an algorithmic impact assessment, and a symposium on AI and human rights held by Global Affairs Canada. The findings contextualize the national consultations on digital and data transformation and future steps for more inclusive AI governance in Canada.

Cet article offre une synthèse de deux initiatives sur l’intelligence artificielle (IA) en cours dans la fonction publique canadienne : la consultation et collaboration du public dans la compilation d’une évaluation d’impact algorithmique et un symposium sur l’IA et les droits de la personne organisé par Affaires mondiales Canada. Les conclusions permettent de donner un contexte aux consultations nationales sur la transformation du numérique et des données et les mesures à prendre pour une gouvernance en intelligence artificielle plus inclusive au Canada.

Canada’s national consultations on digital and data transformation present artificial intelligence (AI) as a revolution underway. According to the Honourable Navdeep Bains, minister of Innovation, Science and Economic Development (ISED, 2018), “Today, AI [artificial intelligence] and big data are transforming all industries and sectors. They are presenting new opportunities for innovators to create jobs and generate prosperity” (par. 8). This narrow framing implies that government must adapt to technological change rather than consider what decisions have encouraged the adoption of AI. Moreover, it overlooks the way the Canadian government is actively shaping policy for AI. This article provides a summary of the Canadian government’s own rapid development of AI governance, curiously absent from the consultation. The findings draw on our active engagement in the process and build on the legacy of Canadian communication policy (Shade, 2008). We hope our experiences interacting with Global Affairs Canada and the Treasury Board help document the beginnings of AI governance in Canada, which remain less well known than the major funding announcements of the Pan-Canadian Artificial Intelligence Strategy.

As a policy issue, AI operates at the intersection of big data and automation. On one side, machine learning—an algorithm that improves through experience, and the kind of AI most discussed in Canada—requires massive amounts of training data to optimize its algorithms, raising privacy concerns. On the other, once trained, AI requires proper implementation, raising the question of when its use is acceptable.

Data and automation are old concerns—matters known to communication, media, and science and technology studies. These intersections are highlighted to stress the present motivations for studying the governance of AI. Machine learning depends on the stabilization of information and data as well as the process of creating data, or datafication (Gabrys, Pritchard, & Barratt, 2016; Halpern, 2014; Sterne, 2012; van Dijck, 2013). It cannot be overlooked that two of the biggest players in machine learning, Google and Facebook, have unprecedented access to everyday communications, which amounts to a massive source of training data. With regard to automation, the promise of AI in replacing or assisting workers resonates with long-standing questions about the discourses that make certain jobs, especially feminized ones, automatable (Gray, 2015; Light, 1999; Taylor, 2018). And both data and automation require renewed attention to the political economies, myths, and cultural logics that orient technologies (AI Now Institute, 2018b; Hicks, 2017; Mosco, 2009).

The Canadian government’s experiments with AI are part of a global rush to codify rules and regulations on AI. A few standards have been proposed to govern AI and its underlying data. Many of these standards come from data science and are concerned with ensuring transparent practices and establishing accountable methods of “securing the role of facts in public debate” (Marres, 2018, p. 424). FAIR refers to Findable, Accessible, Interoperable, and Reusable—all guiding principles for data standards developed at the Lorentz Centre in the Netherlands (FORCE11, 2014). Building on FAIR, the Responsible Data Science Initiative has proposed Fairness, Accuracy, Confidentiality, and Transparency (FACT) to address the call to “provide transparency; how to clarify answers in such a way that they become indisputable?” (Kemper & Kolkman, 2018, p. 5).

Regarding the acceptable use of AI, academic and industry discussions have centred on Fairness, Accuracy, Transparency, and Ethics (FATE). In general, these terms question the acceptability of AI—whether its models introduce bias, produce reliable results, and can be understood or explained—as well as suggest what should be the ethical framework of the industry (Barocas, Hood, & Ziewitz, 2013). Ethics has quickly become a solution often pushed by an industry hoping to emphasize the individual choices of developers over the scrutiny of critics calling for more regulation and accountability around the political economy of AI development and other systemic industry factors (Campolo, Sanfilippo, Whittaker, & Crawford, 2017).

Neither FAIR, FACT, nor FATE initiatives have led to formal governance institutions or regulation. Overall, standards enforcement around AI remains an open question. Some have suggested enforcement might come from industry self-regulation. Many of the leaders of the AI community in Montréal, particularly at the Université de Montréal and the Institute for Data Valorisation (IVADO), have consulted with citizens and published the Montréal Declaration for the Responsible Development of AI (Declaration of Montréal for a Responsible Development of AI, 2019). The effect of the Montréal Declaration, launched on December 4, 2018, remains to be seen. Not to be outdone, the Toronto Declaration on AI was released at the 2018 RightsCon in Toronto (Access Now, 2018). Self-regulation might come from employees themselves, with labour action becoming more prominent at some of the biggest players in AI, especially Google (Shane, Metz, & Wakabayashi, 2018; Stapleton, Gupta, Whittaker, O’Neil-Hart, Parker, Anderson, & Gaber, 2018).

The Canadian government might be another mechanism to enact these standards. Not, however, through regulation. Despite calls for creating new legislation for AI (Chadwick, 2018) or interpreting existing law (McKelvey, 2018a), there is no legislation pending on AI governance. Instead, inside the Canadian government, two departments have embarked on unusual and promising experiments in developing best practices for AI, crafting policy tools meant for the government to establish industry-wide criteria (Copeland, 2018; Lascoumes & Le Gales, 2007).

The Treasury Board of Canada has been active in drafting policy for the federal government. The process concluded with the publication of the 2019 Directive on Automated Decision-Making (Greenwood, 2019). Work leading to the directive began with a “Digital Disruption White Paper” written in the summer of 2017 (Karlin, 2017, par. 8). The Treasury Board led the process as per its function of setting departmental policy across the federal government. The project lead, Michael Karlin (2017), announced the White Paper on Twitter and Medium on July 4, 2017. The success of the government’s adoption of responsible AI policies remains to be seen, but if approached correctly, Canada’s government could become a model on the national stage for the acceptable use of AI.

The federal government experimented with a highly open consultation process during the development of its AI self-regulation. It invited collaboration on its public GCcollab tool, an external social networking site that was started as a pilot in 2016. The public, but mostly experts, could join its Artificial Intelligence Policy Workspace, where other civil servants shared news and reports (Karlin, 2017). Most public collaboration centred around an open Google document that Karlin published on October 27, 2017. The disruption paper, entitled “Responsible AI in the Government of Canada,” summarized the benefits and risks of AI to the federal government. Through the comment feature of Google Docs, Karlin received feedback from interested members of the public, as well as academics and members of the AI community from across Canada. Karlin also toured a few universities, including Concordia University, to consult with stakeholders. Although the consultation was far from inclusive, the exercise did attempt to look at new ways to engage the public in the policy development process, part of a new emphasis on digital service delivery in the federal government and a promising turn toward new consultation in public policy in general (Bingham, Nabatchi, & O’Leary, 2005).

By March 2018, the report had been translated into an algorithmic impact assessment form for departments to use in their considerations of AI (Karlin, 2018). This tool, modelled after environmental impact assessment, had been popularized only a month prior by the AI Now Institute (2018a), a leader in the field, as a tool for New York City’s task force on “Automated Decision Systems,” and by Nesta in the United Kingdom. The Canadian government tool provides a risk assessment based on:

  • Impact on individuals and entities

  • Impact on government institutions

  • Data management

  • Procedural fairness

  • Complexity

These criteria drew on the report’s final section on “Policy, Ethical, and Legal Considerations of AI,” where it discussed bias and fairness in data, transparency, and accountability, as well as acceptable use.
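As a rough illustration of how such a questionnaire-style assessment might work, the sketch below sums per-criterion scores into an overall impact level. The criterion names, score ranges, and thresholds here are hypothetical stand-ins for illustration only; they are not the published tool’s actual questions or scoring.

```python
# Hypothetical sketch of a questionnaire-based risk assessment, loosely
# modelled on the five criteria above. Scores, ranges, and impact-level
# thresholds are illustrative, not the official ones.

CRITERIA = [
    "impact_on_individuals",
    "impact_on_institutions",
    "data_management",
    "procedural_fairness",
    "complexity",
]

def assess(scores):
    """Sum per-criterion scores (0-3 each) into an overall impact level."""
    for name in CRITERIA:
        value = scores.get(name, 0)
        if not 0 <= value <= 3:
            raise ValueError(f"score for {name} must be between 0 and 3")
    total = sum(scores.get(name, 0) for name in CRITERIA)
    # Map the raw total onto a graduated impact level.
    if total <= 4:
        return "Level I (little to no impact)"
    elif total <= 8:
        return "Level II (moderate impact)"
    elif total <= 12:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

example = {
    "impact_on_individuals": 3,
    "impact_on_institutions": 2,
    "data_management": 1,
    "procedural_fairness": 2,
    "complexity": 1,
}
print(assess(example))  # total of 9 -> "Level III (high impact)"
```

The design choice worth noting is that higher levels would trigger stronger oversight requirements; the actual questions used by the federal tool are published at the Open Government link in the footnote below.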

This tool is just now being used across the federal government due to the 2019 Directive on Automated Decision-Making.1 But will the tool prevent problematic applications? Already, Petra Molnar and Lex Gill (2018) of the Citizen Lab have questioned the potential risks of using it in immigration policy in lieu of more formal, enforced standards. The first adopter of this tool seems to be Justice Canada, looking to use new “legaltech” to predict rulings in tax cases (Beeby, 2018). Applications to date seem low risk, but it also seems clear that though we might know how government considers AI, we are not privy to who engages with it—and whether high-risk situations such as immigration should be considered a no-go zone.

As the Treasury Board formulated national AI policy, Global Affairs Canada (GAC) turned to international issues on AI as a pressing matter for its Digital Inclusion Lab to address. Following through on the request to “promot[e] human rights, women’s empowerment and gender equality” (Office of the Prime Minister of Canada, 2017, par. 16) in a mandate letter to the Minister of Foreign Affairs, the GAC sought to understand critical perspectives on AI governance with a particular eye for bias toward race and gender, and with emphasis on labour and human rights paradigms.

By the fall of 2017, the GAC had begun approaching relevant academics and universities to organize a symposium on AI and human rights. Eventually the GAC collaborated with the Canadian Institute for Advanced Research (CIFAR), one of Canada’s leading sponsors of AI research, to run the symposium. The event was one of the first under CIFAR’s AI and Society program and one of the few examples of funding directed at the social impacts of AI in Canada.

Student teams from ten universities researched areas of potential policy intervention for AI regulation, including combating extremist content online, addressing discrimination and bias, and recognizing climate change and refugee rights through case studies. From their findings, the teams delivered policy recommendations in memorandums for action to the minister of foreign affairs, the Office of Human Rights, Freedoms and Inclusion, and the Digital Inclusion Lab. Many presentations rebuked the “disruption” framing that saturates the industry, recognizing that AI exists on a long continuum of technological innovation and calling for reflexive governance based on existing policy frameworks. By foregrounding a rights-based approach, these student policy recommendations rejected the industry habit of framing AI as unprecedented and therefore difficult to govern. By addressing AI applications within existing rights-based frameworks, students looked past the hype to recognize the many more mundane but essential ways AI already impacts our daily lives.

The symposium represented an important consideration for governments looking to address the lack of racial and gender diversity in AI development and deployment (Silcoff, 2017). Student groups delivering the recommendations included participants of all genders and with diverse personal and disciplinary backgrounds. To be innovative and representative, tech has to move beyond the old boys’ club and include diverse voices. Affirmative action and equitable employment policies are not keeping pace with the speed of AI innovation, so inclusivity must be prioritized as a governance imperative. The policymakers that influence these markets, and consultations such as the GAC symposium, represent well-intentioned opportunities for intervention that value disciplinary and representative diversity.

The outcomes of this symposium remain unclear at this point. The likely immediate outcome seems to be a collaboration with the Chief Information Officer Strategy Council to develop ethical AI standards. In a fractured world, these national initiatives might be part of international strategies to set AI standards, especially when delivered under Canada’s tenure as leader of the G7, and alongside work on the Canada-France Statement on Artificial Intelligence signed in June 2018 (Global Affairs Canada, 2018).

This article summarizes important developments around AI governance in the public service and raises questions about the intent of the ISED consultation. This consultation seems a curious artefact of the Liberal government’s present approach to technology policy. While openly asking for public opinion, the government seems to be rapidly formulating policy with ad hoc consultation mechanisms. Such an approach undermines trust in consultation as a means of informing decision-making.

The attempts at inclusion in these experiments do point to a way forward. Standards around AI can and should be approached from a critical perspective that considers development, deployment, and impact from a wide diversity of voices beyond the tech sector. Moreover, future consultations might problematize the matter of inclusion altogether. In their work on hybrid forums, Michel Callon, Pierre Lascoumes, and Yannick Barthe (2009) question how humans, nonhumans, and uncertainties might work toward democratic decisions. What would a hybrid forum for AI look like? Feminist science studies (Hayasaki, 2017), daemonic media studies (McKelvey, 2018b), and most importantly Indigenous epistemology (Lewis, Arista, Pechawis, & Kite, 2018) provide clues. These approaches, often at the margins of AI governance in Canada, point toward a much more radical project for inclusive consultation.


1 Questions for the Algorithmic Impact Assessment tool have been published at: https://open.canada.ca/data/en/dataset/4b5d3878-9d79-46d9-a4e6-db328493fc56.

Canadian Institute for Advanced Research, https://www.cifar.ca/
Facebook, https://facebook.com
Google, https://google.com
Institute for Data Valorisation, https://ivado.ca/en
Access Now. (2018, May 16). The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems. URL: https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/ [February 16, 2019].
AI Now Institute. (2018a, February 21). Algorithmic impact assessments: Toward accountable automation in public agencies. URL: https://medium.com/@AINowInstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd9856e6fdde [November 14, 2018].
AI Now Institute. (2018b, November 15). Gender, race and power. URL: https://medium.com/@AINowInstitute/gender-race-and-power-5da81dc14b1b [November 16, 2018].
Barocas, Solon, Hood, Sophie, & Ziewitz, Malte. (2013). Governing algorithms: A provocation piece [SSRN Scholarly Paper No. ID 2245322]. Rochester, NY: Social Science Research Network. URL: http://papers.ssrn.com/abstract=2245322 [November 14, 2018].
Beeby, Dean. (2018, September 13). Justice Canada pilot tests applicability of artificial intelligence in litigation. CBC News. URL: https://www.cbc.ca/news/politics/artificial-intelligence-tax-justice-pilot-1.4817242 [November 14, 2018].
Bingham, Lisa Blomgren, Nabatchi, Tina, & O’Leary, Rosemary. (2005). The new governance: Practices and processes for stakeholder and citizen participation in the work of government. Public Administration Review, 65(5), 547–558. doi: 10.1111/j.1540-6210.2005.00482.x
Callon, Michel, Lascoumes, Pierre, & Barthe, Yannick. (2009). Acting in an uncertain world: An essay on technical democracy. Cambridge, MA: MIT Press.
Campolo, Alex, Sanfilippo, Madelyn, Whittaker, Meredith, & Crawford, Kate. (2017). AI Now 2017 Report. New York, NY: AI Now. URL: https://ainowinstitute.org/AI_Now_2017_Report.pdf [November 14, 2018].
Chadwick, Paul. (2018, October 28). To regulate AI we need new laws, not just a code of ethics. The Guardian. URL: https://www.theguardian.com/commentisfree/2018/oct/28/regulate-ai-new-laws-code-of-ethics-technology-power [November 14, 2018].
Copeland, Eddie. (2018, February 20). 10 principles for public sector use of algorithmic decision making. URL: https://www.nesta.org.uk/blog/10-principles-for-public-sector-use-of-algorithmic-decision-making/ [November 14, 2018].
Declaration of Montréal for a responsible development of AI. (2019). URL: https://www.montrealdeclaration-responsibleai.com [February 16, 2019].
FORCE11. (2014, September 10). Guiding principles for findable, accessible, interoperable and re-usable data publishing version B1.0. URL: https://www.force11.org/fairprinciples [November 14, 2018].
Gabrys, Jennifer, Pritchard, Helen, & Barratt, Benjamin. (2016). Just good enough data: Figuring data citizenships through air pollution sensing and data stories. Big Data & Society, 3(2), 1–14. doi: 10.1177/2053951716679677
Global Affairs Canada. (2018, July 6). Canada-France statement on artificial intelligence. URL: https://international.gc.ca/world-monde/international_relations-relations_internationales/europe/2018-06-07-france_ai-ia_france.aspx?lang=eng [November 14, 2018].
Gray, Mary. (2015, November 12). The paradox of automation’s “last mile.” Social Media Collective. URL: https://socialmediacollective.org/2015/11/12/the-paradox-of-automations-last-mile/ [November 14, 2018].
Greenwood, M. (2019, March 8). Canada’s new federal directive makes ethical AI a national issue. URL: https://techvibes.com/2019/03/08/canadas-new-federal-directive-makes-ethical-ai-a-national-issue [April 7, 2019].
Halpern, Orit. (2014). Beautiful data: A history of vision and reason since 1945. Durham, NC: Duke University Press.
Hayasaki, Erika. (2017, January 16). Is AI sexist? Foreign Policy Magazine, (January/February). URL: https://foreignpolicy.com/2017/01/16/women-vs-the-machine/ [November 14, 2018].
Hicks, Marie. (2017). Programmed inequality: How Britain discarded women technologists and lost its edge in computing. Cambridge, MA: MIT Press.
Innovation, Science and Economic Development Canada. (2018, June 19). Government of Canada launches national consultations on digital and data transformation. URL: https://www.canada.ca/en/innovation-science-economic-development/news/2018/06/government-of-canada-launches-national-consultations-on-digital-and-data-transformation.html [November 14, 2018].
Karlin, Michael. (2017, July 5). Responsible AI in the Government of Canada: A sneak peek. URL: https://medium.com/code-for-canada/responsible-ai-in-the-government-of-canada-a-sneak-peek-973727477bdf [November 14, 2018].
Karlin, Michael. (2018, March 18). A Canadian algorithmic impact assessment. URL: https://medium.com/@supergovernance/a-canadian-algorithmic-impact-assessment-128a2b2e7f85 [November 14, 2018].
Kemper, Jakko, & Kolkman, Daan. (2018). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society. Advance online publication. doi: 10.1080/1369118X.2018.1477967
Lascoumes, Pierre, & Le Gales, Patrick. (2007). Introduction: Understanding public policy through its instruments? From the nature of instruments to the sociology of public policy instrumentation. Governance, 20(1), 1–21. doi: 10.1111/j.1468-0491.2007.00342.x
Lewis, Jason Edward, Arista, Noelani, Pechawis, Archer, & Kite, Suzanne. (2018). Making kin with the machines. Journal of Design and Science, (3.5). URL: https://doi.org/10.21428/bfafd97b
Light, Jennifer S. (1999). When computers were women. Technology and Culture, 40(3), 455–483.
Marres, Noortje. (2018). Why we can’t have our facts back. Engaging Science, Technology, and Society, 4, 423–443. URL: https://doi.org/10.17351/ests2018.188 [November 14, 2018].
McKelvey, Fenwick. (2018a, May 21). Use the Charter to guide AI governance. Policy Options. URL: http://policyoptions.irpp.org/magazines/may-2018/use-the-charter-to-guide-ai-governance/ [June 6, 2018].
McKelvey, Fenwick. (2018b). Internet daemons: Digital communications possessed. Minneapolis, MN: University of Minnesota Press.
Molnar, Petra, & Gill, Lex. (2018). Bots at the gate: A human rights analysis of automated decision-making in Canada’s immigration and refugee system. Toronto, ON: Citizen Lab. URL: https://citizenlab.ca/wp-content/uploads/2018/09/IHRP-Automated-Systems-Report-Web-V2.pdf [February 16, 2019].
Mosco, Vincent. (2009). The political economy of communication: Rethinking and renewal (2nd edition). Thousand Oaks, CA: SAGE Publications.
Office of the Prime Minister of Canada. (2017, February 1). Minister of Foreign Affairs mandate letter. URL: https://pm.gc.ca/eng/minister-foreign-affairs-mandate-letter [November 14, 2018].
Shade, Leslie Regan. (2008). Public interest activism in Canadian ICT policy: Blowing in the policy winds. Global Media Journal – Canadian Edition, 1(1), 107–121.
Shane, Scott, Metz, Cade, & Wakabayashi, Daisuke. (2018, May 30). How a Pentagon contract became an identity crisis for Google. The New York Times. URL: https://www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html [February 16, 2019].
Silcoff, Sean. (2017, November 1). ‘We absolutely have a problem’: Canada’s tech sector gender gap. The Globe and Mail. URL: https://www.theglobeandmail.com/technology/we-absolutely-have-a-problem-canadas-tech-sector-gender-gap/article36789423/ [February 16, 2019].
Stapleton, Claire, Gupta, Tanuja, Whittaker, Meredith, O’Neil-Hart, Celie, Parker, Stephanie, Anderson, Erica, & Gaber, Amr. (2018, November 1). We’re the organizers of the Google walkout. Here are our demands. The Cut. URL: https://www.thecut.com/2018/11/google-walkout-organizers-explain-demands.html [November 14, 2018].
Sterne, Jonathan. (2012). MP3: The meaning of a format. Durham, NC: Duke University Press.
Taylor, Astra. (2018, October 2). The automation charade. Logic. URL: https://logicmag.io/05-the-automation-charade/ [November 14, 2018].
van Dijck, José. (2013). The culture of connectivity: A critical history of social media. Oxford, UK: Oxford University Press.