
Ethics and the Inclusive Governance of Artificial Intelligence

Algorithm

Data

Knowledge

Access to Knowledge

Intellectual Property / Intellectual Property Rights

Cyber Security

Open source software


2021-05-05

This year the Access to Knowledge for Development Centre’s (A2K4D) Eighth Annual Workshop was titled Digital Technologies, Innovation, and Inclusive Growth: Alternative Narratives. The workshop brought together multidisciplinary researchers, experts, and stakeholders from the community, including representatives from international organizations and A2K4D’s local, regional, and global networks. One session was organized as part of the center’s Inclusive Internet Governance initiative and focused on contemporary debates at the intersection of ethics, inclusion, and the governance of Artificial Intelligence (AI). A2K4D’s Inclusive Internet Governance initiative aims to highlight the importance of Internet Governance for Egypt through a series of events.

Three panelists contributed to the discussion, moderated by Nagham El-Houssamy, Senior Researcher at A2K4D: Sherif El-Kassas, Professor at the American University in Cairo (AUC); Baher Esmat, Vice President of Stakeholder Engagement for the Middle East at the Internet Corporation for Assigned Names and Numbers (ICANN); and Stefanie Felsberger, Senior Researcher at A2K4D. The discussion highlighted the importance of AI and its central role in the future. Current global debates on the positive and negative impacts of AI have become much more nuanced: instead of asking whether AI has a positive or negative impact, the questions are who will be affected, how and when AI has a positive or negative impact, and for whom. Consequently, it becomes important to ask what type of governance is required to encourage positive AI impact. The discussions focused on two intersecting topics: first, the ethics of AI and its governance; second, regulation and inclusion.

Ethics and Governance

The first half of the session focused on AI ethics and governance.
In many countries, discussions about AI ethics and governance dominate the news and public spaces, but in Egypt the topic has been less prominent, as fewer AI applications are being implemented or developed there than, for example, in the United States. Nevertheless, Felsberger argued that many of the elements that discourage or encourage AI production are already in place, and how they are governed now determines the future of AI. The field of AI is also developing at a rapid pace. In order not to be left behind, it is crucial to discuss topics such as local AI production and data governance, as well as AI governance and ethics. This also facilitates participation in the global debates on AI ethics and governance.

Two of the most discussed topics in AI governance are the accountability and fairness of AI systems. Baher Esmat explained that both accountability and fairness are difficult to achieve due to technological challenges. The behavior of algorithms is often difficult to understand and explain, even for the programmers who develop them. For example, people assume that Facebook is in control of exactly how its algorithms work and should therefore be able to fix any problem easily. In reality, a programmer does not always understand how an algorithm arrives at its conclusion. Algorithms are biased based on the data we give them, clarified Esmat. He explained that these programs learn how to behave from data fed into the software during a so-called training period. The more complex the task a program is meant to solve, and the more data is available, the more unpredictable the resulting algorithm becomes. Esmat stressed that regulation and Internet Governance are crucial tools to ensure that these systems benefit and serve the public. However, regulation is not popular among the large technology companies.
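Esmat’s point, that a program’s behavior is learned from its training data and skews when that data skews, can be sketched with a toy example. Everything here is hypothetical (invented data, a deliberately simple one-rule “model”), not anything presented at the panel; it only illustrates how under-representation in training data translates into unequal performance:

```python
# Toy sketch: a trivial "learner" fit to a biased training set.
# Group B is under-represented, so the learned rule reflects
# group A's pattern and fails on group B. All data is invented.

def label(x, group):
    # Ground truth differs between groups: a stand-in for any
    # pattern the training data under-represents.
    return int(x > 0.5) if group == "A" else int(x < 0.5)

# Biased training set: 95 examples from group A, only 5 from B.
train = [(i / 100, "A") for i in range(95)] + \
        [(i / 5, "B") for i in range(5)]

def fit(data):
    # "Training": pick whichever rule maximizes accuracy on the
    # training data as a whole -- the majority group dominates.
    best = None
    for rule in (lambda x: int(x > 0.5), lambda x: int(x < 0.5)):
        acc = sum(rule(x) == label(x, g) for x, g in data) / len(data)
        if best is None or acc > best[0]:
            best = (acc, rule)
    return best[1]

model = fit(train)

# Per-group accuracy on a balanced test set: the model serves the
# majority group well and the minority group poorly.
test_xs = [i / 20 for i in range(21)]
for group in ("A", "B"):
    acc = sum(model(x) == label(x, group) for x in test_xs) / len(test_xs)
    print(group, round(acc, 2))
# prints:
# A 1.0
# B 0.05
```

The overall training accuracy looks excellent (95%), which is exactly why such skew is easy to miss: the error is concentrated in the group the data under-represents.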
Except for Microsoft, most companies favor the current situation, which allows them to develop standards and practices away from public scrutiny. This creates an ongoing dilemma for all Internet users, whose data is collected and used to develop algorithms or to sell user-specific advertising information, but who have no control over how this data is used. El-Kassas described this current business model of online companies as surveillance capitalism. He asked how we can avoid the monopolization of data and create a model that benefits both users and producers. To answer his question, he explained that there are different ways to produce AI, and not all require companies to hoard data indefinitely. Smaller companies are experimenting with and developing technological solutions that allow programs to reach the same results without accessing and storing individual data.

Achieving Inclusion in AI

In the second round of questions, the panelists discussed different facets of inclusion: what does inclusion in AI mean, and what are the different ways to achieve greater inclusion in AI? Inclusion in AI is often taken to mean a fairer representation of all societal groups, particularly marginalized ones, in data sets. This means that, for example, facial recognition software should be equally adept at recognizing faces of all races and genders; it currently is not. Today, white cis-men fare much better than any other group. There are life-threatening dangers that result from this data bias: it was recently revealed that self-driving cars are “more likely” to injure people of color than white people. Felsberger added an A2K4D perspective to this understanding of inclusion. Access to Knowledge (A2K), in its most basic understanding, refers to physical access to, for example, a book, and to the ability to understand its contents.
Following the center’s definition, it also includes the ability to create and share knowledge, and to participate in and shape the production of knowledge based on one’s personal context. When applied to inclusion and AI, the A2K4D paradigm takes inclusion beyond representation in a database to also mean participation in the creation and production of AI. But how can inclusion be encouraged, inclusion that goes beyond representation in AI to mean the local production of AI relevant to its own context?

El-Kassas drew from a chapter he co-wrote with Nagla Rizk, Founding Director of A2K4D. The chapter, “The Software Industry in Egypt: What Role for Open Source?”, is published in the book Access to Knowledge in Egypt, edited by Nagla Rizk and Lea Shaver. He outlined how the concept of Free and Open Source Software (FOSS) works: according to Egypt’s FOSS Strategy, software is free and open source when its “underlying programming source code is freely available to access, modify, and redistribute.” He added that most big players, such as Facebook and Google, publish their code and software for people to use on their platforms. This is beneficial to them because it means that other people test and improve their software, and it is also useful for those who cannot write algorithms all by themselves. FOSS thus has the potential to improve software altogether and provides the possibility for increased participation in the writing of software. It does not, however, allow people to influence the big players in tech or what they do with this powerful software. This is where Internet Governance comes in. Esmat asked what lessons could be drawn from multi-stakeholderism, the principle that underpins Internet Governance mechanisms, to achieve more inclusion in AI.
According to him, there is no single approach to Internet Governance, but in order to maximize accountability and transparency, multi-stakeholder approaches seek to include as many stakeholders as possible in any policy process or forum. This includes representatives from governments, academia, civil society, human rights organizations, and private businesses across the globe. In addition, there are general principles characterizing multi-stakeholderism that were developed over many years of discussion. For example, one principle is to guarantee that Internet Governance processes are open and inclusive, including to those in developing and least developed economies. Another is to develop strategies that ensure transparency and accountability, which means that all involved stakeholders (companies and governments alike) need to have these measures in place. Esmat also mentioned the need to promote principles such as cultural and language diversity online, respect for human rights, and, echoing El-Kassas, the promotion of the open standards upon which the Internet was originally built. Esmat concluded that the multi-stakeholder model is much more time-consuming than a top-down model, but that over time it has proven itself the more robust and better approach.

In conclusion, participants in this session stressed how important democratization is to access to, and production of, technology. They suggested different economic models, such as FOSS, and international governance approaches to achieve these goals. They agreed that many questions remain unanswered, especially in the area of liability: if a machine makes a decision with unintended consequences, who will be held responsible for the damage caused? While there are many different answers, we can hope that the multi-stakeholder approach can help us find a common response.