by Sandro Severoni
In his novelette “The Bicentennial Man”, published in 1976, expanded sixteen years later into the novel “The Positronic Man” with Robert Silverberg, and adapted in 1999 into Chris Columbus’s film, Isaac Asimov depicted a robot that evolves over two centuries: created to serve as butler to a family, which discovers early on its ability to carve objects from wood, it gradually becomes a conscious being that wants to be considered a man. As Thomas Metzinger noted in 2009 in “The Ego Tunnel: The Science of the Mind and the Myth of the Self”: “it is conceivable that someday we will be able to construct artificial agents (…) self-sustaining systems”. If we were indeed able to create such agents, self-sustaining machines possessing some quality of consciousness, this would raise concerns about their moral status and how they should be treated, included in society, and granted rights and legal protection.
Metzinger observed that consciousness may require that a certain set of facts, related to a way of “living in a single world”, be available to the being. In other words, consciousness may arise when a world “appears” to the being.
But this may not be sufficient: the being must also consider itself not just as someone, but must believe that it is there, part of the surrounding world. And this could turn an artifact into an “object of moral concern (…) potentially able to suffer”, perhaps entitled to receive care (or maintenance, given that it is a system?) for its well-being.
Starting from these basic considerations, and looking at the framework from a purely governance standpoint, we can then reflect on:
- How sovereign entities could politically address the conception, production, maintenance, and “retirement” of such artificial beings.
- How to set up and control a hypothetical supply chain for producing artificial beings.
- How, and by whom, policies should be set and enforced to separate and distinguish human behaviors from those of artifacts.
Beyond these sensitive questions, a more fundamental issue arises.
Responsibility and Artificial Intelligence (AI)
The relation between responsibility and AI involves various aspects, from ethical implications to accountability and the governance of such systems, which we may later also refer to as “entities”, or even “replicants”.
In relation to AI, the concept of responsibility can be split in two: moral responsibility and causal responsibility. The first is linked to the idea that entities can be held accountable for their actions in the face of moral principles and obligations, while the second refers to the cause-and-effect relationship between an action performed by an entity and the consequences of that action.
When discussing moral responsibility in AI, we should focus on the rightness or wrongness of the decisions and actions taken by AI agents; for causal responsibility, on identifying the extent to which an action performed by an AI entity directly caused a particular effect.
Although related, moral and causal responsibility differ in terms of assessment and accountability (evaluating actions and how they should be praised or blamed vs. how they contributed to the results) and of scope (broader relations between intentions, values, and actions vs. causal links between actions and outcomes).
It should also be noted that being causally responsible for something does not always imply being morally responsible for it.
In fact, one basic assumption in moral philosophy is that an entity cannot be held morally responsible for an action unless it had the ability (and the free will) to perform it.
In this sense, the question of moral responsibility and the level of autonomy in AI is central, as is the ability to make informed and rational choices, considering and understanding the consequences of the actions that entities undertake.
The framework depicted by the subject of responsibility and AI should then contemplate specific key topics, such as the responsibilities of AI manufacturers and providers (designing, training, and deploying AI systems ethically, addressing issues of bias, transparency, privacy, fairness, and socioeconomic and environmental impact, among others) and of users (awareness of the capabilities and limitations of AI entities, their alignment with legal and ethical standards, and accountability through traceable AI-based decision-making processes).
Forward-looking responsibility and AI
When we address so-called “forward-looking responsibility” in the framework of AI, we refer to anticipating the possible future implications of designing, developing, delivering, and maintaining AI entities/systems. Some of these implications can be identified and summarized:
- Designing and developing AI ethically, considering potential issues such as bias, fairness, privacy, transparency, and long-term implications, as well as alignment with societal values, from the very beginning of an AI system’s making.
- Anticipating the impact of AI on societies, particularly on employment, education, social dynamics, etc., evaluating and mitigating the related risks.
- Monitoring and continuously adapting AI entities, to keep them aligned with evolving societal standards in a continuous-improvement perspective.
- International cooperation, knowledge sharing, and research among the technical and social disciplines involved, to anticipate, guide, and regulate emerging issues and dilemmas related to AI developments.
- Public involvement and participation: engaging societies and communities in the development, use, and governance of AI, so that common values and concerns are reflected in them as fully as possible.
- Responsible data management, giving adequate priority to privacy, security, and consent through the definition, implementation, application, and assessment of programs that comply with legal and ethical standards.
- Training and education, to spread awareness, knowledge, and critical thinking, giving communities the tools to continuously understand and address the ethical, social, and technical issues of AI.
Through a proactive, forward-looking approach to responsibility in AI, ethical developments and potential challenges could be addressed in good time, maximizing the responsible and beneficial use of these emerging technologies.
We have seen that the relation between responsibility and AI engages a number of complex aspects, from ethics to accountability and the governance of such systems.
Let us now look a bit more closely at some of the issues concerning the control of AI.
The issue of control over AI
“I’ve seen things you people wouldn’t believe… Attack ships on fire off the shoulder of Orion… I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain… Time to die.” (Rutger Hauer as Roy Batty in the movie Blade Runner, 1982).
This monologue is probably one of the most famous and emotionally engaging in movie history, certainly in sci-fi. Roy Batty is one of the “replicants” that detective Rick Deckard (played by Harrison Ford in Ridley Scott’s movie, based on Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?”) must “retire” from service, because they have mutinied and illegally landed on Earth from the off-world colonies.
According to the director’s plot, these bio-engineered androids, created mainly for military “end use”, may have strength and intelligence superior to humans’. And they appear to have consciousness, making it very difficult for the “blade runner” detectives to distinguish them from humans. And to control them.
Like the protagonist of Isaac Asimov’s “The Bicentennial Man”, these “Ego Machines”, as Thomas Metzinger defines them, carry unresolved existential questions, one above all: to be considered human and to hold the same expectations as humans.
Control over AI is then a key issue, perhaps the real issue, of the present day, and even more so in the time to come.
Simply because it involves a number of complex aspects, from human supervision, ethical guidance, governance and leadership, and compliance with laws and regulations, to data management and the validation of procedures and processes, if we are to ensure that AI entities function responsibly and transparently, serving humanity at its best.
Some of the key aspects to be considered for effective AI control include:
- Human supervision and judgment, to keep maximum control and oversight over AI systems, monitoring and, where necessary, overriding any AI decision or action that may involve ethical issues (a minimal sketch of such a gate follows this list).
- Control of AI algorithms, models, and data, to enable developers and governmental and international entities to transparently understand, guide, and verify them.
- Governmental and intergovernmental regulation, to guarantee the transparent and accountable development and use of AI technologies, based on guidelines, standards, and policies reflecting ethical principles and local or global societal values.
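To give the first point a concrete shape, here is a minimal sketch of a human-in-the-loop gate, in Python. It is only an illustration under stated assumptions: Decision, is_ethically_sensitive, and human_review are hypothetical stand-ins, not taken from any real system or framework.

```python
# Hypothetical sketch of human supervision over AI decisions.
# All names and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str       # who or what the decision concerns
    action: str        # what the AI system proposes to do
    confidence: float  # the model's own confidence score

def is_ethically_sensitive(decision: Decision) -> bool:
    # Placeholder routing policy: send low-confidence or high-impact
    # actions to a human; a real policy would be set by governance
    # bodies and regulation, not hard-coded like this.
    return decision.confidence < 0.9 or decision.action == "deny_service"

def human_review(decision: Decision) -> bool:
    # Stand-in for a real review workflow (queue, audit trail, etc.).
    answer = input(f"Approve '{decision.action}' for {decision.subject}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(decision: Decision) -> None:
    # Sensitive decisions proceed only with explicit human approval.
    if is_ethically_sensitive(decision) and not human_review(decision):
        print(f"Overridden by human supervisor: {decision}")
        return
    print(f"Executing: {decision}")

if __name__ == "__main__":
    execute(Decision(subject="applicant-42", action="deny_service", confidence=0.97))
```

In practice, the routing policy and the review workflow would be defined by the governance and regulatory layers listed above, with a full audit trail; the sketch only shows where human judgment sits in the decision path.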
Some additional reflections on algorithmic biases
Considering that algorithmic modeling is central in AI, because it inherits the biases of the training data and of the design approach used during development, some thoughts can be highlighted:
- Since they rely on historical data and patterns for their training, AI algorithms may learn, reinforce, and magnify biases against specific social groups, with the possible result of discriminating against them (see the sketch after this list for an elementary check of this effect).
- A possible lack of openness and diversity of perspective among AI developers may introduce further biases at the design stage, with additional impacts on social inequalities.
- Further biases may be introduced deliberately into deep and complex AI algorithms, making them quite difficult to detect and counteract.
- The outcomes of AI algorithms offer additional and significant matter for reflection on biases, concerning the definition and verification of correctness and responsibility.
- The identification and mitigation of algorithmic biases and related risks is therefore a continuously evolving matter for anyone involved in AI, from lawmakers and policymakers worldwide, to universities and research institutions, to developers, to societies and, finally, to citizens.
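As a deliberately simplified illustration of the first point, the following Python sketch computes one of the simplest bias indicators, the demographic parity difference: the gap in favorable-outcome rates between groups in a model’s decisions. The group names and decision data are entirely hypothetical; a real audit would involve far richer metrics and context.

```python
# Minimal, hypothetical bias check: demographic parity difference.
# Groups "A"/"B" and the toy decision data are illustrative only.

def demographic_parity_difference(records):
    """Return (gap, per-group rates) of favorable outcomes.

    records: iterable of (group, label) pairs, where label is 1 for a
    favorable decision (e.g. a loan approval) and 0 otherwise.
    """
    totals = {}  # group -> (count, favorable count)
    for group, label in records:
        count, favorable = totals.get(group, (0, 0))
        totals[group] = (count + 1, favorable + label)
    rates = {g: fav / cnt for g, (cnt, fav) in totals.items()}
    values = sorted(rates.values())
    return values[-1] - values[0], rates

# Toy data: the model favors group "A" (80% approvals) over "B" (40%).
decisions = ([("A", 1)] * 80 + [("A", 0)] * 20 +
             [("B", 1)] * 40 + [("B", 0)] * 60)
gap, rates = demographic_parity_difference(decisions)
print(f"approval rates: {rates}")    # {'A': 0.8, 'B': 0.4}
print(f"parity gap:     {gap:.2f}")  # 0.40
```

A check like this can surface a disparity, but it cannot by itself say whether the disparity is unjustified; that judgment belongs to the human, legal, and policy layers discussed throughout this article.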
Open issues on AI and politics
It may be interesting to analyze how states are managing their digital survivability in a landscape like the one we are facing amid the AI boom, where even physical borders are increasingly becoming “liquid” with respect to communities.
In this scenario, we must figure out the possible effects of AI on democracy (or “polyarchy”, to use Robert Dahl’s definition), in the middle of global events, pandemics, and war, especially for ordinary people.
And, beyond doubt: how, in a representative democracy, could we deal with and overcome any bias in the selection of delegates and candidates, if that selection were performed with the concurrence of AI?
Conclusions
As already mentioned in the introduction, Isaac Asimov was perceptive in anticipating early on the issue of responsibility and control over what we are used to calling, perhaps inappropriately, “AI”, drafting more than 80 years ago his “Laws of Robotics”, a great reference in the related debate, although limited and at times conflicting in terms of definitions and applicability.
In this framework, it would also be interesting to explore what role sovereign countries may play in putting biases into AI to defend their aims, values, and even their survivability in the long run.
In this sense, selecting and recruiting people to work for public administrations with the support of AI may become a challenge and an issue.
Likewise, the role of private entities, in particular multinational corporations, in acting as AI providers for states, and, on the other hand, the dependence of states on multinational corporations, especially for sensitive algorithmic processing, may become a further issue and, of course, a risk if not appropriately compliant with, and respectful of, human laws and regulations.
The views and opinions expressed in this article are entirely personal and do not necessarily reflect the official policies of the mentioned third parties.
The author, Sandro Severoni, is a Compliance and Technological Governance Senior Expert and President of the Scientific Committee and Board Member at Assocompliance. He is also responsible for Engineering Performance Management at Telespazio S.p.A.