Sunday 30 January 2022

Ethics and AI: How Do Human Flaws Translate to Artificial Intelligence?

“Two things are infinite: the universe and human stupidity; and I’m not sure about the universe!”

Humans are perfect and human behavior is impeccable. That claim is not true by any stretch of the imagination. With a population of over 7 billion, a history of wars, political and geographical differences of every proportion, a virtually uncountable variety of religious and cultural beliefs, and the inherent complexity of the human psyche, it is safe to say that human behavior is as difficult to model as it is far from ideal. Almost any action undertaken by man anywhere on the globe carries an inevitable uncertainty of outcome, because humans are bound to make mistakes. The fundamental purpose of machine automation is to eliminate this very inconsistency and inefficiency associated with humans. Dealing with inconsistency and inefficiency is easy for machines that operate in closed environments.

But there is a whole other aspect to human limitations. There are decisions that humans can't make, not because of our biological shortcomings but because of the sheer scope of their implications. And such decisions are not obscure at all. We face many situations in our day-to-day lives where our actions (or lack thereof) can lead to serious consequences, yet we continue to be arbitrarily irresponsible out of moral and/or mechanical incompetence. This, unfortunately, is the accepted state of mankind, but what worries me most is the steady handover of control to autonomous machines in exactly these delicate and potentially disastrous scenarios.

A common discussion in most AI ethics workshops (including this one) revolves around the trolley problem: a hypothetical and menacing situation in which a train is on course to kill either a group of five people or a single person, and the audience is left to choose between one death and five by means of a button that changes the train's course. Other variations offer a choice between the life of a toddler and that of an old man. All of these scenarios pose the same fundamental question: can we quantify or evaluate anyone's life or death like that? But that question only scratches the surface. The more important, hidden question is whether we should quantify or evaluate human life and death at all. Sure, to some people (including myself), letting five people or a toddler die is much worse than letting a single person or an old man die. But no individual has the right, and therefore should not have the power, to make such a decision. This example is popular because it leads directly into the realm of self-driving cars. I have realized that designing a self-driving car, for example, is not just a matter of building a system intelligent enough to handle all the physical and mechanical intricacies of the road. It must also include ways to handle unfortunate situations where human lives are at risk. And that is where I feel there is no right approach. A human shouldn't decide which person to kill (by choosing whom the car runs over in an unavoidable accident), and neither should any AI system created by humans.

Another discussion covered in the workshop was about the various types of bias induced in autonomous and intelligent software. Many of these biases are rooted in human discrimination. As mentioned previously, our perception of the world is far from ideal. There is a tremendous imbalance of power and influence in human society; power corrupts, and absolute power corrupts absolutely. People, corporations, and even governments do not always act for the common good. Their actions are sometimes motivated by greed (the misuse of collected data), misguided by bias (systems used by police), or ignorant of long-term effects (the widespread shift towards autonomous weapons). And unfortunately, the fate of most AI is in their hands. So much of software development is incremental, and if corporations or governments continue to churn out software that is neither transparent nor fair, software will gain not only intelligence and influence but also secrecy and malintent. This is a dangerous cycle that is only gaining momentum, and it needs to be corrected.

Mimicking human judgment is hard enough; it may even be impossible. But one thing is certain: there are situations where even humans cannot comprehend all the implications of an action, and in such situations we cannot expect any AI to do the "right thing". In these impossible situations, both humans and AI will be out of options, but with a more potent and ruthless AI the scale of destruction will be much greater. It is therefore important not to give AI absolute power in such cases. Then there are situations where I feel it is imperative to separate AI from human bias, greed, and corruption. History suggests that we excel at self-inflicted damage. So, as mankind marches forward in creating things that are potentially more capable than we are, we must be very careful not to end up with things that do more damage than good.
