Sunday, 30 January 2022

Ethics and AI: How Do Human Flaws Translate to Artificial Intelligence?

“Two things are infinite: the universe and human stupidity; and I’m not sure about the universe!”

Humans are perfect and human behavior is impeccable. This claim is not true by any stretch of the imagination. With a population of over 7 billion, a history of wars, political and geographical differences of varying proportions, virtually uncountable religious and cultural beliefs, and the inherent complexity of the human psyche, it is safe to say that human behavior is as difficult to model as it is far from ideal. Almost any action undertaken by a person anywhere on the globe carries an inevitable uncertainty of outcome, because humans are bound to make mistakes. The fundamental purpose of machine automation is to eliminate this very inconsistency and inefficiency. Dealing with inconsistency and inefficiency is easy for machines that function in closed environments. But there is a whole other aspect to human limitations. There are decisions that humans cannot make; not because of our biological shortcomings but because of the sheer scope of their implications. And such decisions are not obscure at all. We face many situations in day-to-day life where our actions (or lack of any) can lead to serious consequences, yet we continue to be arbitrarily irresponsible out of moral and/or mechanical incompetence. This, unfortunately, is the accepted state of mankind, but what worries me most is the ongoing process of handing control to autonomous machines in exactly these delicate and potentially disastrous scenarios.

A common discussion in most AI ethics workshops (including this one) revolves around the hypothetical and menacing situation of a train on course to kill either a group of 5 people or a single person, with the choice between 1 death and 5 left to the audience by virtue of a button that changes the train’s course. Other variations offer a choice between the life of a toddler and that of an old man. All these scenarios pose the same fundamental question: can we quantify or evaluate anyone’s life or death like that? But that question is still just the surface. The more important, hidden question is: should we quantify or evaluate human life or death at all? Sure, to some people (including myself), letting 5 people or a toddler die is much worse than letting a single person or an old man die. But a single person does not have the right, and therefore should not have the power, to make such a decision. This example is popular because it leads directly into the realm of self-driving cars. I have realized that designing a self-driving car, for example, is not just a matter of building a system intelligent enough to handle all the physical and mechanical intricacies of the road. It must also include ways to handle unfortunate situations where human lives are at risk. And that is where I feel there is no right approach. A human shouldn’t decide which person to kill (by letting the car run someone over in an unavoidable accident), and neither should any AI system created by humans.
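To make the dilemma concrete, here is a minimal sketch of what an “unavoidable collision” policy has to look like in code. Everything in it is hypothetical (the cost functions, the 80-year horizon, the scenario itself) and drawn from no real autonomous-driving system; the point is only that any such function forces its author to hard-code a valuation of human lives:

```python
# Hypothetical sketch: the scenario, weights and rules are invented,
# not taken from any real autonomous-driving stack.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible trajectory and the people it would endanger."""
    label: str
    casualties: int        # how many people are at risk on this path
    ages: list[int]        # their ages

def utilitarian_cost(o: Outcome) -> float:
    # Naive "minimize deaths" rule: every life counts equally.
    return float(o.casualties)

def age_weighted_cost(o: Outcome) -> float:
    # An (ethically contentious) rule weighing lives by rough years
    # remaining -- exactly the kind of valuation the text argues
    # nobody should be encoding.
    return float(sum(max(0, 80 - age) for age in o.ages))

def choose(outcomes: list[Outcome], cost) -> Outcome:
    # Whichever cost function is plugged in, the car WILL pick a path;
    # "refusing to decide" is not an option at runtime.
    return min(outcomes, key=cost)

swerve = Outcome("swerve", casualties=2, ages=[3, 4])
stay = Outcome("stay", casualties=3, ages=[78, 79, 80])

print(choose([swerve, stay], utilitarian_cost).label)   # -> swerve
print(choose([swerve, stay], age_weighted_cost).label)  # -> stay
```

Two defensible-sounding cost functions pick opposite trajectories. The value judgment is never avoided by automation; it is only hidden inside a function.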

Another discussion in the workshop covered the various types of bias induced in autonomous and intelligent software. Many of these biases are rooted in human discrimination. As mentioned previously, our perception of the world is far from ideal. There is a tremendous imbalance of power and influence in human society. Power corrupts, and absolute power corrupts absolutely. People, corporations and even governments do not always act for the common good. Their actions are sometimes motivated by greed (the misuse of collected data), misguided by bias (systems used by police) or ignorant of long-term effects (the collective shift towards autonomous weapons). And unfortunately, the fate of most AI is in their hands. So much of software development is incremental, and if corporations or governments continue to churn out software that is neither transparent nor fair, software will keep gaining not only in intelligence and influence but also in secrecy and malintent. This is a dangerous cycle that is only gaining momentum, and it needs to be corrected.
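As a toy illustration of the mechanics (synthetic data and a hypothetical scenario, not any real policing system), the sketch below trains an ordinary classifier on historically biased labels; the “intelligent” output simply reproduces the bias through a proxy feature:

```python
# Toy demonstration: a model trained on biased labels inherits the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a protected attribute and a correlated proxy
# feature (think "neighborhood"), plus a legitimate risk feature.
group = rng.integers(0, 2, n)
neighborhood = (group + rng.normal(0, 0.5, n) > 0.5).astype(float)
risk = rng.normal(0, 1, n)

# Historical labels were produced with a bias against group 1,
# independent of actual risk -- a stand-in for skewed policing data.
logit = 0.8 * risk + 1.5 * group - 1.0
labels = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# The model never sees `group`, only the innocent-looking features...
X = np.column_stack([neighborhood, risk])
model = LogisticRegression().fit(X, labels)
scores = model.predict_proba(X)[:, 1]

# ...yet its scores still differ sharply by group, because the proxy
# feature lets the historical bias leak straight through.
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
```

Nothing in the model itself is malicious; the discrimination rides in on the training data and the proxy feature, which is exactly why transparency about data and features matters so much.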

Mimicking human judgment is hard enough. It may even be impossible. But one thing is certain: there are situations where even humans cannot comprehend all the implications of an action, and in those situations we cannot expect any AI to do the “right thing” either. In these impossible situations, both humans and AI will be out of options, but with a more potent and ruthless AI, the scale of destruction will be much greater. Therefore, it is important not to give AI absolute power in such cases. And then there are situations where I feel it is imperative to separate AI from human bias, greed, and corruption. History suggests that we are excellent at self-inflicted damage. So, as mankind marches forward in the creation of things potentially more capable than humans, we must be very careful that we do not end up with things that do more damage than good.

Ethics and AI: The Charade of Privacy

It is very difficult to put a definitive status on privacy in today’s digital age. I am not sure whether privacy is already dead or whether it will die within a certain period of time. But one thing I am quite confident of is that privacy is dying, and it is dying fast. This diminishing privacy is not the consequence of a single phenomenon; it is caused by a combination of things. The ever-increasing volume of data being generated, the growing sophistication of computer algorithms, and our eagerness to share every aspect of our lives with the world: these, to me, are the primary factors proving fatal to the concept of privacy as we know it.

The very first thing to consider is the near-unfathomable amount of data produced by a single human on average. According to a statistic given by Dr. Michal Kosinski in his keynote speech at the 2017 CeBIT Global Conference, a single human produces about 500 MB of data per day. That statistic is over two years old, so the number has very likely grown severalfold since. Not all of this data is produced intentionally. In fact, most of it is gathered silently by the companies governing the Internet. “If you are not paying for a service, you are the product.” That quote sums up the situation perfectly. If companies on the Internet are incentivizing the end of human privacy, then we humans are sponsoring it by giving away the perfect resource for extracting every little detail there is to extract from our lives. For me, the problem is not that data is being collected from our digital footprint in various ways; it is that we are so careless in our choices when it comes to data, and that so many of us are keen on voluntarily broadcasting our lives to the world. We are actively contributing to this ocean of data that is, in essence, a distillation of our online existence. For example, it is one thing that our location data is being recorded by Google, but what can be made of the pictures we share on Instagram, blatantly showing off where we are and what we are doing? Companies and governments are data-hungry, and humans are so full of themselves that they cannot stand the thought of going unnoticed. Together, this is a recipe for disaster for anonymity on the Internet.
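Taking the quoted figure at face value, a back-of-the-envelope calculation shows the scale involved (the 500 MB/day number is from the talk cited above; the population figure is my own rough round number):

```python
# Back-of-the-envelope: scale of 500 MB per person per day.
MB_PER_PERSON_PER_DAY = 500                # figure quoted in the talk
POPULATION = 7_700_000_000                 # rough world population

daily_mb = MB_PER_PERSON_PER_DAY * POPULATION
daily_eb = daily_mb / 1_000_000_000_000    # 1 exabyte = 10^12 MB
yearly_eb = daily_eb * 365

print(f"~{daily_eb:.2f} EB per day")       # ~3.85 EB per day
print(f"~{yearly_eb:.0f} EB per year")     # ~1405 EB, i.e. ~1.4 ZB, per year
```

Even at the two-year-old rate, that is on the order of 1.4 zettabytes per year across the planet.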

Artificial Intelligence is burgeoning, and algorithms are becoming smarter, quicker and more efficient in the way they make use of the information available to them. Algorithms today don’t need a complete and connected dataset about a person in order to piece together a decent analysis. Even when there are attempts at scrambling sensitive information so that it loses meaning in the shuffle, computers are remarkably good at recognizing even the feeblest of patterns and eventually singling out individuals from those patterns. One example of how efficient machine learning techniques are at extrapolating personality traits from a very basic set of data comes from a study that used the Facebook likes of a few million people to predict things like sexual orientation, political views and general personality. The study showed that, given a few hundred likes, the algorithm does a better job of predicting a person’s behavior than all of that person’s friends, family and even their spouse. If keeping secrets from family and a spouse is never easy, how can a person’s privacy be kept from algorithms that learn so much from so little data? The simple answer is that it cannot, especially given the amount of access we have allowed into our lives. The fact of the matter is that cracking privacy amounts to finding patterns, and computers are very good at finding patterns, especially when we make it easy for them by handing over as much data as they like.
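A heavily simplified sketch of the kind of technique behind that study (the real work used far more data and more careful modelling; everything below is synthetic and purely illustrative) shows how little machinery it takes to turn a binary like-matrix into trait predictions:

```python
# Simplified "likes predict traits" pipeline:
# rows = users, columns = pages, entries = 1 if the user liked the page.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_users, n_pages = 5_000, 300

# Synthetic ground truth: a hidden binary trait nudges which pages a
# user tends to like (a stand-in for e.g. political orientation).
trait = rng.integers(0, 2, n_users)
page_affinity = rng.normal(0, 1, n_pages)   # how trait-linked each page is
like_prob = 0.05 + 0.10 * trait[:, None] * (page_affinity > 1.0)
likes = (rng.random((n_users, n_pages)) < like_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0)

# Plain logistic regression over raw likes is already enough here.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy from likes alone:", model.score(X_test, y_test))
```

With enough users, the same basic recipe of a sparse like-matrix and a linear model is reportedly all it took to out-predict friends and spouses; no exotic algorithm was required.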

Attempting to protect privacy, in my opinion, is a lost cause. All the companies that employ user data need to be regulated, but how can we have hope when the regulator, i.e. the government, is itself big on gathering data from people and using it to control us? To be honest, I don’t see much harm in all of this. I am not complaining about the lack of privacy; I am just stating its inevitability. For me, privacy is relevant with respect to other people. As long as data is being kept from the people in a person’s social circle, it is fine if some corporation uses it to further its business. After all, the overall goal is to better serve the consumer, and if that’s the case then I see no problem with our behavior being monitored. Instead of making futile efforts to stop being controlled and monitored, we should make efforts to ensure that we are being controlled and monitored for the right reasons.