Being fully efficient, always doing what you’re told, always doing what you’re programmed to do, is not always the most human thing.

Sometimes, it’s disobeying. Sometimes, it’s saying, “No, I’m not going to do this, right?”

And if you automate everything, so it always does what it’s supposed to do, sometimes… that can lead to very inhuman things. — Zeynep Tufekci (Coded Bias, 2020)

During this global pandemic, after an afternoon spent binge-watching two extremely informative documentaries back to back, I set out to build a basic understanding of AI. Several hours later, this is what I took away from a multitude of articles and research papers…

Alan Turing, an English mathematician and computer scientist, first formally explored the concept of AI in 1950, when he published the seminal paper “Computing Machinery and Intelligence”. In it, he proposed what came to be known as the Turing test: an examination of whether a machine could carry on a conversation indistinguishable from one with a human.

It was Turing who first seriously proposed the possibility of building machines that could think, and his test suggested the plausibility of such a creation.

Later, at a conference organised by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, known as the Dartmouth Workshop of 1956, this field concerning a machine’s ability to simulate aspects of human learning and intelligence was given its name: Artificial Intelligence.

This 1956 conference soon came to be regarded as the “birth of AI”.

Once I had attained this basic understanding of AI, I came across the topic of bias and its tension with ethics. Here’s what I found…

Now, more than six decades after that conference, AI is widely deployed to classify vast amounts of data. The documentaries Coded Bias (2020) and The Social Dilemma (2020) offer insight into how this software, programmed to navigate massive datasets, tends to develop categories based on preconceptions that are often generalised or stereotypical, a phenomenon called bias.

Just as every road vehicle adheres to traffic rules, it is vital that AI software is engineered with ethics in its algorithms. Ethics are what keep a society free of unjust or prejudiced decisions and actions, and that is precisely what makes them relevant to AI.

Watching these two exceptional documentaries, I realised they answered the WHY behind ethics, and showed HOW implementing AI without ethics could put society at a disadvantage.

One of the main ways unethical AI systems harm people is by cutting off their access to basic necessities. Unemployment rises when machines perpetuate discriminatory biases around age, gender, race and disability. These biases often stem from imbalances in the historical data that developers feed into AI systems. Unfortunately, systems operating with these biases help the rich grow richer while the poor stay penniless, entrenching economic inequality in several countries.

Our extreme dependence on AI systems means machines now make predictions, decisions and classifications that affect countless individuals. This lack of human supervision in regulating the actions and bias running through AI software has led to immense violations of citizens’ civil rights, as well as a lack of accountability from the developers who programmed that software.

As we’ve come to find, AI feeds on massive datasets to improve its efficiency. However, “datasets” and “recorded statistics” are often euphemisms for the vast quantities of personal information being unethically collected about individuals. This information reveals their details and interests to AI implementers, who then use customised digital content to foster addiction. The result is an array of ethical issues: denial of individual autonomy, surveillance and invasion of privacy, and the exploitation of users as sources of corporate revenue.

Speaking of the constant, unauthorised feed of personal details being digested by this software, it feels wrong to leave out the issue of polarisation, specifically political polarisation. It is now entirely plausible for candidates and campaigns to deploy statistics and information, tailored very carefully to an individual’s social feed, in order to manipulate their views and votes. This risk is amplified by systems of interactive democracy, which convert the in-person delegation of votes for and against representatives into an online event. This leveraging of technical developments is another unethical disadvantage that numerous citizens face.

However, researching how unethical AI can be detrimental to communities is not to say that AI systems shouldn’t be integrated into society. These points are meant to underscore how ethics can ensure that AI software simulates human thinking and intelligence while exhibiting moral behaviour and egalitarianism.

If one really looks at it, Artificial Intelligence can potentially be used to unravel treatments for once-incurable diseases, to forecast weather and natural disasters, and to perform daily tasks with great efficiency. AI systems, if trusted, can act as competent assistants, engaging with society as needed, rather than as confining guides that place restrictions on an individual’s lifestyle.

But that’s just it: trust. We, as a vastly diverse society, can truly reap the social benefits of AI developments by working on our trust in them. One effective method companies use to build accountability into AI systems is the provision of explanation-based safeguards.

The General Data Protection Regulation (GDPR), enforced as law in the EU in 2018, is one such measure. It enables users to exercise their ‘right to explanation’ regarding algorithmic decisions made about them. To put it simply, such policies help build trust in AI systems by clearly demonstrating to people how the machine evaluated their input and why it recommended certain outcomes.
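To make the idea of an ‘explanation’ concrete, here is a minimal, purely hypothetical sketch in Python. It assumes a toy linear scoring model with made-up features and weights (not any real lender’s system), and shows how a decision can be returned together with each factor’s signed contribution, in the spirit of the right to explanation:

```python
# Hypothetical explanation-based decision sketch.
# Features, weights, and threshold are illustrative assumptions only.
WEIGHTS = {"income": 0.4, "credit_history": 0.5, "existing_debt": -0.6}
THRESHOLD = 0.5

def decide_with_explanation(applicant: dict) -> tuple[bool, dict]:
    """Return (approved, explanation), where the explanation maps each
    feature to its signed contribution to the overall score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = decide_with_explanation(
    {"income": 0.8, "credit_history": 0.9, "existing_debt": 0.7}
)
# The user sees *which* factors drove the outcome, not just the verdict.
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
print("approved" if approved else "declined")
```

Real systems are of course far more complex, but the principle is the same: expose the reasoning behind a decision, not merely the decision itself.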

Through my research, I was introduced to the immense relevance AI holds in today’s world and in our day-to-day activities: whether it is using a GPS to reach a friend’s house, using Google to rapidly surface the answers most relevant to our queries, or using Siri to name the song we are humming.

The prominent roadblock for AI systems, then, seems to be a lack of acceptance of their ever-growing value. If AI is to be accepted across vast and varied communities around the world, it must be able to align itself with the social norms and values present in each. Such adaptation can improve the efficiency of AI systems in the environments they operate in, while ensuring the software complies with the legislation and policies of local communities.

What I’ve derived from my research and observations is that guaranteeing the integrity of data, protecting every individual’s autonomy, and enacting ethical policies can give society the ability to take full advantage of AI systems’ capabilities while minimising the possible underlying consequences.
