By Peer Javeed Iqbal
July 6, 2024
Artificial intelligence systems are designed to mimic certain human brain functions, such as learning and problem-solving, but they are not identical to the human brain in their structure or operation. Operationally, AI is quite different from the human brain, and it would be inappropriate to use the brain as a metaphor to define artificial intelligence.
We live in thrilling times: the pace at which technology is evolving and improving our lives is accelerating exponentially. We have reached a point where the human ability to adapt to technological change lags behind the rate at which science and technology are reshaping the world.
Humanity is at the cusp of the fourth industrial revolution; this time, it is not the power of coal, steam, or electricity that is driving the change. Instead, it is an innate attribute of our very existence, data, and its interpretation in many dimensions that lies at the core of the change. To appreciate how fast change is happening, consider that nearly 90% of all the world’s data has been generated in the past couple of years. This is true for almost every sphere of life, but the field that will be a game-changer lies at the intersection of Artificial Intelligence (AI) and the human brain. AI has the potential to revolutionize many aspects of our lives, from healthcare and medicine to transportation and communication. As we continue to make progress in this field, it will be essential to consider the ethical and societal implications of these advancements and to ensure that they are used to benefit humanity. Data is turning out to be the new currency, with machine learning and AI taking on more and more human tasks and doing them better than before.
AI systems can perpetuate and even amplify biases present in the data used to train them, leading to unfair and discriminatory outcomes. This is because AI systems, particularly those that use machine learning, are only as unbiased as the data they are trained on. If the data used to train an AI system contains biases, the system will be biased too.
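To make this mechanism concrete, here is a minimal, purely illustrative Python sketch. The data, the numbers, and the hypothetical hiring scenario are all invented for the example; the point is only that a model trained on skewed historical decisions reproduces that skew.

```python
# Illustrative sketch only: a toy "hiring" classifier trained on invented,
# skewed data, showing how a model reproduces the bias in its training set.
from sklearn.linear_model import LogisticRegression

# Synthetic historical records: each row is (years_of_experience, group),
# where group 0/1 stands for two demographic groups. In this made-up history,
# equally experienced candidates from group 1 were rarely hired.
X = [[5, 0], [6, 0], [4, 0], [7, 0],
     [5, 1], [6, 1], [4, 1], [7, 1]]
y = [1, 1, 1, 1, 0, 0, 0, 1]  # past decisions (1 = hired)

model = LogisticRegression().fit(X, y)

# Two new candidates with identical experience, differing only in group:
for group in (0, 1):
    p = model.predict_proba([[6, group]])[0][1]
    print(f"group {group}: predicted probability of 'hire' = {p:.2f}")

# The model assigns group 1 a lower hiring probability, not because of
# ability, but because the historical data it learned from was skewed.
```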
Real-world examples abound. Facial recognition systems have been found to have higher error rates for women and for people with darker skin tones because the training data used to develop them was not diverse enough. Similarly, natural language processing systems have been found to perpetuate sexist stereotypes because the data used to train them was sourced from text written primarily by men, for men. In 2018, researchers found that an AI system used for hiring was less likely to recommend female candidates for jobs in male-dominated fields because the training data used to develop the system came mostly from resumes submitted by men. In 2020, researchers found that an AI system for identifying hate speech on social media was more likely to flag tweets written in African American English as hate speech because its training data did not include enough examples of that dialect. Besides, AI systems, like any other technology, can malfunction or be hacked with malicious intent, with consequences that may or may not have been foreseen. In 2016, a self-driving car operated by Tesla was involved in a fatal accident when the car’s sensors failed to detect a white semi-truck turning across its path. Automated trading algorithms have likewise triggered “flash crashes” on the stock market, causing a rapid drop in the value of several stocks before they quickly recovered. And in 2015, a robot killed a worker at a Volkswagen plant in Germany, an incident attributed to a malfunction in the robot’s safety system.
All these findings highlight the importance of training AI systems on diverse and inclusive data sets and of monitoring their performance to detect and address any biases. It is equally important to have diverse teams working on the development and deployment of these systems so that all perspectives are considered. Researchers, developers, and policymakers must be aware of these risks and take steps to mitigate them, such as thorough testing, safety protocols, and transparency in the development process. So does all this mean AI is doing more evil than good?
Nothing could be further from the truth. Today, AI is ubiquitous, touching our lives in more ways than we can comprehend. A few hiccups are inevitable as scientists continue to probe the inner mechanics of the human brain that AI aims to mimic.
AI has made remarkable advances in the past few decades, the most famous of which came in 1997, when IBM’s Deep Blue beat world chess champion Garry Kasparov in a six-game match. Today, AI systems play an ever-increasing role in aiding and augmenting human intelligence. In medicine, AI-based techniques have been used to analyze medical images, such as X-rays and CT scans, with accuracy comparable to that of human radiologists. In drug discovery, AI systems have been used to screen large numbers of chemical compounds and have identified new drug candidates that human researchers would likely have missed. In natural language processing, AI-based systems can generate human-like text, translate languages, summarize long documents, and even produce coherent and informative answers to complex questions.
In the field of computer vision, AI-based systems can now outperform humans in tasks such as object detection, image classification, and facial recognition. In the field of self-driving cars, AI systems are being used to control vehicles with increasing safety and reliability.
AI’s utility is not limited to applied technologies alone. AI has been used
to make significant strides in solving problems in mathematics and physics. It
is used to simulate complex physical systems, such as the behaviour of
subatomic particles and the dynamics of fluids, and to analyse large data sets
from particle accelerators and telescopes, leading to discoveries in fields
such as high-energy physics and astronomy.
These are but a few among hundreds of examples that demonstrate AI’s ability to perform tasks previously thought to be the exclusive domain of humans. But how about something comparable to human creativity, such as art, music, or poetry? In 2021, OpenAI introduced its model DALL-E, which can create artificial images, even abstract ones, from prompts or themes provided by human beings. How is this possible? Researchers trained AI systems on large datasets of existing artwork to learn the styles and techniques of famous painters, and then used these systems, built with deep learning, generative models, and neural networks, to produce new paintings in those styles and even to invent new ones. In November 2022, OpenAI released its natural language processing model ChatGPT, trained to interact with humans in a way that feels like a natural conversation. I once prompted ChatGPT to write a poem reflecting the synergy between humans and machines. A few lines it gave back were:
In the
dance of metal and flesh, they meet, where circuitry hums to the heartbeat’s
beat. In realms of silicon and neurons entwined, a harmony of creation is
brilliantly designed.
AI systems are designed to mimic certain human brain functions, such as learning and problem-solving, but they are not identical to the human brain in their structure or operation. Operationally, AI is quite different from the human brain, and it would be inappropriate to use the brain as a metaphor to define AI.
AI systems
are typically based on algorithms and mathematical models, while the brain
comprises complex neuronal networks that communicate through electrical and
chemical signals. AI systems can be trained to perform tasks by being fed large
amounts of data and using that data to make decisions, while the brain can
learn and adapt in response to experiences and new information. The brain can
reorganize itself after injury, a capacity that AI systems generally lack. AI systems are limited by their programming and the data they have been
exposed to, while the human brain has the ability to make novel associations
and connections. The brain can process and integrate a wide range of sensory
information, while most AI systems are limited to processing data that they are
explicitly programmed to handle. The brain is also capable of creative thought
and abstract reasoning, while most AI systems are designed to perform specific
tasks and cannot think creatively.
The human
brain has had 300,000 years of development in the human species, and nearly
seven million years of evolution. In comparison, artificial intelligence is
a relatively new, man-made technology, hardly a blip in time. A true test of
AI's ingenuity would be whether it can spawn another artificial intelligence on
its own, without human intervention. Until then, the spongy 1.4 kg organ of
flesh and blood between our ears will continue to define what it means to be
human.
------
Peer Javeed Iqbal works at the IGNOU Regional Centre, Srinagar. He has expertise in Cyber Law and Information Security and also teaches computer science students at IGNOU.
Source: Will The Machines Rule The World?