Mind and Machine: Two Sides of the Same Coin?

09 June 2020

Brain science and AI have been progressing in a closely knit fashion for the past several decades

Ancient Greek philosophers spent much of their time pondering what truly makes one intelligent. But the concept was embraced by science and research only about half a century ago.

Ever since its inception, neuroscience has striven to understand how the brain processes information, makes decisions, and interacts with the environment. But in the mid-20th century a new school of thought arose: how can we emulate intelligence in an artificial system?

Soon after the birth of modern computers, research on AI gained momentum, with the goal of building machines that could “think”.

With advances in microscopy in the early 1900s, researchers began probing neuronal connections in brain tissue. These observations later inspired computer scientists to develop the Artificial Neural Network (ANN), one of the earliest and most influential models in the history of AI.

In 1949, Donald Hebb proposed Hebbian learning, one of the oldest learning rules, drawn directly from the dynamics of biological neurons. The essential principle: when two connected neurons fire together repeatedly, the synapse between them is strengthened. In the brain, this is one of the ways we learn and remember things, the basis of the “potentiation” of memory. The same rule can be applied to an ANN.
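
As an illustration, here is a minimal sketch of a Hebbian weight update in Python; the learning rate, activity values and function names are illustrative assumptions, not part of Hebb's original formulation.

```python
import numpy as np

# Hebbian update: a weight grows when its pre- and post-synaptic
# activities are high at the same time ("fire together, wire together").
def hebbian_update(weights, pre, post, learning_rate=0.01):
    """weights: (n_post, n_pre) matrix; pre, post: activity vectors."""
    return weights + learning_rate * np.outer(post, pre)

# Toy example: two input neurons driving one output neuron.
w = np.zeros((1, 2))
pre_activity = np.array([1.0, 0.0])   # only the first input is active
post_activity = np.array([1.0])       # the output neuron fires
for _ in range(10):                   # repeated co-activation...
    w = hebbian_update(w, pre_activity, post_activity)
print(w)                              # ...strengthens only the first connection
```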

Following this development, research on ANNs surged. A landmark achievement was the perceptron, a single-layer ANN developed by Frank Rosenblatt in 1957 that can process multiple inputs and that laid the foundation for the multilayer networks that followed.
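
A minimal Python sketch of such a single-layer perceptron is given below; the threshold activation and error-driven weight update follow the textbook form of the algorithm, while the toy data (the logical AND function) and the learning rate are chosen purely for illustration.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Single-layer perceptron: weighted sum of inputs, thresholded at 0."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi @ w + b > 0 else 0
            error = target - prediction
            w += lr * error * xi      # nudge weights toward the target
            b += lr * error
    return w, b

# Toy example: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```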

The 1981 Nobel Prize in Physiology or Medicine was awarded to David Hubel and Torsten Wiesel for elucidating visual processing by recording the responses of neurons exposed to different images. Their work showed that biological systems use successive layers of nonlinear computations to transform simple inputs into complex features, paving the way for modern AI research.

Activation of a neuron in the brain creates a spike in membrane potential that is transmitted along the axon as an electrical signal. By analogy, the neuronal cell body corresponds to a node, the dendrites to the inputs, the axon to the output, and the synapses to the weights in an ANN.

In ANNs, mathematical tools such as linear combinations and sigmoid functions are used to compute the activation at a given node, the artificial counterpart of the action potential. A typical ANN consists of an input layer, an output layer and one or more hidden layers, with complexity growing as more hidden layers are added.
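
As a rough sketch of this computation, the toy network below passes an input through one hidden layer using linear combinations and the sigmoid function; the layer sizes and random weights are arbitrary examples.

```python
import numpy as np

def sigmoid(z):
    """Squashes the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """One forward pass: each layer takes a linear combination of the
    previous layer's outputs and applies the sigmoid to it."""
    activation = x
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation

rng = np.random.default_rng(0)
# A toy network: 3 inputs -> 4 hidden nodes -> 1 output node.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
print(forward(np.array([0.5, -1.0, 2.0]), weights, biases))
```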

AI essentially aims to investigate theories of intelligence and to build computer systems equipped to perform tasks that normally require human intelligence, decision-making and control. Because building such systems forces researchers to spell out how intelligent behaviour might arise, AI has an important role to play in understanding the nuances of the brain.

In fact, AI is rapidly becoming an indispensable tool in neuroscience. Machine learning has greatly simplified the interpretation of data obtained from techniques such as functional magnetic resonance imaging (fMRI). The benefit runs both ways: a detailed analysis of how the brain works could, in turn, provide important insights into simulating such processes artificially.
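
As a hedged sketch of what such an analysis can look like, the example below decodes a simulated experimental condition from randomly generated “voxel” data with a linear classifier; the data are entirely synthetic, and scikit-learn is just one of many toolkits that could be used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic stand-in for fMRI data: 100 scans x 500 voxels, two conditions.
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)
X[y == 1, :20] += 0.8  # condition 1 weakly activates the first 20 "voxels"

# Decode the condition from the activation pattern with a linear classifier.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # typically well above chance
```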

Neural networks are only a rough representation of the brain, as they model neurons as numbers in a high-dimensional matrix. Our brains, by contrast, are highly sophisticated organs that operate through electro-chemical signals, and that is part of what makes us unique as individuals. It is undeniable that AI has left an indelible mark on the world in many ways. Nevertheless, matching the computational power and mystique of the human brain may well remain beyond machines for years to come.


Sukanya Chakraborty (1), Siddhartha Sen (2)

  1. Indian Institute of Science Education and Research, Berhampur
  2. V. College of Engineering, Bengaluru
