Brain and Self Intelligence | Daeyeol Lee | TEDxKFAS


Translator: Tanya Cushman
Reviewer: Peter van de Ven

So, the brain is a marvelous machine;
it can do lots of things. It’s responsible for all the thoughts, emotions and complex decisions
that you’re making, and everyone in this room
has one, and only one. Okay? So with this marvelous machine, we’ve created AI, which is getting
more and more powerful every day, and now we’re spending a lot of time thinking about when AI might exceed us
and begin to threaten us, and also we have
the challenge of controlling AI so that it can benefit human society. So the question arises as to whether we can use our brain and AI to understand ourselves, because in order
to understand the questions about the relationship
between humans and AI, not only do we have to understand
how the AI is created, what it can do, but we also have to understand who we are and what
we are capable of doing. My main argument today is that we know far more about AI
than about the human brain. That’s why I chose this image. Because if you look
at this image carefully – which was painted by my favorite artist, René Magritte – you can see that the mirror shows you
an accurate reflection of the book, a small book, but it’s not what you would expect to see for the image of the person
standing in front of the mirror, and symbolically and figuratively, I’d like to argue that AI
is like that small book and we still don’t know
exactly who we are, even if you use the most powerful
scientific engine that we have developed in our history. And to illustrate that, I’d like to point out several interesting features of the human brain, what the brain does well, which might be somewhat counterintuitive; they may be different from what you have always assumed the brain really does. As a first illustration, I want you to focus on the red dot
at the center of this image and try not to move your eyes. Try to fixate the red dot, and notice the fact
that while you’re doing that, probably all of you can see three other colored dots
in your peripheral vision: yellow, green and white dot, okay? But you’ll notice that those dots
will disappear from your vision once the pattern around them begins to rotate, and please raise your hand – so that I have a sense
as to whether this is working or not – when the dots begin to disappear. Thank you. So this is a very famous
illusion in psychology called “motion-induced blindness.” It’s been confusing and baffling
to many psychologists because it’s not that easy to explain
as to why this happens. But at a higher level,
this is quite expected because this happens because our brain
is a predictive machine. A lot of us think that the brain gives us a very accurate and reliable
representation of the outside world, like a library, where you can store information in countless books and retrieve the information
you want from a particular page, or a theater, where you can see the movements
of all the actors accurately. But that’s not what the brain is good at. What the brain is trying to do is always
trying to predict what’s going to happen. It’s tuned and constantly looking
for things that are unpredictable, that you did not predict, and then tries to minimize
the errors or discrepancies between what you predicted and what you observed
in the outside world. And the reason why these dots disappear after a few seconds is that those dots are completely predictable; therefore, the brain loses interest quickly and begins to erase them from your mental activity. This kind of explanation can account for lots of other
complex phenomena in the brain. For example, if you present
any sensory stimulus, such as a flash or sound, and then maintain it for a while
and then turn it off, the neurons in the brain
do not follow that pattern precisely; instead, they tend to show a vigorous response, a huge response, at the onset of the sensory stimulus and then quickly reduce their activity, and when the stimulus is removed, they briefly suppress their activity. The reason why they do that – one explanation for that – is that these neurons are constantly looking for surprising, unpredictable events. So when the stimulus is turned on unpredictably, neurons respond vigorously, and when it is turned off unexpectedly, they briefly suppress their activity. But when there are no changes in the environment, neurons are simply not that interested.
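To make that account concrete, here is a minimal sketch in Python – my own toy illustration, with made-up time constants and units rather than anything from the talk – of a cell whose response simply tracks the mismatch between a stimulus and a slowly adapting prediction of that stimulus:

    import numpy as np

    dt = 1.0                                   # time step in ms (illustrative)
    time = np.arange(0, 600, dt)
    stimulus = ((time >= 100) & (time < 400)).astype(float)   # stimulus on for 300 ms

    prediction = 0.0
    tau = 50.0                                 # how quickly the prediction adapts (ms)
    baseline = 0.2                             # spontaneous activity (arbitrary units)
    response = []

    for s in stimulus:
        error = s - prediction                 # surprise: observed minus predicted
        response.append(baseline + error)      # burst at onset, dip below baseline at offset
        prediction += (dt / tau) * error       # the prediction slowly catches up with the input

    response = np.array(response)
    print("peak at onset:", round(float(response.max()), 2),
          "dip at offset:", round(float(response.min()), 2))

The burst at stimulus onset, the decay while the stimulus stays on, and the brief dip when it disappears all fall out of a single rule: respond to what you did not predict.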
Another example that’s consistent with this kind of explanation is the illusion called “rotating snakes.” If you fixate any point in this image and do not move your eyes, you can confirm that this image is completely static; there is nothing on this screen that’s moving around. But as soon as you begin to move your eyes around, you’ll notice that a lot of these circles briefly rotate. And again, this is very confusing. Why do we see motion
that’s not there in the physical world? An interesting study that was recently published showed that if you build a deep-learning network – a cutting-edge AI model of human cognition – and then train that network to predict the next frame in the many different movies that you use to train it, those kinds of deep-learning networks will also see the illusory motion that you see with this rotating-snakes illusion. Again, this suggests that the reason why you see these illusory motions is that your brain is more interested in predicting what’s going to happen on the next screen than in analyzing what’s currently on the screen.
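The objective used in that kind of study can be sketched in a few lines. The toy Python below is my own illustration, not the published model (which used a much larger deep network); it only shows what “train a network to predict the next frame” means, using a simple linear predictor and tiny artificial “frames”:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels = 64                              # one tiny "frame", flattened to a vector
    W = np.zeros((n_pixels, n_pixels))         # linear predictor: next_frame ~ W @ frame
    lr = 0.01

    def make_clip(n_frames=20):
        """A drifting random pattern: each frame is the previous one shifted by one pixel."""
        first = rng.standard_normal(n_pixels)
        return [np.roll(first, t) for t in range(n_frames)]

    for _ in range(200):                       # many short training "movies"
        clip = make_clip()
        for current, nxt in zip(clip[:-1], clip[1:]):
            error = nxt - W @ current          # error on the predicted next frame
            W += lr * np.outer(error, current) # nudge the predictor to reduce that error

    test = make_clip()
    print("next-frame error after training:",
          round(float(np.linalg.norm(test[1] - W @ test[0])), 3))

A predictor trained this way cares only about where the image is going next, and the finding reported in the study is that deep networks trained on this kind of objective report spurious motion for the static rotating-snakes pattern, just as we do.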
Okay, so that’s one important feature of our brain that we may not be aware of all the time: our brain is a predictive machine. So we might be able to predict how the difference between AI and humans will change, but it might be harder for us to understand exactly what human brains are and what the difference between AI and the human brain is. The second important feature
of the human brain and mind that I’d like to point out is that we are very efficient
in categorizing things. So when I show you an image like this, everyone in this room can immediately
recognize they’re all apples and there are green apples and red apples, and, of course, there are
many other features in this image that you instantaneously ignore, such as subtle changes in the colors and subtle changes
in the size of the apples, but we can effortlessly classify and
categorize different objects that we see, and this is very hard for AI to do, okay? In fact, many AI algorithms kind of
show off their performance by showing that they can categorize different images
into cats and dogs and so on, but humans are really,
really good at this. But this comes at a cost as well because as soon as you start
categorizing things into discrete domains, you start to ignore
the subtle differences between them. So the question is, Why are we so good at categorizing things, and why are we also so good at ignoring the rich variations
among different individual objects? So the reason why we are so good
at ignoring different features is because the brain evolved not to accurately represent
the outside world but to choose the most
beneficial action for us. And that requires different kinds of functions and properties. So for example, imagine
that you’re both tired and hungry. So you need to sleep and eat
at almost the same time. And imagine that you are, like, indecisive and you’re trying to switch
between these two activities constantly; you’re going back and forth
between the bed and the kitchen. You will not accomplish
either of these objectives. So in order to deal with these kinds
of conflicting situations, what the brain has to do is build networks with these kinds of structures, where the neurons that are responsible for selecting sleeping behavior, once they get chosen, have to maintain their activity and suppress the activity of competing neurons that might be promoting eating, so that while you’re sleeping you don’t constantly get up to go to the kitchen, and vice versa. These two functions are often referred to as self-excitation, which maintains the activity of the neurons persistently, and mutual inhibition, because you don’t want the competing actions to interrupt the ongoing activity. And this type of action selection might be the reason why, even when we’re trying to evaluate different objects we see in the sensory world, we tend to categorize things right away.
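A minimal simulation can make this concrete. The sketch below is my own toy Python, with invented weights and firing rates capped between 0 and 1, not a model from the talk: two pools of neurons, “sleep” and “eat,” each excite themselves and inhibit each other; the pool with the slightly stronger drive wins completely, and its activity persists even after both drives are removed:

    def step(sleep, eat, drive_sleep, drive_eat, dt=0.01,
             w_self=1.2, w_inhibit=1.5, leak=1.0):
        """One Euler step for two rate units with self-excitation and mutual inhibition."""
        d_sleep = (w_self - leak) * sleep - w_inhibit * eat + drive_sleep
        d_eat   = (w_self - leak) * eat   - w_inhibit * sleep + drive_eat
        clip = lambda r: min(1.0, max(0.0, r))         # firing rates stay in [0, 1]
        return clip(sleep + dt * d_sleep), clip(eat + dt * d_eat)

    sleep, eat = 0.0, 0.0
    for _ in range(2000):                              # both urges present, "sleep" a bit stronger
        sleep, eat = step(sleep, eat, drive_sleep=1.05, drive_eat=1.0)
    print("with both drives:", round(sleep, 2), round(eat, 2))

    for _ in range(2000):                              # drives removed: self-excitation keeps
        sleep, eat = step(sleep, eat, 0.0, 0.0)        # the winner's activity going
    print("after drives end:", round(sleep, 2), round(eat, 2))

With those two ingredients – self-excitation to sustain the chosen action and mutual inhibition to silence its competitor – the network commits to one option instead of dithering between the bed and the kitchen, and the same all-or-none commitment shows up when we snap what we see into discrete categories.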
So, the third feature of our mind and brain is the fact that we’re extremely social, and therefore our brain
is also extremely social. And the reason is that social activity is more important than individual decision-making in everyday life, in almost every moment of our lives, and it also requires
very complicated computations. Because unlike when you
are making decisions individually, when you’re acting in a social context, what’s most important is for you to accurately predict
what other people will do, because the success or failure of your actions really depends upon your ability to predict what other people will do. But this gets complicated very quickly because both you and I
have similar hardware, our brains, with similar objectives, and therefore, just like I’m trying
to predict what you’re going to do, you are also trying to predict
what I am going to do. Therefore, for me to be able to predict accurately what you’re going to do, I also need to be able to predict what you’re going to predict that I’m going to do. And this can go on many, many times and creates a very complicated loop.
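The structure of that loop is easy to see in code. Here is a toy Python sketch of my own – with the actual choices reduced to placeholder strings – of what it means for my move to depend on a simulation of your move, which in turn depends on a simulation of mine:

    def best_move(player, depth):
        """Choose a move by simulating the other player's reasoning `depth` levels deep."""
        if depth == 0:
            return "default"                        # stop simulating and just guess
        other = "you" if player == "me" else "me"
        their_move = best_move(other, depth - 1)    # run the other mind inside my own
        return f"respond-to({their_move})"

    print(best_move("me", 3))
    # -> respond-to(respond-to(respond-to(default)))

Every extra level is another nested simulation, and a mistake at any level propagates all the way back up, which is one way to see why this kind of recursive prediction leaves so much room for error.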
Therefore, it creates lots of room for errors, and a lot of the mental disabilities that we have actually correspond to failures in this type of mental simulation, in imagining what other people might be thinking and so on. So that’s the cost of having
an extremely successful social brain that deals with these complicated
social situations. One last feature
that I’d like to point out is that our brains are analog computers, and this is important to keep in mind because, in order to find the best strategies to adopt when we’re dealing with rapidly developing AI technology, we need to understand the difference between human brains and digital computers; we often tend to think that a lot of the features that digital computers have might be applicable to human brains. But at the fundamental level, the human brain is an analog computer. It processes analog signals rather than processing bits, zeros and ones, you know, a binary number scheme. And one good example is that there are elements
in both human brains and supercomputers that work like a switch: small changes in their input have a huge effect on the ongoing activity of the system. In the computer, it’s the transistor that does the work of the switching. In the human brain, there are elements called synapses, which are where two different neurons meet and where information exchange occurs. And when you magnify these structures,
transistors and synapses, you can see that, physically, the transistor has
a much simpler structure. Synapses are much more complicated, and they can process analog information
in a much more complicated way than a computer transistor. So, and this also can explain some additional similarities
between human brains and analog computers because just like the human brain, analog computers
can perform different functions depending upon the connections
that you make between different parts. And similarly in the human brain,
the brain is programmed; it’s not by entering some computer codes, but instead, altering the connections
between different neurons. So that’s another similarity
that makes a lot of sense when you begin to realize
that the human brain is more like an analog computer
than a digital computer. So understanding these differences,
and also similarities, between human brains
and computers and AI and robots can actually help us solve
some difficult ethical challenges that we face ahead. For example, when you watch a sci-fi movie where robots act very much like humans, it’s very difficult not to feel some empathy for these robots and not to want to grant them some rights, which I think is very dangerous, and I think it follows from some of the tendencies that our brain has, as I’ve already explained. Personally, I think that the reason
why we feel empathy for the robots that look
like us and behave like us is not that different from the emotional
response that we might show when somebody, for example, tries to destroy
some physical objects we value. For example, if you’re a music fan and you’re attached to an object like a piano, and then somebody smashes it, it’s actually very painful to watch, even though that piano
has no mental activity, has no emotion and so on. I think those are due
to the similar properties of our human brain and human mind. And one last thing I’d like to point out is that getting complete self-knowledge,
even using marvelous scientific tools like the tools that are currently
developed in neuroscience, is actually really, really hard. In a way, to get complete self-knowledge or self-intelligence, you’d probably have to record all the neurons in your brain and then analyze them and extract all the meaningful patterns from them, which is not only physically impossible but also contains a logical paradox, one noticed by many philosophers, mathematicians and logicians in the last century and adopted in many science-fiction movies. So, for example, in this particular
Star Trek episode, Captain Kirk and his crew
were visiting a planet; in fact, they were trapped on a planet which was dominated by androids. And in order to escape from them, Captain Kirk told one android, “I’m lying,” a statement that refers to itself. And therefore, when this robot began to analyze the truth of this statement, it overheated and burned itself out, which allowed Captain Kirk to escape that planet safely.
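The android’s predicament can be caricatured in a few lines of Python – a deliberately silly sketch of my own, not a model of anything in the episode – in which evaluating “I’m lying” means computing the negation of its own truth value:

    def liar_is_true():
        """'I am lying' is true exactly when it is false, so evaluation never settles."""
        return not liar_is_true()

    try:
        liar_is_true()
    except RecursionError:
        print("The evaluation never terminates - the android overheats, so to speak.")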
So, in closing, I’d like to do a short experiment. I asked the camera crew earlier to aim the camera at the screen as I’m exiting the stage, which creates a kind of video feedback, something that was first noticed almost 60 years ago, when people started connecting video cameras to TV screens. In a way, you can see that as an analogy to a simple physical device – a camera and a screen – trying to understand itself, and you can see that even for a camera, which is much simpler than our human brains, this is a very complicated process. So I hope that you enjoy this video. Thank you. (Applause)
