Comparing Changing Views of Intelligence Over Time (2015)
While one may be forgiven for thinking that there are no new ideas and that movie sequels and remakes are a waste of time, few other media allow us so cheap yet broad-ranging a survey of trends over time. Art imitates life. Compare art that is as close to apples-to-apples as possible, do it across decades, and we get a truly poignant social survey.

Robocop (1987) is a dystopian movie set in the crime-ridden, industrially abandoned city of Detroit. With corporations taking over and internal politics taking center stage, a desperate Vice President and his team of engineers in lab coats cobble together an automated, computerized law-enforcement police robot. Armed with automated detection sensors, automated legs, and automated cannons, the Vice President demonstrates the robot at a boardroom meeting, whereupon the robot promptly suffers a glitch and automatically executes an innocent employee. A young executive turns this into an opportunity to showcase his approach of integrating a man in the loop as a cyborg. The cyborg, a blend of critically injured police officer and prosthetics, becomes Robocop. With a sentient intelligence in the loop, Robocop protects the innocent, serves the public trust, and upholds the law. Moral of the story: corporations are bad, and so are programmed machines. Only human judgment is fit to police humans.

Robocop
(2014) is a futuristic movie set in China and Detroit. With military-corporate robotic drones taking over foreign military and occupation duties, a market-share-hungry executive consults focus groups and decides to give his robots a human face to make robotic policing a palatable alternative in Detroit. A critically injured police officer gets grafted into a prosthetic body and becomes Robocop. However, he cannot compete with the existing fully automated military robots, since his being in the loop only slows down his shooting. So the executive has his scientists and engineers bypass the human intelligence of the injured police officer and fully automate Robocop. The new, improved Robocop is successful because it is effectively an efficient all-robot killing machine. The machine's programming controls the human. Moral of the story: corporations are bad. Machines are good overlords. Get out of the way, humans.

Has computing technology advanced so far in 27 years that audiences that once accepted robots as fancy toasters that can glitch and shock their users can now suspend their disbelief enough to accept them as superior overlords?
Art imitates life. What does life say? Researchers hope that we adopt their self-driving cars by 2020. Why? Because they want to protect their kids from the dangers of driving by letting the machine act as the overlord. Should we move over and ride in self-driving cars, or have computer-controlled Robocops? Have computers or their algorithms gotten smart or intelligent enough to take over? Would their taking over save lives? Let us analyze these disturbing questions one at a time, rationally.
(1)
Would computer overlords taking over save lives?
In
art, a Robocop that shoots quickly and accurately at bad guys to rescue
innocents would undoubtedly save the lives of the innocents.
In life, a properly programmed driving computer could detect an obstacle, hit the brakes by default, and undoubtedly save the lives of any innocent passersby.
So yes, a computer overlord – if and only if properly programmed
and tested – would save lives.
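For illustration, the "hit the brakes by default" behavior is exactly the kind of rule a programmer can write down in advance. A minimal sketch (the function name and the stopping-distance threshold are hypothetical, not taken from any real driving system):

```python
# Hypothetical default-braking rule for a driving computer: if a sensor
# reports an obstacle within the stopping distance, the car brakes.
def should_brake(obstacle_distance_m: float,
                 stopping_distance_m: float = 25.0) -> bool:
    """Brake whenever a detected obstacle is within stopping distance."""
    return obstacle_distance_m <= stopping_distance_m

print(should_brake(10.0))   # obstacle close: brake
print(should_brake(100.0))  # road clear: keep driving
```

The rule is only as good as its programming and testing: the threshold and the sensor reading were chosen by a human beforehand.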
(2)
Have computers, then, gotten smart or intelligent enough to warrant taking over? Or, if not, would the trend lead to their one day being so intelligent?
A
computer computes.
As any computer science student knows, a computer does precisely what its program tells it. Any deviation at this stage signals a broken computer. So the smartness or intelligence of any computer relies upon the program's instruction codes, the rulebook, so to speak.
The programmer is the scribe who translates the goal into
unambiguous instruction sets.
The manager is the writer who sets the goals.
A computer carrying out the instruction set is a toaster cooking
sliced bread as desired in service of the user.
It is precise, infallible, helpful, and efficient.
But a toaster is not intelligent.
A rulebook is not intelligent.
A program is not intelligent.
A computer is not intelligent.
They are the signs and results of someone’s intelligence and
cleverness, but they are not intelligent.
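To make the contrast concrete, here is a hedged sketch of such a "clever" device: its entire behavior is a handful of condition-action rules that an engineer wrote down in advance (the names and thresholds below are hypothetical):

```python
# A hypothetical rule-following toaster controller. Every behavior is an
# explicit condition-action rule supplied by its programmer; the device
# does precisely what the rules say and nothing else.
def toaster_action(elapsed_s: float, smoke_detected: bool,
                   timer_limit_s: float = 120.0) -> str:
    if smoke_detected:                # rule 1: smoke sensor trips -> shut off
        return "shut off"
    if elapsed_s >= timer_limit_s:    # rule 2: timer expires -> shut off
        return "shut off"
    return "keep heating"            # default rule: keep toasting

print(toaster_action(30.0, False))   # still toasting
print(toaster_action(30.0, True))    # smoke detected: shut off
```

Any cleverness here belongs to whoever wrote the rules; the device itself merely looks them up.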
A computer-driven car is smart and safe just as a timed toaster oven automatically shuts off after a set time or, perhaps in truly cleverly built toasters, after the smoke detector senses smoke. But a cleverly programmed car is exactly as smart and intelligent as a toaster with timers and smoke sensors.

(3)
Should we move over and welcome our computer overlords?

Some philosophers urge that computers should somehow share our values.
Other philosophers complain that passing control to the computer
makes human skills atrophy.
These are fine complaints.
But they are akin to saying a would-be overlord is acceptable as long as it dictates to us with our present values. Or that the primary complaint is that blindly following the dictates of the would-be overlord makes our living skills atrophy.
Should one be more disturbed at the thought of a would-be overlord
or the thought that these are the best complaints against handing over
control to a would-be overlord?
In
Robocop (1987), the protagonist cyborg at first coldly and efficiently
dispatches bad guys with highly accurate and precise gunfire.
But he eventually recovers his warmth and humanity, which causes him to follow leads and make arrests in contravention of his programming.
At the end, he identifies himself with his former human name.
In Robocop (2014), the protagonist cyborg at first warmly and humanely hesitates to shoot bad guys.
But the lab scientists lobotomize and drug him up so that the
computer programming makes him a better gunslinger.
The conflict is not about his resurfacing humanity and what makes a better Robocop; rather, it is about whether it is ethical to make him a better Robocop by removing parts of his original brain.
The story glosses over the question of whether this would truly
make him a better Robocop as if it were a given with no need for
discussion.
If the audience only goes along with the ethics, values, or skills-atrophy arguments but does not question this glaring assumption, then this is a great disservice to all those who serve in police uniforms. It is also a great disservice to defining or setting the goal of intelligence.
As with a straw-man argument, it is pointless to engage, since the entire scope of the argument is irrelevant, distracting, and beside the real point.

The
real point is that a police officer typically carries a gun but is not about the gun. If shooting bad guys were the primary means and Key Performance Indicator (KPI) of cops saving innocents, then shooting all the bad guys quickly and efficiently would net a perfect score.
Since bad guys were all once people, the only way to guarantee
shooting all the bad guys is to shoot everyone.
Then there shall be no bad guys left to harm the innocents.
In more palatable terms, the only way to guarantee bad guys never
harm innocents is to quarantine or imprison everyone.
This is not what a police officer does.

The
real point is that if we want a world with safe driving where nobody ever
gets hurt while driving then we should force everyone to take public
transportation.
A self-driving car is public transportation scaled down so that there are no crowds per effective unit.
This is not what driving is about.

The
real point is that intelligence is not about saving lives, nor about being
accurate, nor fast, nor efficient.
Stating that because a machine is fast and accurate and saves lives it is therefore smart and intelligent is a straw-man argument. The latter does not follow from the former.
It can be useful to build something fast and accurate and precise.
But that is not intelligence.
Those are the wrong KPIs for defining and setting the goal of
intelligence.