The Implications of Technology Investing in Neural Networks (2013)
Google recently hired Geoffrey Hinton, a computer science professor who studies neural-network machine learning models.
Hinton is best known for an earlier landmark paper, Rumelhart, Hinton, and Williams (1986). As every graduate student in both neural systems and machine learning knows, that paper demonstrated how to train a self-adaptive multi-layer perceptron with the backpropagation learning algorithm. It helped revive interest in neural networks as applied machine learning models inspired by findings from neuroscience.
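To make the idea concrete, here is a minimal sketch, not the paper's original formulation, of a two-layer perceptron trained by backpropagation on the classic XOR problem. The hidden-layer width, learning rate, and iteration count are illustrative choices, not values from the 1986 paper:

```python
import numpy as np

# XOR: a problem a single-layer perceptron cannot solve,
# but a two-layer network trained by backpropagation can.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back toward the input.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

# Predictions should approach [0, 1, 1, 0] after training.
print(np.round(out.ravel(), 2))
```

The key insight the paper popularized is the backward pass: the chain rule lets the output error assign credit to hidden-layer weights, which is what makes multi-layer networks trainable at all.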
What does this hire say about Google's bet on the future? Today, neural networks work behind the scenes in the research and development of mundane business and commercial functions.
One popular example is IBM's Watson, which competed live on Jeopardy!. Neural networks formed the top layer in a highly heterogeneous mixture of expert systems that extracted the question and answer intents that drive the contest.
The researchers credited neural networks with finally enabling Watson to produce acceptably human-like accuracy. Another example is Microsoft's Bing translator and voice recognition system. Microsoft collaborated with researchers at Carnegie Mellon University and the University of Toronto on neural networks to reduce speech translation error rates from 1 in 5 words to about 1 in 8. Holding even simple question-and-answer conversations or translating speech are complex tasks that only humans are known to perform at human levels. Replicating hallmark human activity may require replicating hallmark human brains.
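Those figures can be restated as word error rates. Assuming "1 in 5" and "1 in 8" refer to errors per word, the improvement works out as follows:

```python
# Word error rates implied by the quoted figures.
before = 1 / 5   # 1 error in 5 words -> 20% error rate
after = 1 / 8    # 1 error in 8 words -> 12.5% error rate
relative_reduction = (before - after) / before
print(f"{relative_reduction:.1%}")  # 37.5% relative error reduction
```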
Does Google believe neural networks can replicate human brains on some
level? Brains mean very different things to different scientists. The neural networks that model them mean a great many more.
Deep learning neural nets and Restricted Boltzmann Machines self-program after several passes through a training environment. They then "fixate" or "equilibrate" and get "released into production," depending on the user's needs. A very interesting property of these networks, especially for computer scientists, is their O(c) production runtime with respect to the data: in this notation, the per-input processing time is constant and is not affected by the size of the data. It is fast. To achieve this remarkable feat, the Hinton-esque deep learning neural network requires the operator to pre-define the network's size and structure. The user essentially pre-allocates a fixed memory and CPU budget, and must also pre-designate a highly controlled training environment for the network.
The network automatically self-designs its rules within these
constraints.
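As a hedged illustration of that constant production cost, assume a frozen two-layer network with hypothetical layer sizes. Each prediction then performs the same fixed number of multiply-adds, no matter how large the training corpus was:

```python
import numpy as np

# Hypothetical frozen network: sizes fixed up front by the operator.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(100, 50))  # input -> hidden weights (frozen)
W2 = rng.normal(size=(50, 10))   # hidden -> output weights (frozen)

def predict(x):
    """One forward pass: 100*50 + 50*10 multiply-adds, every call."""
    h = np.tanh(x @ W1)
    return h @ W2

# The per-example cost is set by the architecture alone, whether the
# network was trained on a thousand examples or a billion.
out = predict(rng.normal(size=100))
print(out.shape)  # (10,)
```

This is the sense in which the runtime is "O(c)": the data size determines how long training takes, but once the weights are frozen, inference cost depends only on the pre-allocated network dimensions.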
Finally, the network gets frozen and never learns again. In effect, these networks are literally brainwashed. In a closet. With brain clamps. And then lobotomized. The typical data and setup methodology (e.g., Hinton et al., 2006) demonstrates this dance of data mining, validation, and selection. Of course, these are machine learning models. Not only is this procedure acceptable, it is highly appropriate for machine products. It generates results. It generates them consistently. It generates them fast. These are computer science goals. The question is not whether Hinton-esque deep learning neural networks can transform datasets; it is whether human brains and human-level tasks are about transforming datasets. By hiring Hinton, Google signals either that it believes they are, or that replicating the brain is not its goal.