In a sense, artificial intelligence, neuroscience, and electrophysics have a natural, almost predestined relationship. The rapid development of electrophysics drove new developments in electrical neurophysiology, and after the major breakthroughs of electrical information engineering, each advance in electrical neurophysiology inspired fresh anticipation of artificial intelligence. That anticipation grew all the more urgent wherever electrical information physics and neuroscience came together.
When the mysterious and novel electromagnetic wave was discovered, its capacity for information transfer attracted the attention of European and American science enthusiasts, who strove to be the first to perform scientific experiments and who together formed a long-running scientific trend. Driven by their efforts, in only a hundred years this new area became a frontier of scientific adventure, full of mystery, creation, and competition.
The development of electromagnetic theory and technology constantly brought fresh inspiration and imagination to the world of neurophysiology. As electromagnetic waves were progressively applied as a communication tool for language expression, signal transmission, signal sensing and conversion, signal storage, and so on, the supposition about electronic signals in organisms became clearer and more definite. Scholars looked ever more often to the animal nervous system to see how its electrical signaling system detected, transmitted, preserved, and exchanged information.
For example, a very prevalent notion at that time was that nerve signals might travel as fast as an electric current in a wire, and scientists therefore regarded the speed of the nerve impulse as too great to measure. They believed that such a high-speed signal was the physiological basis on which animals and humans realize complex behaviors and make intricate decisions. Today, of course, we know that in a typical cortical neuron the nerve impulse travels at about 1.5 meters per second, slightly slower than a person riding a bike. Even in the long myelinated axons that run through the spinal cord, where impulses travel much faster, the top speed is merely about a hundred meters per second, nowhere near the speed of a current in a wire.
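The gap between those speeds can be made concrete with a back-of-the-envelope calculation. The figures below are the rough values cited above (the wire speed is a common order-of-magnitude estimate, roughly two-thirds the speed of light, not a measured figure from the text):

```python
# Signal delay over a one-meter path at each of the rough speeds cited.
SPEEDS_M_PER_S = {
    "typical cortical neuron": 1.5,        # ~1.5 m/s, per the text
    "fast myelinated axon": 100.0,         # ~100 m/s upper bound
    "electric current in a wire": 2.0e8,   # illustrative: ~2/3 light speed
}

DISTANCE_M = 1.0  # order of magnitude of a long spinal pathway

delays_ms = {name: DISTANCE_M / v * 1000.0 for name, v in SPEEDS_M_PER_S.items()}
for name, d in delays_ms.items():
    print(f"{name}: {d:.6f} ms")
```

The slow fiber needs over half a second to cover one meter, the fast myelinated axon ten milliseconds, and the wire a few nanoseconds; the early supposition overestimated nerve conduction by many orders of magnitude.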
Then electrophysiological tests uncovered much early evidence of the brain's electrical signaling function, such as the evoked potentials in specific cortical areas and the brain waves accompanying human consciousness. Especially important was the invention of the electronic computer in the 1940s, perhaps one of the greatest technological inventions in the history of electronic information physics. The invention was originally a representation of mathematical logic operations based on electrical technology (in essence, an abacus is also a representation of mathematical operations based on some structural technique). Because the new technology let people perform large-scale mathematical computations at high speed, many thought of the computer as a kind of human brain and vividly called it the "electrical brain." Later, the information-processor model became the rage. People enjoyed talking about the analogy between the brain and the computer, thinking that the brain was essentially a high-performance super information processor.
These exciting findings further stimulated optimism and turned imaginations toward artificial intelligence. Electronic engineering and neuroscience alike were steeped in an atmosphere of analogical imagination and optimistic expectation.
For instance, Alan M. Turing, the father of the computer, believed a "thinking machine" would become a reality by the end of the twentieth century, and the debate on artificial intelligence has been ongoing ever since. It was against this historical background that the brain-simulating model was born. The study of nerve and brain, which belongs to life science, was once again influenced by a revolution in electrical technology, which belongs to physics: life science research borrowed ideas from the achievements of nonliving physics. This is exactly the essential difference between functional biology and adaptational biology, as discussed in DSS volumes I and II.
Ultimately, the ideal of simulating brain function, stimulated by the development of the computer, led to the technological experiments of artificial neural networks. A group of people closely tied to computer studies initiated and developed the ideal of a precise artificial neural network. Although they were interested in nerves, their methodology was undoubtedly rooted in physics and computer technology; some had even been motivated to study neural network models in the hope of building more efficient computers. Unfortunately, their limited knowledge of life science kept them from truly understanding the essence of life.
The very term "artificial neural network" shows how far neural simulation technology has come over the past two hundred years. Here, the distance between neuroscience and artificial intelligence seems as thin as a sheet of paper.
Warren McCulloch and Walter Pitts made one of the initial attempts, in 1943. Their research showed that a "network" consisting of very simple units linked together could carry out logic and arithmetic functions. If each unit of the network is designed somewhat like a simplified neuron, the network can simulate a neuron's characteristics. On this basis, one can simulate the motor response of some part of a nervous system, and so explore and test hypotheses about brain and neural mechanisms.
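The McCulloch-Pitts idea can be sketched in a few lines: each unit fires (outputs 1) when the weighted sum of its binary inputs reaches a fixed threshold, and such units compose into logic functions. The function names below are ours, chosen for illustration:

```python
# A McCulloch-Pitts unit: fires (outputs 1) when the weighted sum of its
# binary inputs reaches a fixed threshold; inhibition is modeled here with
# a negative weight.
def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Elementary logic functions, each realized by a single unit:
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

# XOR cannot be computed by one unit, but a small two-layer network of the
# same units suffices -- linking simple units yields richer functions:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```

This is exactly the sense in which a network of very simple neuron-like units "accounts for" logic and arithmetic: the functions emerge from the wiring, not from any complexity inside the units themselves.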
Although this kind of simple network cannot match the complexity of a brain, it may help us understand the behavior of neurons. The researchers believed that characteristics displayed in a simple network would also show up, as general features, in a more complex network; simple networks, for instance, may serve as a reference for the study of brain circuit functions. Furthermore, they believed that if a neural network model proved similar to the real situation, they could study the characteristics of the model's components and their associations, and compare the results with predictions of the animal's neural responses. This may be more repeatable and controllable than performing the same experiment on animals.
From the 1970s to the 1980s, the study of brain neural network models was in its most vigorous period. Optimistic scholars believed that, with the aid of these novel methods, they would gain a new understanding of the internal mechanisms of the nerves, which had been unreadable in the past. Although the neural network model still had a long way to go, it had made a good start.
It is a pity, however, that the study of artificial neural networks ran into the same dilemma that the study of electrical information decoding had met, and it received a great deal of criticism and doubt.
In fact, the actual behavior of neurons is very different from the behavior of the units in a network model. The simulated unit's performance is linear, while its prototype's is nonlinear, which makes the understanding gained through simulation irrelevant to prediction. In the valid pulse responses of real neurons, the electrochemical processes of axons, synapses, and dendrites unavoidably introduce time delays and signal transformations; the designers of neural network models deliberately leave these out, yet those very properties may be the neurons' most significant advantage.
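The contrast can be illustrated with a toy sketch (ours, not a biophysical model; the threshold and delay values are arbitrary placeholders): a linear model unit obeys superposition, while a neuron-like unit is silent below a firing threshold, saturates above it, and delivers its output only after a conduction delay.

```python
import math

def linear_unit(x, w=1.0):
    # Linear: doubling the input exactly doubles the output.
    return w * x

def neuron_like(x, threshold=1.0, delay_ms=1.5):
    # Nonlinear: no response below threshold, a saturating (sigmoidal)
    # rate above it. The delay travels with the signal rather than being
    # an ignorable nuisance.
    rate = 0.0 if x < threshold else 1.0 / (1.0 + math.exp(-(x - threshold)))
    return rate, delay_ms
```

For the linear unit, `linear_unit(a) + linear_unit(b)` always equals `linear_unit(a + b)`; for the neuron-like unit, two subthreshold inputs that are each silent alone can fire when combined, so no amount of linear analysis predicts its behavior.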
Many experimenters admitted that artificial neural network experiments could not reveal the reality of the brain. In an artificial neural network, for example, signals can travel backward through the connections, which is completely impossible in real neurons. This difference is so serious that the result is no longer a simulation at all.
The artificial neural network shares its historical background with other kinds of artificial intelligence experiments: all are based on the experimental methods and theories of functional physics. Since the beginning of the twentieth century, neurologists have repeatedly been tempted by brain models based on physical information networks. They first likened encephalic nervous tissue to a telephone switchboard, whose function was to receive and transmit signals; thereafter they likened it to a computer, adding a signal-processing part. In short, the design of an artificial neural network cannot escape the functionalist framework even where it is unlike the computer. In 1747, the Frenchman La Mettrie placed life on a purely mechanistic philosophical basis in his work Man a Machine, and this framework has remained the essence of functionalism ever since.
Every development of automatic machines (robots, computers, and so on) made people believe all the more that a human is essentially just a super automatic machine. People tried to reconcile the essence of life with that of the automatic machine within a unified theory, such as control theory or systematology. The study of the neural network model rested on a sturdy analogical methodology: each major development of physics, especially a revolution in electrophysical concepts and technology, could set off a new round of leaps forward in analogical neurology.
From the modern perspective of adaptational biology, "life" and "machine" are two different propositions. Even if the proposition "life is similar to a machine" is tenable, it does not prove the proposition "life is exactly a machine." Life does not run purely on mechanics, nor purely on physics and chemistry, although animals can walk and do have electrical and chemical activities in vivo. It is therefore a very naive method to force an analogy between the working principles of life and those of a chemical, electrical, or mechanical machine. This is a typical functionalist conception.
The creation of the electronic machine took humanity two hundred years; the creation of the animal life machine took the earth billions of years. It is exceedingly difficult for a machine two hundred years in the making to match something shaped over billions of years, and the latter must hold more mysteries and tricks than the former. Deep-structure theory has revealed why the nature of life differs from that of a functional machine.
According to deep-structure theory, an innovative artificial intelligence should be based on the analysis of life's connotation meaning structure and completely break away from the present functional theories, such as control theory and information theory. We have discussed the concept of connotation meaning: all kinds of organic molecules and their chemical effects can be endowed with vitastate relations and thereby become life. These vitastate relations are exactly the connotation meaning structure of the molecules and their chemical activities, and this brings about the formation of a brand-new substance: life.
The study of connotation meaning can be traced back to the formative years of evolutionary theory. Darwin discussed the connotation meaning problem of higher animals in his later work, and the evolutionist theory of natural selection and adaptationism strongly imply that meaning relationships exist in all kinds of organisms, lower and higher. Moreover, human beings have created numerous tool systems, including machine industry, automation and control, artificial intelligence, and so on, in which meaning relationships can also be identified, such as the human-made conditional selections built into program instructions.
For example, information science contains a debate between "narrow information" and "broad information." The former studies information sources, the description and transmission of information channels, signal modulation and demodulation, and so forth; the latter studies the semantics and pragmatics of communication. Information theory has successfully theorized and modeled the former problem, but it runs into trouble with the latter. The study of artificial intelligence is a typical case of this problem: as soon as an artificially intelligent machine is built to understand semantics, it encounters difficulty. It is noteworthy that moving artificial intelligence toward the understanding of semantics is actually a leap from the physical stage to the biotic stage; once it touches the life meaning structure, a huge gap immediately opens.
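The blindness of "narrow information" to meaning can be demonstrated directly. Shannon's entropy measures only symbol statistics, so a meaningful sentence and a scrambled permutation of the same letters carry identical narrow information (the example sentence below is our own illustration):

```python
# Shannon entropy measures symbol statistics only -- it is blind to
# semantics, which is the "broad information" problem.
from collections import Counter
from math import log2

def shannon_entropy(text):
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

meaningful = "the brain is not a computer"
scrambled  = "".join(sorted(meaningful))  # same letters, meaning destroyed

print(f"meaningful: {shannon_entropy(meaningful):.4f} bits/char")
print(f"scrambled : {shannon_entropy(scrambled):.4f} bits/char")
```

The two entropies coincide exactly, because the measure sees only letter frequencies; everything that distinguishes the sentence from the jumble, its semantics, lies outside the theory.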
In short, traditional artificial intelligence is built on physics and the technical system of functionalism. Artificial intelligence should be redefined in the sense of deep-structure theory, and the theoretical basis of its design needs to be rebuilt.