
= Juyang Weng =

Dr. Juyang Weng is a controversial figure and pioneering researcher in artificial intelligence (AI). He has made fundamental contributions to the two major schools of AI, symbolic AI and connectionist AI (connectionist methods were not widely regarded as AI until the recent wave of deep learning). Symbolic AI methods use symbols handcrafted by humans for a given task, which means that humans must understand the task before they can implement it. Connectionist AI methods use neural networks, but neural networks have been criticized as black boxes. Weng sought to bridge the two schools by making neural networks transparent (i.e., not black boxes) while exhibiting the kind of clear logic that the symbolic school handcrafts. This aims at the best of both worlds, because the logic in such neural networks is emergent, meaning it is generated autonomously from the network's own experience.

Weng, together with Dr. Narendra Ahuja and Dr. Thomas S. Huang, published Cresceptron, the first deep learning network for 3-D. Unfortunately, although many later works, such as HMAX and Convolutional Neural Networks (CNN) for the 3-D world, used major ideas of Cresceptron, few cited it as they should have (see "Cresceptron: First Deep Learning Network for 3-D" below for more details). Despite sharing Cresceptron's major ideas, CNNs typically use a learning method called error backpropagation. Dr. Weng claims that error backpropagation for neural networks, especially high-dimensional ones, amounts to data falsification (i.e., using test sets as training sets).

In contrast, Cresceptron used an unsupervised learning method known as Hebbian learning. The idea of Hebbian learning is that if an input line and the output line of a neuron fire together, the weight on that input line is strengthened. Hebb's original mechanism is often summarized as "neurons that fire together, wire together." However, Hebbian learning alone does not give neural networks the explicit logic that a typical AI system requires, which is why many CNNs are regarded as little more than pattern recognition machines.
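A minimal sketch of such a Hebbian update in Python follows. It is illustrative only, not Cresceptron's exact update rule, and the normalization step is an assumption added to keep the weights bounded:

<syntaxhighlight lang="python">
import numpy as np

def hebbian_update(w, x, learning_rate=0.1):
    """Strengthen weights on input lines that are co-active with the output."""
    y = np.dot(w, x)                   # postsynaptic response
    w = w + learning_rate * y * x      # Hebb: co-activity strengthens the weight
    return w / np.linalg.norm(w)       # normalization (assumed) keeps w bounded

w = np.array([0.5, 0.5])
x = np.array([1.0, 0.0])               # only the first input line fires
for _ in range(10):
    w = hebbian_update(w, x)
print(w)                               # weight on the active line grows toward 1
</syntaxhighlight>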

Weng reached his goal of bridging the two AI schools through what he called Autonomous Programming for General Purposes (APFGP). Under APFGP, a naturally or artificially intelligent machine constructs its own programs inside the "skull" through autonomous interactions with the physical world, which may or may not include human teachers. Dr. Weng has proved mathematically that Developmental Networks (DN) can learn to become the emergent controller of an emergent Universal Turing Machine (Universal TM). The traditional Universal TM, first published by Alan Turing in 1936, is symbolic, and it is a model for modern general-purpose computers. Therefore, a DN is able to learn to program itself for general purposes in the sense of a Universal TM. While a traditional Universal TM cannot program itself because it requires a human programmer to write a program on its input tape, an emergent DN is able to generate a neural network inside its "skull" that acts as a Universal TM. The programs that such an emergent DN learns come from the physical world; the DN integrates them into the current network one piece at a time, incrementally, from simple to complex, and in real time, much as a brain integrates rules from the physical world. Before Weng, APFGP had not been raised, either for natural intelligence or for artificial intelligence. APFGP means that human programmers are not in the loop of task-specific programming of the rules inside the "skull", just as the brain does not require parents to program the brain networks of their child; parents only interact with the child as part of the physical environment. In other words, machine learning is fully automated across the lifetime. This theory, together with the rigorous mathematical proof it requires, was first published in Weng (arXiv).

Dr. Weng is a professor in the Department of Computer Science and Engineering at Michigan State University (MSU). He is also a co-founder of the Embodied Intelligence Laboratory at MSU and a faculty member of the MSU Cognitive Science Program and the MSU Neuroscience Program. He is the founder of GENISAMA LLC, a startup that focuses on consumer products for brain-inspired machine learning systems.

== Early life and biography ==
Weng was born in Shanghai, China, and graduated with a Bachelor of Science degree in computer science from Fudan University in 1982. He then moved to the United States, receiving a Master of Science degree and a Ph.D. in the same field from the University of Illinois at Urbana-Champaign in 1985 and 1989, respectively. He joined Michigan State University as a professor in the Department of Computer Science and Engineering in 1992. He is currently an active member of the MSU Cognitive Science Program and the MSU Neuroscience Program. Weng has published over two hundred works over the span of his research career and continues to conduct studies that merge computer science, cognitive science, and brain science.

== Cresceptron: First Deep Learning Network for 3-D ==
Weng is an early developer of deep learning, and his research contributions heavily motivated and influenced the development of the field. His most widely known contribution is Cresceptron[1][2], the first published deep learning network that learns three-dimensional (3-D) features without a 3-D model. Although under-recognized because later works failed to cite it, Cresceptron introduced new departures (listed in order of significance) that anticipated much of current deep learning. It was the first deep learning network that:


* recognizes 3-D objects from 2-D images,
* performs not only 3-D object detection but also 3-D recognition,
* performs segmentation of cluttered 3-D scenes,
* uses what is now called max-pooling, a widely used discretization process that applies a max filter to sub-regions of a 2-D input to reduce dimensionality and prevent overfitting (a minimal sketch follows this list).
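The following is a minimal max-pooling sketch in Python with NumPy. It is an assumed, generic 2x2, stride-2 implementation for illustration, not code from Cresceptron itself:

<syntaxhighlight lang="python">
import numpy as np

def max_pool_2x2(image):
    """2x2 max-pooling with stride 2: keep the maximum of each
    non-overlapping 2x2 sub-region, halving each spatial dimension."""
    h, w = image.shape
    trimmed = image[:h - h % 2, :w - w % 2]               # drop any odd row/col
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 3, 2],
              [2, 2, 4, 0]])
print(max_pool_2x2(x))
# [[4 5]
#  [2 4]]
</syntaxhighlight>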

In addition, with the rise of what are now widely known as neural networks, Weng questioned properties of backpropagation, a widely used algorithm for supervised learning of neural networks. He argues that backpropagation in practice requires using test sets as training sets, which he characterizes as data falsification. Weng avoided this issue in Cresceptron by using Hebbian learning.

== Autonomous Mental Development (AMD) ==
During his early research, Weng perceived a limitation in the developmental mechanisms of deep learning networks. In 2001, he was the first to argue that the learning methods of programs need to be task-nonspecific. He named this idea Autonomous Mental Development (AMD). AMD enables a learning program or algorithm to learn open-ended tasks, starting from simple tasks and progressing to more difficult ones.

This idea was presented at the NSF/DARPA-funded Workshop on Development and Learning at Michigan State University in 2000.

== Developmental Networks (DN) ==
Developmental Networks are embodied in Where-What Networks (WWN). They represent a new kind of emergent representation inspired by biological brains. In these neural representations, a neuron is not just a feature of sensory inputs but also a feature of muscle inputs; by muscle inputs, Weng means context, like a state in a finite automaton. This appears to be a fundamental shift in how we understand what a brain network does: the brain is not a pattern recognition machine that recognizes only input patterns; rather, it relates attended context in space and time as a state in the muscles. That state is combined with the current sensory inputs to give rise to the next state (and the next action, as a special case of the next state). Dr. Weng used DNs to formulate how a symbolic finite automaton (FA) can become emergent, with all symbols emerging as vectors (neuronal firing patterns). He has proved that a DN can learn any FA. Furthermore, he proved that the control of any TM, including a Universal TM, is a finite automaton; therefore, a DN can learn any TM, including any Universal TM.
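To make the state-plus-input idea concrete, below is a highly simplified Python sketch of a network that memorizes an FA's transitions as (state, input) -> next-state associations with winner-take-all matching. All names here (TinyDN, learn, step) are hypothetical; this is not Weng's actual DN algorithm, only an illustration of the emergent-FA idea under those assumptions:

<syntaxhighlight lang="python">
import numpy as np

class TinyDN:
    """Toy associative network: hidden neurons memorize (state, input)
    firing patterns and vote for the next state (winner-take-all)."""
    def __init__(self):
        self.Y = []   # hidden neurons: stored (state, input) patterns
        self.Z = []   # next-state pattern each hidden neuron votes for

    def learn(self, state, x, next_state):
        """Memorize one observed transition as a new hidden neuron."""
        self.Y.append(np.concatenate([state, x]))
        self.Z.append(next_state)

    def step(self, state, x):
        """Fire the best-matching hidden neuron and emit its next state."""
        probe = np.concatenate([state, x])
        scores = [y @ probe for y in self.Y]
        return self.Z[int(np.argmax(scores))]

# One-hot codes (emergent "symbols" as firing patterns) for two states and two inputs.
q0, q1 = np.eye(2)
a, b = np.eye(2)

dn = TinyDN()
dn.learn(q0, a, q1)    # teach delta(q0, a) = q1
dn.learn(q1, b, q0)    # teach delta(q1, b) = q0
print(dn.step(q0, a))  # -> [0. 1.], i.e. q1's firing pattern
</syntaxhighlight>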

== Auto-Programming for General Purposes (APFGP) ==
During his research, Weng found that current neural networks in artificial intelligence are limited in their use cases, because different tasks require different learning methods. He therefore began conducting research on auto-programming for general purposes (APFGP), Universal Turing Machines (Universal TM), and developmental networks that open the possibility of brain-like auto-programming for general purposes.

After 14 years of research starting in 2001, Weng established the idea of APFGP. He argues that the current understanding of natural intelligence (NI) is fundamentally crippled with respect to how programs develop, and APFGP aims to solve this by blending the theory of NI with mathematical proofs and experiments for the three bottleneck subjects of AI: vision, audition, and natural language acquisition. Under APFGP, programs learn basic tasks during their early developmental stage and then learn progressively more complicated tasks. An arXiv paper was published to organize these findings.

His research has shown such potential in Universal Turing Machines (Universal TM), GENISAMA Turing Machines (GENISAMA TM), and, when only pattern recognition is considered, Neural Networks (NN).