IMPORTANT NOTE: This file is no longer updated (after 1994)!
For up-to-date file descriptions, abstracts, and publication details, please
see the file IDIAP-NN.bib, also available by WWW and FTP in this directory
(/pub/papers/neural).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Last update: December 1994.

This is a collection of abstracts of publications of the neural computation
group at IDIAP. It can be obtained by anonymous ftp:

  ftp Maya.IDIAP.CH   (or: ftp 192.33.221.1)
  path: pub/papers/neural/ABSTRACTS

and can also be accessed through the IDIAP Neural Network homepage:

  http://www.idiap.ch/html/idiap-networks.html

Pre-prints of most of these publications are also available by ftp and can be
found in the directories pub/papers/neural and pub/papers/graph-theory.
These pre-prints are also accessible via the World Wide Web:

  http://www.idiap.ch/html/idiap-othertext.html

For each of the IDIAP pre-prints, the authors, title, a short description or
abstract, a list of keywords, and a contact electronic mail address are given
below.

_________________________________________________________________________
Files: fiesler.formalization.ps and fiesler.formalization.ps.Z
Title: Neural Network Classification and Formalization
Author: E. Fiesler
Abstract:
In order to assist the field of neural networks in maturing, a formalization
and a solid foundation are essential. Additionally, to permit the
introduction of formal proofs, it is essential to have an all-encompassing
formal mathematical definition of a neural network. This publication offers
a neural network formalization consisting of a topological taxonomy, a
uniform nomenclature, and an accompanying consistent mnemonic notation.
Supported by this formalization, both a flexible hierarchical and a universal
mathematical definition are presented.
Keywords: formalization, standardization, terminology, nomenclature,
definition, mnemonic notation, topological taxonomy, neural network
classification, neural network determination
Contact: EFiesler@IDIAP.CH
_________________________________________________________________________
File: fiesler.discretization.ps.Z
Title: A Weight Discretization Paradigm for Optical Neural Networks
Authors: E. Fiesler, A. Choudry, and H. J. Caulfield
Note: This paper describes a weight discretization method for
backpropagation-based neural networks. A more complete paper is currently in
its final stage of preparation.
Keywords: weight discretization, weight quantization, optical computing,
electronic neural networks, optical neural networks, backpropagation,
hardware implementation
Contact: EFiesler@IDIAP.CH
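As a hedged illustration only (this is not the method described in the paper;
the number of levels and the weight range below are arbitrary assumptions),
the following Python sketch shows one common way weight discretization is
combined with backpropagation: a full-precision master copy of the weights is
updated, while a copy quantized to a small set of equidistant levels is what
limited-precision (optical or electronic) hardware would actually use.

  import numpy as np

  def quantize(weights, n_levels=8, w_max=1.0):
      """Map each weight to the nearest of n_levels equidistant values in [-w_max, w_max]."""
      levels = np.linspace(-w_max, w_max, n_levels)
      return levels[np.argmin(np.abs(weights[..., None] - levels), axis=-1)]

  rng = np.random.default_rng(0)
  master_w = rng.uniform(-1.0, 1.0, size=(4, 3))  # full-precision master copy
  gradient = rng.normal(size=master_w.shape)      # stand-in for a backpropagation gradient
  master_w -= 0.1 * gradient                      # update the master copy ...
  w_discrete = quantize(master_w)                 # ... and re-quantize for the hardware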
_________________________________________________________________________
File: saxena.optical-nn.ps.Z
Title: Adaptive Multilayer Optical Neural Network with Optical Thresholding
Authors: I. Saxena and E. Fiesler
Note: Invited paper for Optical Engineering.
Keywords: multilayer optical neural network, optical computing, liquid
crystal light valve, LCLV, activation function, transfer characteristic,
response curve, steepness, gain, sigmoidal curve fit, weight discretization,
optical multilayer neural network, thresholding, optical neural network,
massive parallelism, modifiable optical interconnect, all-optical
feed-forward neural computer, simulation, benchmark, backpropagation,
multilayer perceptron, optical thresholding, modular optical neurocomputer,
liquid crystal television screen, LCTV, spatial light modulator, SLM,
dual-wavelength optical computer
Contact: ISaxena@IDIAP.CH
_________________________________________________________________________
Files: fiesler.ontogenic-summary.ps and fiesler.ontogenic-summary.ps.Z
Title: Comparative Bibliography of Ontogenic Neural Networks
Author: E. Fiesler
Abstract:
One of the most powerful aspects of neural networks is their ability to adapt
to problems by changing their interconnection strengths according to a given
learning rule. On the other hand, one of the main drawbacks of neural
networks is the lack of knowledge for determining the topology of the
network, that is, the number of layers and the number of neurons per layer.
A relatively new class of neural networks tries to overcome this problem by
letting the network also adapt its topology to the problem automatically.
These are the so-called ontogenic neural networks. This publication provides
a comparative survey of ontogenic neural networks, accompanied by a large
bibliography.
Keywords: ontogenic neural network, topology modifying neural network,
self-modifying neural network, growing neural network, pruning neural
network, shrinking neural network, neural network topology
Contact: EFiesler@IDIAP.CH
_________________________________________________________________________
File: bellido.w-distribution.ps.Z
Title: Do Backpropagation Trained Neural Networks Have Normal Weight
Distributions?
Authors: I. Bellido and E. Fiesler
Abstract:
Although artificial neural networks are employed in an ever growing variety
of applications, their inner workings are still viewed as a black box, owing
to the complexity of the non-linear dynamics that govern neural network
learning. The key parameters in this learning process are the so-called
interconnection strengths, or weights, of the connections between the
neurons. Because of the lack of data, mathematical approaches for studying
the "inside" of neural networks have to resort to assumptions such as a
Normal distribution of the weight values. In order to better understand what
goes on inside neural networks, a thorough study of the real probability
distribution of the weight values is important. Besides this, knowledge
about weight distributions is also a main ingredient for weight reduction
schemes enabling the creation of partially connected neural networks, and
for network capacity calculations. This paper reports on the findings of an
extensive empirical study of the distributions of weights in backpropagation
neural networks, and formally tests whether the weights of a trained neural
network indeed have a Normal distribution.
Keywords: interconnection strength distribution, weight distribution, Normal
distribution, Gaussian distribution, statistics, neural network complexity,
benchmarks, error backpropagation
Contact: IBM@dit.UPM.Es
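As a hedged illustration of this kind of formal test (the sketch below is not
the authors' experimental setup: the toy data, the network size, and the use
of SciPy's D'Agostino-Pearson normality test are all assumptions), one can
train a small backpropagation network and test its pooled weights for
Normality:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(42)
  X = rng.normal(size=(200, 10))                   # toy inputs, not the paper's benchmarks
  y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

  W1 = rng.normal(scale=0.5, size=(10, 16))        # input -> hidden weights
  W2 = rng.normal(scale=0.5, size=(16, 1))         # hidden -> output weights
  sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

  for _ in range(2000):                            # plain gradient-descent backpropagation
      h = sigmoid(X @ W1)
      out = sigmoid(h @ W2)
      d_out = (out - y) * out * (1 - out)          # squared-error gradient through the sigmoid
      d_h = (d_out @ W2.T) * h * (1 - h)
      W2 -= 0.1 * h.T @ d_out / len(X)
      W1 -= 0.1 * X.T @ d_h / len(X)

  weights = np.concatenate([W1.ravel(), W2.ravel()])
  statistic, p_value = stats.normaltest(weights)   # H0: the weights are Normally distributed
  print("p =", round(float(p_value), 3),
        "-> cannot reject Normality" if p_value > 0.05 else "-> reject Normality at the 5% level")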
_________________________________________________________________________
File: thimm.simulator_comparison.ps.Z
Title: Modular Object-Oriented Neural Network Simulators and Topology
Generalizations
Authors: G. Thimm, R. Grau, and E. Fiesler
Abstract:
A growing number of neural networks are based on topologies that deviate
from the standard fixed first order fully interlayer connected ones.
Although there currently exists a variety of neural network simulators, few
are flexible enough to facilitate substantial topology alterations. Some
novel modular object-oriented neural network simulators promise that
modifications and extensions can be made with minimal effort. Two of these
simulators are described and compared: OpenSimulator (version 3.1) and
Sesame (version 4.5). An extension of these simulators to high-order and
ontogenic neural networks is outlined.
Keywords: neural network simulator, neural network topology, neural network
connectivity, ontogenic neural network, high order neural network, Sigma-Pi
neural network, object-oriented, Sesame, OpenSimulator
Contact: Thimm@IDIAP.CH
_________________________________________________________________________
File: fiesler.minimal.ps.Z
Title: Minimal and High Order Neural Network Topologies
Author: E. Fiesler
Abstract:
One of the main goals of current neural network research consists in finding
the optimal network topology for a given problem. Several methods have been
proposed, but a mutual comparison of the resulting topologies remains a
problem until proven minimal topologies are found. This paper describes a
straightforward method for finding the minimal feed-forward topology for
completely defined problems, based on an extensive search followed by mutual
substitution. This method has been applied to finding the minimal topologies
of the `exclusive OR' problem for the most important interconnection
schemes. Since the number of possible topologies grows very fast with the
size of the network, it is only feasible to analytically determine the
minimal topology for small networks. In general, the smallest topologies can
be obtained using high order neural networks. Small solutions to well known
benchmark problems using high order neural networks are presented. It is
shown that high order networks without a hidden layer can theoretically
compute any Boolean or other bivalued function. More concretely,
constructive algorithms are given for creating both fully connected and
sparsely connected high order neural networks for the computation of an
arbitrary Boolean function.
Keywords: neural network topology, high order neural network, minimal
topology, partially connected neural network, partly connected neural
network, sparse neural network, sigma-pi neural network, neural network
architecture, feed-forward neural network
Contact: EFiesler@IDIAP.CH
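To make the high-order idea concrete (the example below is a hedged
illustration of a sigma-pi unit and not one of the constructions from the
paper), a single second-order neuron with one product term computes the
`exclusive OR' function without any hidden layer:

  def sigma_pi_xor(x1, x2):
      # One second-order (sigma-pi) unit: first-order terms plus the product term x1*x2.
      net = 1.0 * x1 + 1.0 * x2 - 2.0 * (x1 * x2)
      return int(net >= 0.5)                          # hard threshold at 0.5

  for x1 in (0, 1):
      for x2 in (0, 1):
          print(x1, x2, "->", sigma_pi_xor(x1, x2))   # prints the XOR truth table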
_________________________________________________________________________
File: fiesler.max-connectivity.ps.Z
Title: Connectivity Maximization of Layered Neural Networks for Supervised
Learning
Author: E. Fiesler
Abstract:
One of the main problems in current artificial neural network engineering is
the lack of design rules for layered neural network topologies, namely how
many hidden layers and how many neurons per hidden layer to choose for a
neural network. This paper offers a theoretical basis for approaching this
problem. Formally proven theorems are developed that maximize the
interconnection topology of layered neural networks using supervised
learning, so as to obtain a maximum number of interconnections and therefore
a maximum potential storage capacity. The results presented here depend only
on the neural network architecture and are therefore independent of the
learning rule.
Keywords: neural network topology, neural network connectivity, neural
network architecture, neural network capacity
Contact: EFiesler@IDIAP.CH
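For a quick feel of what maximizing the interconnection topology means in
terms of raw connection counts (a hedged side illustration, not a result from
the paper; the layer sizes are arbitrary assumptions), the number of
interlayer connections of a fully interlayer-connected feed-forward network
is simply the sum of the products of consecutive layer sizes:

  def n_connections(layer_sizes):
      # Fully interlayer-connected feed-forward network: sum of n_i * n_(i+1).
      return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

  # Two topologies with 4 inputs, 1 output, and 12 hidden neurons in total.
  print(n_connections([4, 12, 1]))    # one hidden layer of 12:      4*12 + 12*1 = 60
  print(n_connections([4, 6, 6, 1]))  # two hidden layers of 6 each: 4*6 + 6*6 + 6*1 = 66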