Computer Science Department

CS791S: Neural Networks (Fall 99)

  • Meets: TR 1:00 - 2:15 pm. (344 SEM)

  • Instructor: Dr. George Bebis

  • Text:

    M. Hagan, H. Demuth, and M. Beale, Neural Network Design, PWS Publishing Company, 1996.

  • Other helpful texts:

    • S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice Hall, 1999.
    • K. Mehrotra, C. Mohan, and S. Ranka, Elements of Artificial Neural Networks, MIT Press, 1997.
    • C. Looney, Pattern Recognition Using Neural Networks, Oxford University Press, 1997.
    • C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
    • J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.

  • Other useful information:

    Course Description

    Neural networks provide a model of computation drastically different from that of traditional computers. Typically, neural networks are not explicitly programmed to perform a given task; rather, they learn the task from examples of desired input/output behavior. The networks automatically generalize their processing knowledge to previously unseen situations, and they perform well even when the input is noisy, incomplete, or inaccurate. These properties make them well suited to modeling tasks in ill-structured domains such as face recognition, speech recognition, and motor control.
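    As a minimal illustration of this learn-from-examples idea (a sketch, not taken from the course materials), a single perceptron can be trained on the logical AND function; the network is never told the AND rule, only shown input/output examples:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), t in samples:
            # Hard-limit output: fire if the weighted sum exceeds zero
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = t - y
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Examples of desired input/output behavior for AND
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in and_data]
```

    Because AND is linearly separable, the perceptron learning rule (covered in Chapter 4 of the text) is guaranteed to converge on a separating set of weights.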

    This course will cover basic neural network architectures and learning algorithms for applications in pattern recognition, image processing, and computer vision. Three forms of learning will be introduced (supervised, unsupervised, and reinforcement learning), and applications of each will be discussed. Students will have a chance to try out several of these models on practical problems.

    This is an advanced-level course suited for graduate students in Computer Science and Engineering. It is primarily intended for students interested in doing research in the areas of Neural Networks and Computer Vision. There are many open problems in these areas suitable for investigation by Master's students, leading to a professional paper or master's thesis. The course consists of three main parts:

  • Lectures (instructor)
  • Paper presentations/discussion (students)
  • Semester project (students)

    Course Outline (tentative)

  • Introduction (Chapter 1)
  • Neuron Model and Network Architectures (Chapter 2)
  • An Illustrative Example (Chapter 3)
  • Perceptron Learning Rule (Chapter 4)
  • Background on Linear Algebra (Chapters 5 & 6)
  • Supervised Hebbian Learning (Chapter 7)
  • Background on performance surfaces and optimization (Chapters 8 & 9)
  • Widrow-Hoff Learning (Chapter 10)
  • Backpropagation (Chapters 11 & 12)
  • Associative Learning (Chapter 13)
  • Competitive Networks (Chapter 14)
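    The unsupervised, winner-take-all style of learning covered in the Competitive Networks chapter can be sketched as follows (an illustrative toy example, not code from the text): each prototype vector is pulled toward the inputs it wins, so the prototypes settle near the centers of clusters in the data.

```python
def competitive_learning(data, protos, epochs=30, lr=0.2):
    """Winner-take-all competitive learning (illustrative sketch).
    data: list of input tuples; protos: initial prototype vectors."""
    protos = [list(p) for p in protos]
    for _ in range(epochs):
        for x in data:
            # Squared Euclidean distance from each prototype to the input
            dists = [sum((p_i - x_i) ** 2 for p_i, x_i in zip(p, x))
                     for p in protos]
            win = dists.index(min(dists))  # the winning (closest) unit
            # Update only the winner, moving it toward the input
            protos[win] = [p_i + lr * (x_i - p_i)
                           for p_i, x_i in zip(protos[win], x)]
    return protos

# Two well-separated clusters; each prototype should settle near one of them
data = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1),
        (5.0, 5.1), (5.1, 5.0), (4.9, 5.1)]
protos = competitive_learning(data, protos=[data[0], data[-1]])
```

    Note that no target outputs appear anywhere: the network discovers the cluster structure on its own, which is the defining feature of unsupervised learning.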

    Course Prerequisites

    The course requires that the students have previously taken courses on programming (CS201, CS202), data structures (CS308), calculus (MATH 181, MATH 182), and linear algebra (MATH 330). Courses on image processing, computer vision, machine learning, artificial intelligence, and genetic algorithms are recommended, but not required.

    Exams and Assignments

    Grading will be based on a midterm exam, paper presentations, participation in class discussion, and a semester project. The paper presentations will be related to the project. Each student is expected to complete a semester project, which must be carried out on an individual basis (no group projects will be allowed). Ideally, the project will target a specific problem relevant to the graduate student's own research interests (assuming they are Computer Vision related). Some neural network applications suitable for a semester project (the relevant papers can be found here) include:

  • Object Recognition Using Neural Networks
  • Face Recognition Using Neural Networks
  • Face Detection Using Neural Networks
  • Fingerprint Recognition Using Neural Networks
  • Eye-gaze Estimation and Tracking Using Neural Networks
  • Face-pose Estimation and Tracking Using Neural Networks


    For the project, students are encouraged to use the Stuttgart Neural Network Simulator (SNNS). SNNS is a popular simulator for neural networks developed at the Institute for Parallel and Distributed High Performance Systems (IPVR) at the University of Stuttgart. It contains many popular neural network algorithms such as Backpropagation, Cascade Correlation, Quickpropagation, LVQ, SOM, etc. (please refer to this page for more information; also, a copy of the user manual has been placed on reserve in DeLaMare Library; the on-line version of the manual can be found here). It has a very nice user interface and can also generate C code that you can incorporate into your own programs.

    SNNS has been installed on the Sun machines under the directory /image/SNNSv4.1. Before using the simulator, you need to include the following command in your .cshrc file:

    set path=($path /image/SNNSv4.1/xgui/bin/sun_solaris /image/SNNSv4.1/tools/bin/sun_solaris)

    To invoke the simulator, simply enter snns.


    Department of Computer Science, University of Nevada, Reno, NV 89557
    Page created and maintained by: Dr. George Bebis