Intro to Theory

Computational Neuroscience, Fall 2022

Larry Abbott, Ken Miller, Ashok Litwin-Kumar, Stefano Fusi, Sean Escola

TAs: Elom Amematsro, Ho Yin Chau, David Clark, Zhenrui Liao

Meetings: Tuesdays & Thursdays, 2:00-3:30 pm (JLG L5-084)

Text - Theoretical Neuroscience by P. Dayan and L.F. Abbott (MIT Press)



September

6         (Larry) Introduction to Course and to Computational Neuroscience

8         (Larry)  Electrical Properties of Neurons, Integrate-and-Fire Model

13       (Larry) Numerical Methods, Filtering (Assignment 1)

15       (Larry) The Hodgkin-Huxley Model

20       (Larry) Types of Neuron Models and Networks (Assignment 2)    

21        Assignment 1 Due 

22       (Stefano) Adaptation, Synapses, Synaptic Plasticity

27       (Sean) Generalized Linear Models

28       Assignment 2 Due

29       (Ken) Linear Algebra I (Assignment 3)


October

4         (Ken) Linear Algebra II

6         (Ken) PCA and Dimensionality Reduction

11        (Ken) Rate Networks/E-I networks I (Assignment 4)             

12        Assignment 3 Due

13       (Ken) Rate Networks/E-I networks II

18       (Ken) Unsupervised/Hebbian Learning, Developmental Models (Assignment 5)

19        Assignment 4 Due

20       (Ashok) Introduction to Probability, Encoding, Decoding

25       (Ashok) Decoding, Fisher Information I

26        Assignment 5 Due

27        (Ashok) Decoding, Fisher Information II (Assignment 6)


November

1         (Ashok) Information Theory

3         (Ashok) Optimization I (Assignment 7)

8         Holiday

9         Assignment 6 Due

10       (Ashok) Optimization II

15       (Stefano) The Perceptron (Assignment 8)

16       Assignment 7 Due

17       (Stefano) Multilayer Perceptrons and Mixed Selectivity

22       (Stefano) Deep Learning (Assignment 9)

23       Assignment 8 Due

24       Holiday  

29      (Sean) Learning in Recurrent Networks        


December

1       (Stefano) Continual Learning and Catastrophic Forgetting

6       (Stefano) Reinforcement Learning (Assignment 10)

7        Assignment 9 Due

8        (Larry) Course Wrap-Up

14      Assignment 10 Due


Introduction to Theoretical Neuroscience (Fall 2021)

Introduction to Theoretical Neuroscience (Spring 2021)

Introduction to Theoretical Neuroscience (Spring 2020)

Introduction to Theoretical Neuroscience (Spring 2019)

Introduction to Theoretical Neuroscience (Spring 2018)

Mathematical Tools

Mathematical Tools for Theoretical Neuroscience (NBHV GU4359)


Spring 2022

Class Assistants: Dan Tyulmankov ([email protected]), Zhenrui Liao ([email protected]), Ching Fang ([email protected]), Jack Lindsey ([email protected]), Amol Pasarkar ([email protected])

Faculty contact: Prof. Ken Miller ([email protected])

Time: Tuesdays, Thursdays 10:10 - 11:25am
Place: L8-084
Webpage: CourseWorks (announcements, assignments, readings)

Description: An introduction to mathematical concepts used in theoretical neuroscience, intended to provide the minimal requisite background for NBHV G4360, Introduction to Theoretical Neuroscience. The target audience is students with limited mathematical background who want to rapidly acquire the vocabulary and basic mathematical skills for studying theoretical neuroscience, or who wish to gain deeper exposure to mathematical concepts than NBHV G4360 offers. Topics include single- and multivariable calculus, linear algebra, differential equations, signals and systems, and probability. Examples and applications are drawn primarily from theoretical and computational neuroscience.

Prerequisites: Basic prior exposure to trigonometry, calculus, and vector/matrix operations at the high school level.


  • Undergraduate and graduate students: Must register** on SSOL.
  • All others: Please fill out this form.

(**If you’re only interested in attending a subset of lectures, register Pass/Fail and contact Dan) 


Mathematical Tools for Theoretical Neuroscience (Spring 2021)

Mathematical Tools for Theoretical Neuroscience (Spring 2020)

Computational Statistics

Computational Statistics (Stat GR6104), Spring 2022

This is a Ph.D.-level course in computational statistics. A link to the most recent previous iteration of this course is here.

Note: instructor permission is required to take this class for students outside of the Statistics Ph.D. program.

Time: Tu 2:10-3:40pm
Place: Zoom for a while, then JLG L7-119
Professor: Liam Paninski; Email: liam at stat dot columbia dot edu. Office hours by appointment.

Course goals (partially adapted from the preface of Givens and Hoeting's book): Computation plays a central role in modern statistics and machine learning. This course aims to cover the topics needed to develop a broad working knowledge of modern computational statistics. We seek a practical understanding of how and why existing methods work, enabling effective use of modern statistical methods. Achieving these goals requires familiarity with diverse topics in statistical computing, computational statistics, computer science, and numerical analysis. Our choice of topics reflects our view of what is central to this evolving field and what will be interesting and useful. A key theme is scalability to high-dimensional problems, which are of central interest in many recent applications.
Some important topics will be omitted because high-quality solutions are already available in most software. For example, the generation of pseudo-random numbers is a classic topic, but the methods built into standard software packages will suffice for our needs. On the other hand, we will spend some time on classical numerical linear algebra, because choosing the right method for solving a linear equation (for example) can have a huge impact on the time it takes to solve a problem in practice, particularly when there is special structure that we can exploit.
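
As an illustration of that last point, here is a minimal sketch (assuming NumPy and SciPy; not course code) comparing a generic dense solver with a structure-exploiting banded solver on a tridiagonal system:

    # Solve a tridiagonal system two ways: dense O(n^3) vs. banded O(n).
    import numpy as np
    from scipy.linalg import solve_banded

    n = 2000
    main = 2.0 * np.ones(n)        # main diagonal
    off = -1.0 * np.ones(n - 1)    # sub- and superdiagonals
    b = np.random.randn(n)

    # Generic dense solve: builds and factors the full n-by-n matrix.
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    x_dense = np.linalg.solve(A, b)

    # Banded solve: stores only the three diagonals.
    ab = np.zeros((3, n))
    ab[0, 1:] = off     # superdiagonal
    ab[1, :] = main     # main diagonal
    ab[2, :-1] = off    # subdiagonal
    x_banded = solve_banded((1, 1), ab, b)

    assert np.allclose(x_dense, x_banded)

The answers agree, but the banded version never forms or factors the full matrix, which is exactly the kind of exploitable structure at issue here.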

Audience: The course will be aimed at first- and second-year students in the Statistics Ph.D. program. Students from other departments or programs are welcome, space permitting; instructor permission required.

Background: The level of mathematics expected does not extend much beyond standard calculus and linear algebra. Breadth of mathematical training is more helpful than depth; we prefer to focus on the big picture of how algorithms work and to sweep under the rug some of the nitty-gritty numerical details. The expected level of statistics is equivalent to that obtained by a graduate student in his or her first year of study of the theory of statistics and probability. An understanding of maximum likelihood methods, Bayesian methods, elementary asymptotic theory, Markov chains, and linear models is most important.

Programming: With respect to computer programming, good students can learn as they go. We will largely forgo language-specific examples, algorithms, and coding; I won't be teaching much programming per se, focusing instead on the overarching ideas and techniques. For the exercises and projects, I recommend choosing a high-level, interactive package that permits flexible design of graphical displays and includes supporting statistics and probability functions, e.g., R, Python, or MATLAB.

Evaluation: Final grades will be based on class participation and a student project.

Topics:

Deterministic optimization
- Newton-Raphson, conjugate gradients, preconditioning, quasi-Newton methods, Fisher scoring, EM and its variants (a minimal Newton-Raphson sketch follows this list)
- Numerical recipes for linear algebra: matrix inverse, LU, Cholesky decompositions, low-rank updates, SVD, banded matrices, Toeplitz matrices and the FFT, Kronecker products (separable matrices), sparse matrix solvers
- Convex analysis: convex functions, duality, KKT conditions, interior point methods, projected gradients, augmented Lagrangian methods, convex relaxations
- Applications: support vector machines, splines, Gaussian processes, isotonic regression, LASSO and LARS regression
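
To make the first item concrete, here is a minimal, illustrative sketch (not course code) of Newton-Raphson for the logistic-regression MLE; for this model the observed and expected information coincide, so Newton-Raphson and Fisher scoring give the same update:

    import numpy as np

    def logistic_mle(X, y, n_iter=25, tol=1e-10):
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))  # fitted probabilities
            grad = X.T @ (y - p)                 # score vector
            W = p * (1.0 - p)                    # IRLS weights
            H = X.T @ (X * W[:, None])           # Fisher information
            step = np.linalg.solve(H, grad)      # one Newton step
            beta = beta + step
            if np.max(np.abs(step)) < tol:       # quadratic convergence near the MLE
                break
        return beta

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([-0.5, 1.0, 2.0]))))
    print(logistic_mle(X, y))  # lands near (-0.5, 1.0, 2.0)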

Graphical models: dynamic programming, hidden Markov models, forward-backward algorithm, Kalman filter, Markov random fields
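
For flavor, a minimal, illustrative forward pass for a discrete HMM (not course code); per-step normalization avoids underflow, and the log-likelihood is the sum of the log normalizers:

    import numpy as np

    def hmm_forward(pi, A, B, obs):
        # pi: initial distribution (K,); A: transitions (K, K);
        # B: emission probabilities (K, M); obs: observation indices (T,)
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))
        loglik = 0.0
        for t in range(T):
            a = pi * B[:, obs[t]] if t == 0 else (alpha[t - 1] @ A) * B[:, obs[t]]
            c = a.sum()          # normalizer: p(obs_t | obs_1, ..., obs_{t-1})
            alpha[t] = a / c     # scaled forward message
            loglik += np.log(c)
        return alpha, loglik

    pi = np.array([0.5, 0.5])
    A = np.array([[0.9, 0.1], [0.2, 0.8]])
    B = np.array([[0.8, 0.2], [0.3, 0.7]])
    print(hmm_forward(pi, A, B, np.array([0, 1, 1, 0]))[1])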

Stochastic optimization: Robbins-Monro and Kiefer-Wolfowitz algorithms, simulated annealing, stochastic gradient methods
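
A minimal Robbins-Monro style stochastic-gradient sketch for streaming least squares (illustrative, not course code); the step sizes a_t = a0/(1+t) satisfy the classical conditions (their sum diverges, the sum of their squares converges):

    import numpy as np

    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0])
    w = np.zeros(2)
    a0 = 1.0
    for t in range(50000):
        x = rng.standard_normal(2)                  # one streaming data point
        y = x @ w_true + 0.1 * rng.standard_normal()
        grad = (x @ w - y) * x                      # gradient of 0.5 * (x'w - y)^2
        w -= (a0 / (1.0 + t)) * grad                # Robbins-Monro step sizes
    print(w)  # close to w_true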

Deterministic integration: Gaussian quadrature, quasi-Monte Carlo. Application: expectation propagation
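
A minimal illustration of Gaussian quadrature (not course code): Gauss-Hermite nodes and weights target the weight function exp(-x^2), so computing E[g(Z)] for Z ~ N(0, 1) needs the change of variables z = sqrt(2) x:

    import numpy as np

    nodes, weights = np.polynomial.hermite.hermgauss(20)
    g = np.cosh                                         # E[cosh(Z)] = exp(1/2)
    approx = (weights * g(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)
    print(approx, np.exp(0.5))                          # agree to near machine precision

Twenty nodes give an essentially exact answer here; compare the enormous number of samples plain Monte Carlo would need for similar accuracy in one dimension.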

Monte Carlo methods
- Rejection sampling, importance sampling, variance reduction methods (Rao-Blackwellization, stratified sampling)
- MCMC methods: Gibbs sampling, Metropolis-Hastings, Langevin methods, Hamiltonian Monte Carlo, slice sampling. Implementation issues: burn-in, monitoring convergence (a minimal Metropolis sketch follows this list)
- Sequential Monte Carlo (particle filtering)
- Variational and stochastic variational inference
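
A minimal random-walk Metropolis sketch (illustrative, not course code), targeting a standard normal; note that only the unnormalized log density is needed:

    import numpy as np

    def log_target(x):
        return -0.5 * x ** 2                      # log N(0, 1), up to a constant

    rng = np.random.default_rng(2)
    x, chain = 0.0, []
    for _ in range(50000):
        prop = x + 0.8 * rng.standard_normal()    # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                              # accept; otherwise keep current x
        chain.append(x)
    samples = np.array(chain[5000:])              # discard burn-in
    print(samples.mean(), samples.var())          # roughly 0 and 1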

References:

Givens and Hoeting (2005), Computational Statistics
Robert and Casella (2004), Monte Carlo Statistical Methods
Boyd and Vandenberghe (2004), Convex Optimization
Press et al., Numerical Recipes
Sun and Yuan (2006), Optimization Theory and Methods
Fletcher (2000), Practical Methods of Optimization
Searle (2006), Matrix Algebra Useful for Statistics
Spall (2003), Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control
Shewchuk (1994), An Introduction to the Conjugate Gradient Method Without the Agonizing Pain
Boyd et al. (2011), Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

Advanced Theory

Spring 2022

Meetings: Tuesdays, 10:00 am

Location: Jerome L Greene Science Center, Room L7-119

For the Zoom link, contact [email protected]


Methods for decoding and interpreting neural data

  • Feb 01 Methods for calculating receptive fields (Ji Xia) Material

  • Feb 08 Information theory (Jeff Johnston)

  • Feb 15 cancelled

Neural representations

  • Feb 22 Geometry of abstractions (theory session) (Valeria Fascianelli) slides

  • Mar 01 Geometry of abstractions (hands-on session) (Lorenzo Posani)

Network dynamics

  • Mar 08 Solving strongly nonlinear problems with the Homotopy Analysis Method (Serena Di Santo) notes, notebook

  • Mar 29 Forgetting in attractor networks (Samuel Muscinelli)

  • Apr 05 Mean-field models of network dynamics (Alessandro Sanzeni + Mario Dipoppa)

Dynamics of learning

  • Apr 12 Learning dynamics in feedforward neural networks (Manuel Beiran + Rainer Engelken)

  • Apr 19 Learning dynamics in recurrent neural networks (Manuel Beiran + Rainer Engelken)


  • Apr 26 Introduction to some causality issues in neuroscience (Laureline Logiaco)

  • May 3 Causality and latent variable models (Josh Glaser)

  • May 10 Student presentations

  • May 17 Bonus session: Many methods, one problem: modern inference techniques as applied to linear regression (Juri Minxha)

Advanced Theory Seminar (Summer/Fall 2020)
Advanced Theory Seminar (Spring 2020) website
Advanced Theory Seminar (Spring 2019) website

Statistical analysis of neural data

Statistical analysis of neural data (GR8201), Fall 2021

This is a Ph.D.-level topics course in statistical analysis of neural data. Students from statistics, neuroscience, and engineering are all welcome to attend. A link to the last iteration of this course is here.

Time: F 10-11:30
Place: JLG L8-084
Professor: Liam Paninski; Office: Zoom. Email: [email protected]. Office hours by appointment.


Prerequisite: A good working knowledge of basic statistical concepts (likelihood, Bayes' rule, Poisson processes, Markov chains, Gaussian random vectors), including especially linear-algebraic concepts related to regression and principal components analysis, is necessary. No previous experience with neural data is required.

Evaluation: Final grades will be based on class participation and a student project. Additional informal exercises will be suggested, but not required. The project can involve either the implementation and justification of a novel analysis technique, or a standard analysis applied to a novel data set. Students can work in pairs or alone (if you work in pairs, of course, the project has to be twice as impressive). See this page for some links to available datasets; or talk to other students in the class, many of whom have collected their own datasets.

Course goals: We will introduce a number of advanced statistical techniques relevant in neuroscience. Each technique will be illustrated via application to problems in neuroscience. The focus will be on the analysis of single and multiple spike train and calcium imaging data, with a few applications to analyzing intracellular voltage and dendritic imaging data. Note that this class will not focus on MRI or EEG data. A brief list of statistical concepts and corresponding neuroscience applications is below.
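
As a small taste of the kind of computation involved (a purely illustrative sketch, not course material): simulating spike trains from an inhomogeneous Poisson neuron and estimating its firing rate with a peri-stimulus time histogram (PSTH):

    import numpy as np

    dt, n_trials = 0.001, 200                               # 1 ms bins, 200 trials
    t = np.arange(0.0, 1.0, dt)                             # one 1 s trial
    rate = 20 + 15 * np.sin(2 * np.pi * 3 * t)              # true firing rate (Hz)
    spikes = np.random.rand(n_trials, t.size) < rate * dt   # Bernoulli approximation
    psth = spikes.mean(axis=0) / dt                         # trial-averaged rate (Hz)
    # Per-bin estimates are noisy; in practice one smooths the PSTH or adds trials.
    print(np.abs(psth - rate).mean())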
