Course Curriculum and Syllabus for M.Tech Program in Signal Processing and Machine Learning
(Program Code: M0204)

This new M.Tech. programme commenced in July 2019.

Semester I:

Code Course Name L-T-P Credits
EE 520 Signal Processing Algorithms and Architectures 3-0-0 6
EE 523 Introduction to Machine Learning 3-0-0 6
EE 524 Machine Learning Laboratory 0-0-3 3
EE 590 Linear Algebra and Optimization 3-0-0 6
EE 591 Probability and Stochastic Processes 3-0-0 6
EE 6/7XX Elective 1 3-0-0 6


Semester II:

Code Course Name L-T-P Credits
EE 521 Digital Signal Processors Lab 0-0-3 3
EE 522 Statistical Signal Processing 3-0-0 6
EE 525 Advanced Topics in Machine Learning 3-0-0 6
EE 6/7XX Elective 2 3-0-0 6
EE 6/7XX Elective 3 3-0-0 6
EE 6/7XX Elective 4 3-0-0 6


Semester III:

Code Course Name L-T-P Credits
EE 698 Project Phase-I 0-0-24 24


Semester IV:

Code Course Name L-T-P Credits
EE 699 Project Phase-II 0-0-24 24


Syllabus:

Signal Processing Algorithms and Architectures (EE 520)
L-T-P-C : 3-0-0-6
Course Contents:

Orthogonal transforms: DFT, DCT and Haar; Properties of DFT; Computation of DFT: FFT algorithms and structures, Decimation in time, Decimation in frequency; Linear convolution using DFT; Digital filter structures: Basic FIR/IIR filter structures, FIR/IIR Cascaded lattice structures, Parallel allpass realization of IIR transfer functions, Sine-cosine generator; Computational complexity of filter structures; Multirate signal processing: Basic structures for sampling rate conversion, Decimators and Interpolators; Multistage design of interpolators and decimators; Polyphase decomposition and FIR structures; Computationally efficient sampling rate converters; Arbitrary sampling rate converters based on interpolation algorithms: Lagrange interpolation, Spline interpolation; Quadrature mirror filter banks; Conditions for perfect reconstruction; Applications in subband coding; Introduction to Digital Signal Processors: Computational characteristics of DSP algorithms and applications; Techniques for enhancing computational throughput: Harvard architecture, parallelism, pipelining, dedicated multiplier, split ALU and barrel shifter; TMS320C64xx architecture: CPU data paths and control, general-purpose register files, register file cross paths, memory load and store paths, data address paths, parallel operations, resource constraints.
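
The decimation-in-time FFT listed above is compact enough to illustrate directly. The following is a minimal Python sketch of the recursive radix-2 form, assuming a power-of-two input length; the function name fft_dit is ours, not part of the syllabus.

    import cmath

    def fft_dit(x):
        """Recursive radix-2 decimation-in-time FFT (length must be 2^k)."""
        n = len(x)
        if n == 1:
            return list(x)
        # Decimate in time: split into even- and odd-indexed subsequences.
        even = fft_dit(x[0::2])
        odd = fft_dit(x[1::2])
        # Butterfly combine with twiddle factors W_N^k = exp(-j*2*pi*k/N).
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + w
            out[k + n // 2] = even[k] - w
        return out

    # A length-8 impulse transforms to the all-ones sequence.
    print(fft_dit([1, 0, 0, 0, 0, 0, 0, 0]))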

Texts/References:
  1. R. Chassaing and D. Reay, Digital Signal Processing and Applications with the TMS320C6713 and TMS320C6416 DSK, Wiley, 2008.
  2. S. K. Mitra, Digital Signal Processing: A Computer Based Approach, 3rd Edn., TMH, 2008.
  3. J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 4th Edn., Pearson Prentice Hall, 2007.


Introduction to Machine Learning (EE 523)
L-T-P-C : 3-0-0-6
Course Contents:

Introduction to learning: Supervised and Unsupervised, Generative and Discriminative models, Classification and Regression problems; Feature selection, dimensionality reduction using PCA; Bayesian classification, Discriminative classifiers: Perceptrons, Multi-layer perceptron, RBF Networks, Decision Trees, Support Vector Machines; Unsupervised learning: EM Algorithm, K-Means clustering, DBSCAN, Hierarchical Agglomerative Clustering, Density estimation in learning, Mean-shift clustering; Classification performance analysis; Ensemble methods: Ensemble strategies, boosting and bagging; Sequence Models: Hidden Markov Models, Probabilistic Suffix Trees; Applications and Case studies.
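
As a concrete instance of the unsupervised methods above, here is a minimal Python sketch of K-means clustering (Lloyd's algorithm); the naive seeding and the synthetic two-blob data are illustrative choices, not prescribed by the course.

    import numpy as np

    def kmeans(X, k, n_iter=100):
        """Lloyd's algorithm: cluster the rows of X into k groups."""
        centroids = X[:k].copy()           # naive seeding, for brevity
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            # Assignment step: label each point with its nearest centroid.
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # Update step: move each centroid to the mean of its cluster.
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        return centroids, labels

    # Two well-separated synthetic blobs; expect one centroid near each mean.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    print(kmeans(X, 2)[0])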

Texts/References:
  1. E. Alpaydin, Introduction to Machine Learning, 3rd Edn., Prentice Hall (India), 2015.
  2. R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, 2nd Edn., Wiley India, 2007.
  3. C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer, 2006.
  4. S. O. Haykin, Neural Networks and Learning Machines, 3rd Edn., Pearson Education (India), 2016.


Machine Learning Laboratory (EE 524)
L-T-P-C : 0-0-3-3
Course Contents:

Texts/References:


Linear Algebra and Optimization (EE 590)
L-T-P-C : 3-0-0-6
Course Contents:

Linear Algebra - vector spaces, linear independence, bases and dimension, linear maps and matrices, eigenvalues, invariant subspaces, inner products, norms, orthonormal bases, spectral theorem, isometries, polar and singular value decomposition, operators on real and complex vector spaces, characteristic polynomial, minimal polynomial; optimization - sequences and limits, derivative matrix, level sets and gradients, Taylor series; unconstrained optimization - necessary and sufficient conditions for optima, convex sets, convex functions, optima of convex functions, steepest descent, Newton and quasi-Newton methods, conjugate direction methods; constrained optimization - linear and non-linear constraints, equality and inequality constraints, optimality conditions, constrained convex optimization, projected gradient methods, penalty methods.
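
To make the steepest-descent topic concrete, the following Python sketch minimizes the convex quadratic f(x) = (1/2)x^T Q x - b^T x with a fixed step size; Q, b and the step size are arbitrary illustrative values.

    import numpy as np

    # Symmetric positive definite Q makes f strictly convex with a unique
    # minimizer x* = Q^{-1} b.
    Q = np.array([[3.0, 0.5],
                  [0.5, 1.0]])
    b = np.array([1.0, -2.0])

    x = np.zeros(2)
    alpha = 0.2                       # fixed step; needs alpha < 2/lambda_max(Q)
    for _ in range(200):
        grad = Q @ x - b              # gradient of f(x) = 0.5 x'Qx - b'x
        x = x - alpha * grad          # steepest-descent update

    print(x)                          # iterate after 200 steps
    print(np.linalg.solve(Q, b))      # exact minimizer, for comparison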

Texts/References:
  1. S. Axler, Linear Algebra Done Right, 2nd Edn., Springer, 1997.
  2. E. K. P. Chong and S. H. Zak, An Introduction to Optimization, 2nd Edn., Wiley India Pvt. Ltd., 2010.
  3. G. Strang, Linear Algebra and Its Applications, Nelson Engineering, 2007.
  4. D. C. Lay, Linear Algebra and Its Applications, 3rd Edn., Pearson, 2002.
  5. D. G. Luenberger and Y. Ye, Linear and Nonlinear Programming, 3rd Edn., Springer, 2010.


Probability and Stochastic Processes (EE 591)
L-T-P-C : 3-0-0-6
Course Contents:

Axiomatic definitions of probability; conditional probability, independence and Bayes theorem, continuity property of probabilities, Borel-Cantelli Lemma; random variable: probability distribution, density and mass functions, functions of a random variable; expectation, characteristic and moment-generating functions; Chebyshev, Markov and Chernoff bounds; jointly distributed random variables: joint distribution and density functions, joint moments, conditional distributions and expectations, functions of random variables; random vector: mean vector and covariance matrix, Gaussian random vectors; sequence of random variables: almost-sure and mean-square convergences, convergences in probability and in distribution, laws of large numbers, central limit theorem; random process: probabilistic structure of a random process; mean, autocorrelation and autocovariance functions; stationarity: strict-sense stationary and wide-sense stationary (WSS) processes; time averages and ergodicity; spectral representation of a real WSS process: power spectral density, cross-power spectral density; linear time-invariant systems with a WSS process as input: time- and frequency-domain analyses; examples of random processes: white noise, Gaussian, Poisson and Markov processes.
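
The convergence results above lend themselves to quick simulation. Here is a minimal Python sketch of the central limit theorem: standardized sample means of i.i.d. Uniform(0,1) variables behave like a standard normal. The sample size and trial count are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 250, 10000
    # Uniform(0,1) has mean 1/2 and variance 1/12, so the standardized
    # sample mean is (Xbar - 1/2) / sqrt((1/12)/n).
    means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    z = (means - 0.5) / np.sqrt((1.0 / 12.0) / n)

    print(z.mean(), z.std())          # close to 0 and 1
    print(np.mean(np.abs(z) < 1.96))  # close to 0.95, as for N(0, 1)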

Texts/References:
  1. H. Stark and J. W. Woods, Probability and Random Processes with Applications to Signal Processing, Prentice Hall, 2002.
  2. A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, 4th Edn., McGraw-Hill, 2002.
  3. B. Hajek, An Exploration of Random Processes for Engineers, ECE534 Course Notes, 2011. http://www.ifp.illinois.edu/~hajek/Papers/randomprocesses.html


Digital Signal Processors Lab (EE 521)
L-T-P-C : 0-0-3-3
Course Contents:

Fundamentals: Familiarization with Code Composer Studio; development cycle on the TMS320C64xx kit; generation of signals, Fourier representation and z-transform, sampling theorem in time and frequency domains, convolution and correlation, DFT and FFT; FIR and IIR filters; sampling rate converters. Applications: Adaptive filters and experiments on communication such as generation of an n-tuple PN sequence, generation of a white noise sequence using the PN sequence, and restoration of a sinusoidal signal embedded in white noise by Wiener filtering; speech and multimedia applications.
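
The PN-sequence experiment above can be prototyped on a host machine before porting to the DSP kit. Below is a minimal Python sketch using a linear feedback shift register; the register length and tap positions are illustrative choices.

    def lfsr_pn(taps, state, n):
        """Generate n PN bits from a Fibonacci LFSR with the given taps."""
        out = []
        for _ in range(n):
            out.append(state[-1])            # output the last stage
            fb = 0
            for t in taps:
                fb ^= state[t - 1]           # XOR of the tapped stages
            state = [fb] + state[:-1]        # shift, feeding fb back in
        return out

    # Degree-5 primitive feedback, so the output is an m-sequence of
    # period 2^5 - 1 = 31.
    bits = lfsr_pn(taps=[5, 2], state=[1, 0, 0, 0, 0], n=62)
    print(bits[:31] == bits[31:62])          # True: the sequence repeats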

Texts/References:
  1. R. Chassaing and D. Reay, Digital Signal Processing and Applications with the TMS320C6713 and TMS320C6416 DSK, Wiley, 2008.
  2. TMS320C64x Technical Overview, Texas Instruments, Dallas, TX, 2001.
  3. TMS320C6000 Peripherals Reference Guide, Texas Instruments, Dallas, TX, 2001.
  4. TMS320C6000 CPU and Instruction Set Reference Guide, Texas Instruments, Dallas, TX, 2000.
  5. IEEE Signal Processing Magazine issues: Oct. 1988, Jan. 1989, July 1997, Jan. 1998, March 1998 and March 2000.


Statistical Signal Processing (EE 522)
L-T-P-C : 3-0-0-6
Course Contents:

Stationary processes: Strict-sense stationarity, Wide-sense stationarity and Cyclostationarity for continuous-time and discrete-time random processes; Discrete-time Markov processes, Spectral analysis of wide-sense stationary processes, Synthesis models for white noise; Signal synthesis models; Detection of signals in noisy environments: Setting up hypotheses for varied application scenarios, Simple and composite hypotheses, including both the multi-state problem and uncertainties in the observation model parameters; Bayesian, minimax and Neyman-Pearson detection criteria; Introduction to sufficient statistics; Channel sensing, signal detection, signal bandwidth estimation; Bayesian, Maximum Likelihood (ML) and Minimum Mean Square Error (MMSE) estimation procedures; Fisher information and the Cramer-Rao bound; Applications to Kalman filter tracking; Introduction to adaptive filters: Least Mean Squares (LMS) algorithm, Recursive Least Squares (RLS).
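
The LMS algorithm listed at the end admits a compact illustration. The following minimal Python sketch identifies an unknown FIR system from its noisy output; the system coefficients, filter length and step size mu are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    h_true = np.array([0.5, -0.3, 0.1])      # "unknown" FIR system
    N, L, mu = 5000, 3, 0.01                 # samples, filter taps, step size

    x = rng.standard_normal(N)               # white input signal
    d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)

    w = np.zeros(L)                          # adaptive weight vector
    for n in range(L, N):
        u = x[n:n - L:-1]                    # L most recent input samples
        e = d[n] - w @ u                     # a priori estimation error
        w = w + mu * e * u                   # LMS update

    print(w)                                 # should be close to h_true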

Texts/References:
  1. H. L. Van Trees, Detection, Estimation and Modulation Theory, Part I, John Wiley, 1968.
  2. H. V. Poor, An Introduction to Signal Detection and Estimation, 2nd Edn., Springer, 1994.
  3. S. J. Orfanidis, Optimum Signal Processing, 2nd Edn., 2007 republication of the 1988 McGraw-Hill edition.


Advanced Topics in Machine Learning (EE 525)
L-T-P-C : 3-0-0-6
Course Contents:

Kernel Methods: Review of SVM, Classification and Regression using SVM, Properties of Kernels, Non-Mercer Kernels, Kernel Selection, Multiple Kernel Learning, Kernel PCA; Probabilistic Graphical Models: Bayesian networks, Undirected models, Bayesian learning, structure learning, Inference on graphical models, exponential families; Deep Learning: Review of Multi-layer Perceptrons, Backpropagation Algorithms, Stochastic Gradient Descent, Loss and Activation functions, Regularization strategies, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory Units (LSTM), Autoencoders; Reinforcement Learning: Introduction to Reinforcement Learning, Multi-armed Bandit Problem, Finite Markov Decision Processes, Dynamic Programming, Eligibility Traces, Policy Gradient Methods, Deep Q-Learning; Applications and Case Studies.
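
As a small taste of the reinforcement-learning unit, here is a minimal Python sketch of the epsilon-greedy strategy for the multi-armed bandit problem; the arm means, epsilon and horizon are arbitrary illustrative values.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.8])   # unknown to the agent
    k, eps, steps = len(true_means), 0.1, 10000

    Q = np.zeros(k)                          # action-value estimates
    counts = np.zeros(k)
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit the best estimate.
        a = rng.integers(k) if rng.random() < eps else int(Q.argmax())
        r = rng.normal(true_means[a], 1.0)   # noisy reward from arm a
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]       # incremental sample mean

    print(Q)                                 # should approach true_means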

Texts/References:
  1. J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
  2. D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
  3. I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2017.
  4. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.