Modules / Lectures
All lectures are available for download in MP4 format; PDF transcripts are unavailable.

1. Introduction
2. Operations on a Corpus
3. Probability and NLP
4. Vector Space Models
5. Sequence Learning
6. Machine Translation
7. Preprocessing
8. Statistical Properties of Words - Part 01
9. Statistical Properties of Words - Part 02
10. Statistical Properties of Words - Part 03
11. Vector Space Models for NLP
12. Document Similarity - Demo, Inverted Index, Exercise
13. Vector Representation of Words
14. Contextual Understanding of Text
15. Co-occurrence Matrix, n-grams
16. Collocations, Dense Word Vectors
17. SVD, Dimensionality Reduction, Demo
18. Query Processing
19. Topic Modeling
20. Examples for Word Prediction
21. Introduction to Probability in the Context of NLP
22. Joint and Conditional Probabilities, Independence with Examples
23. The Definition of a Probabilistic Language Model
24. Chain Rule and Markov Assumption
25. Generative Models
26. Bigram and Trigram Language Models - Peeking inside the Model Building
27. Out-of-Vocabulary Words and the Curse of Dimensionality
28. Exercise
29. Naive Bayes, Classification
30. Machine Learning, Perceptron, Linear Separability
31. Linear Models for Classification
32. Biological Neural Network
33. Perceptron
34. Perceptron Learning
35. Logical XOR
36. Activation Functions
37. Gradient Descent
38. Feedforward and Backpropagation Neural Network
39. Why Word2Vec?
40. What Are CBOW and Skip-Gram Models?
41. One-Word Learning Architecture
42. Forward Pass for Word2Vec
43. Matrix Operations Explained
44. CBOW and Skip-Gram Models
45. Building a Skip-Gram Model Using Python
46. Reduction of Complexity - Sub-sampling, Negative Sampling
47. Binary Tree, Hierarchical Softmax
48. Mapping the Output Layer to Softmax
49. Updating the Weights Using Hierarchical Softmax
50. Discussion on the Results Obtained from Word2Vec
51. Recap and Introduction
52. ANN as a Language Model and Its Limitations
53. Sequence Learning and Its Applications
54. Introduction to Recurrent Neural Networks
55. Unrolled RNN
56. RNN-Based Language Model
57. BPTT - Forward Pass
58. BPTT - Derivatives for W, V, and U
59. BPTT - Exploding and Vanishing Gradients
60. LSTM
61. Truncated BPTT
62. GRU
63. Introduction and Historical Approaches to Machine Translation
64. What Is SMT?
65. Noisy Channel Model, Bayes Rule, Language Model
66. Translation Model, Alignment Variables
67. Alignments Again!
68. IBM Model 2