# Summary
* [Preface](README.md)
* [Part I: Mathematical Foundations](math/math.md)
  * [Chapter 1: Mathematical Analysis](math/analytic/introduction.md)
    * [Gradient Descent](math/analytic/gradient_descent.md)
    * [Numerical Computation](math/analytic/shu-zhi-ji-suan.md)
    * [The Mathematics of Overfitting and Its Remedies](math/analytic/overfitting.md)
    * [Cross-Validation](math/analytic/cross-validation.md)
    * [Least Squares](math/analytic/least-square.md)
  * [Chapter 2: Probability Theory](math/probability.md)
    * [Overview of Statistical Learning Methods](math/probability/prob-methodology.md)
    * [Maximum Likelihood Estimation](math/probability/mle.md)
    * [Monte Carlo Methods](math/probability/mcmc1.md)
    * [Markov Chains](math/probability/markov-chain.md)
    * [MCMC and Metropolis-Hastings Sampling](math/probability/mcmc-mh.md)
    * [Gibbs Sampling](math/probability/gibbs.md)
  * [Chapter 3: Matrices and Linear Algebra](math/linear-matrix/linear-matrix.md)
* [Part II: Machine Learning](ml/ml.md)
  * [Chapter 4: Machine Learning Basics](ml/pythonml.md)
    * [Python and Its Math Libraries](ml/pythonml/pythonji-qi-shu-xue-ku.md)
    * [Machine Learning Libraries](ml/pythonml/ji-qi-xue-xi-ku.md)
    * [Model Metrics](ml/pythonml/ml-metrics.md)
    * [Generative and Discriminative Models](ml/pythonml/gen-descri.md)
  * [Lecture 6: Data Cleaning and Feature Selection](ml/clean-feature/cleanup-feature.md)
    * [PCA](ml/clean-feature/pca.md)
    * [ICA](ml/clean-feature/ica.md)
    * [One-Hot Encoding](ml/clean-feature/one-hot.md)
    * [scikit-learn PCA](ml/clean-feature/scikit-pca.md)
    * [Linear Discriminant Analysis (LDA)](ml/clean-feature/xian-xing-pan-bie-fen-xi-lda.md)
    * [LDA Dimensionality Reduction with scikit-learn](ml/clean-feature/scikit-lda.md)
    * [Singular Value Decomposition \(SVD\): Principles and Use in Dimensionality Reduction](ml/clean-feature/svd.md)
    * [Locally Linear Embedding \(LLE\) Principles](ml/clean-feature/lle.md)
    * [scikit-learn LLE](ml/clean-feature/scikit-lle.md)
    * [Spark Feature Selection](ml/clean-feature/spark-fselect.md)
    * [Spark Feature Extraction](ml/clean-feature/spark-fextract.md)
  * [Lecture 7: Regression](ml/regression/regression.md)
    * [1. Linear Regression](ml/regression/linear-regression.md)
    * [10. Maximum Entropy Models](ml/regression/max-entropy.md)
    * [11. K-L Divergence](ml/regression/kl.md)
    * [Coordinate Descent and Least Angle Regression](ml/regression/cordinate-angle.md)
    * [Linear Regression Summary](ml/regression/linear-regression-summary.md)
    * [Logistic Regression](ml/regression/logistic.md)
    * [Logistic Regression Summary](ml/regression/logistichui-gui-xiao-jie.md)
  * [Lecture 9: Decision Trees](ml/decisiontree.md)
    * [ID3](ml/decisiontree/id3.md)
    * [C4.5](ml/decisiontree/c45.md)
    * [CART](ml/decisiontree/cart.md)
    * [Summary](ml/decisiontree/summary.md)
    * [Implementation Code](ml/decisiontree/code.md)
  * [Lecture 13: SVM](ml/svm.md)
    * [The Perceptron Model](ml/svm/gan-zhi-ji-mo-xing.md)
    * [Linear SVM](ml/svm/linear-svm.md)
    * [Soft-Margin Maximization](ml/svm/soft-margin-max.md)
    * [Kernel Functions](ml/svm/kernel-method.md)
    * [The SMO Algorithm](ml/svm/smo.md)
    * [SVM Regression](ml/svm/svm-regression.md)
    * [scikit-learn SVM](ml/svm/scikit-learn-svm.md)
    * [Tuning the Gaussian Kernel for SVMs](ml/svm/gaosi-kernel.md)
    * [SVM Implementation Code](ml/svm/svm-code.md)
  * [Ensemble Learning](ml/integrate.md)
    * [AdaBoost Principles](ml/integrate/adaboost.md)
    * [scikit-learn AdaBoost](ml/integrate/scikit-learn-adaboost.md)
    * [Gradient Boosted Decision Trees (GBDT)](ml/integrate/gbdt.md)
    * [scikit-learn GBDT](ml/integrate/scikit-gbdt.md)
    * [Bagging and Random Forests](ml/integrate/random-forest.md)
    * [XGBoost](ml/integrate/xgboost.md)
    * [scikit-learn Random Forests](ml/integrate/scikit-learn-rf.md)
  * [Lecture 15: Clustering](ml/cluster.md)
    * [K-Means](ml/cluster/kmeans.md)
    * [KNN](ml/cluster/KNN.md)
    * [scikit-learn KNN](ml/cluster/knnshi-jian.md)
    * [KNN Code](ml/cluster/knn-code.md)
    * [scikit-learn K-Means](ml/cluster/scikit-k-means.md)
    * [The BIRCH Clustering Algorithm](ml/cluster/birch.md)
    * [scikit-learn BIRCH](ml/cluster/scikit-learn-birch.md)
    * [DBSCAN Density-Based Clustering](ml/cluster/dbscan.md)
    * [scikit-learn DBSCAN](ml/cluster/scikit-learn-dbscan.md)
    * [Spectral Clustering Principles](ml/cluster/spectral.md)
    * [scikit-learn Spectral Clustering](ml/cluster/scikit-spectral.md)
    * [Affinity Propagation](ml/cluster/ap.md)
    * [Gaussian Mixture Models](ml/cluster/gmm.md)
  * [Association Analysis](ml/associative/associative.md)
    * [Canonical Correlation Analysis \(CCA\) Principles](ml/associative/cca.md)
    * [The Apriori Algorithm](ml/associative/apriori.md)
    * [The FP-Tree Algorithm](ml/associative/fptree.md)
    * [The PrefixSpan Algorithm](ml/associative/prefixspan.md)
    * [FP-Tree and PrefixSpan in Spark](ml/associative/spark-fptree-prefixspan.md)
  * [Recommendation Algorithms](ml/recommand/recommand.md)
    * [Matrix Factorization for Collaborative Filtering](ml/recommand/matrix-filter.md)
    * [SimRank for Collaborative Filtering](ml/recommand/simrank.md)
    * [Matrix Factorization Recommendation with Spark](ml/recomand/spark-factor.md)
    * [Factorization Machines for Recommendation](ml/recommand/fm.md)
    * [Meituan's Recommendation Algorithm](ml/recommand/meituan.md)
  * [Lecture 17: The EM Algorithm](ml/em/em.md)
  * [Lecture 19: Bayesian Networks](ml/bayes.md)
    * [Naive Bayes](ml/bayes/po-su-bei-xie-si.md)
    * [scikit-learn Naive Bayes](ml/bayes/scikit-simple-bayes.md)
    * [Naive Bayes in Practice](ml/bayes/simple-bayes-real-use.md)
    * [Naive Bayes Code](ml/bayes/simple-bayes-code.md)
  * [Lecture 21: LDA Topic Models](ml/lda/lda.md)
  * [Lecture 23: Hidden Markov Models (HMM)](ml/hmm/hmm.md)
    * [Evaluating Observation Sequence Probability with the HMM Forward-Backward Algorithm](ml/hmm/hmm-forward-backward.md)
    * [Solving for HMM Parameters with the Baum-Welch Algorithm](ml/hmm/bmwl-hmm.md)
    * [Decoding Hidden State Sequences with the Viterbi Algorithm](ml/hmm/viterb-hmm.md)
    * [Learning Hidden Markov Models with hmmlearn](ml/hmm/hmmlearn.md)
  * [Conditional Random Fields (CRF)](ml/crf/crf.md)
    * [From Random Fields to Linear-Chain CRFs](ml/crf/linear-crf.md)
    * [Evaluating Label Sequence Probability with the Forward-Backward Algorithm](ml/crf/back-forth.md)
    * [Viterbi Decoding](ml/crf/crf-viterbi.md)
* [Natural Language Processing](nlp/nlp.md)
  * [Word Segmentation for Text Mining](nlp/text-mine.md)
  * [The Hashing Trick](nlp/hashtrick.md)
  * [TF-IDF](nlp/tf-idf.md)
  * [Preprocessing Chinese Text for Mining](nlp/preprocessing.md)
  * [Preprocessing English Text for Mining](nlp/english-preprocess.md)
  * [Latent Semantic Indexing \(LSI\)](nlp/lda/lsi.md)
  * [Non-negative Matrix Factorization \(NMF\)](nlp/lda/nmf.md)
  * [LDA Basics](nlp/lda/lda.md)
  * [Solving LDA with Gibbs Sampling](nlp/lda/lda-gibbs.md)
  * [Solving LDA with Variational Inference EM](nlp/lda/vi-em.md)
  * [scikit-learn LDA Topic Models](nlp/lda/scikit-learn-lda.md)
* [Part III: Deep Learning](dl/dl.md)
  * [Deep Learning Layers](dl/layers/layers.md)
    * [Core Layers](dl/layers/core.md)
    * [Convolutional Layers](dl/layers/conv.md)
    * [Pooling Layers](dl/layers/pooling.md)
    * [Locally Connected Layers](dl/layers/lcnn.md)
    * [Recurrent Layers](dl/layers/rnn.md)
    * [Embedding Layers](dl/layers/ebbedded.md)
    * [Merge Layers](dl/layers/merge.md)
    * [Advanced Activation Layers](dl/layers/activation.md)
    * [Normalization Layers](dl/layers/regular.md)
    * [Noise Layers](dl/layers/nosie.md)
    * [Layer Wrappers](dl/layers/wrapper.md)
    * [Custom Layers](dl/layers/userdefine.md)
  * [Lecture 25: Deep Learning](dl/introduction/introduction.md)
    * [Basic Concepts](dl/introduction/ji-ben-gai-nian.md)
    * [Deep Neural Network (DNN) Models and Forward Propagation](dl/introduction/dnn-fp.md)
    * [Deep Neural Network (DNN) Backpropagation \(BP\)](dl/introduction/dnn-bp.md)
    * [Backpropagation](dl/introduction/back-propagation.md)
    * [Backpropagation, Part 2](dl/introduction/READ.md)
    * [Choosing DNN Loss and Activation Functions](dl/introduction/dnn-loss.md)
    * [Regularization for Deep Neural Networks (DNN)](dl/introduction/dnn-normal.md)
    * [References](dl/reference.md)
  * [Lecture 26: Convolutional Neural Networks](dl/cnn/introduction.md)
    * [Convolutional Neural Network \(CNN\) Model Architecture](dl/cnn/cnn-arch.md)
    * [Convolutional Neural Network \(CNN\) Forward Propagation](dl/cnn/cnn-fp.md)
    * [Convolutional Neural Network \(CNN\) Backpropagation](dl/cnn/cnn-bp.md)
  * [Generative Adversarial Networks \(GAN\)](dl/gan/gan.md)
    * [GAN Principles](ml/gan/gan-principle.md)
    * [InfoGAN](dl/gan/infogan.md)
    * [DCGAN](dl/gan/dcgan.md)
    * [VAE](dl/gan/vae.md)
  * [Restricted Boltzmann Machines](dl/rbm/rbm.md)
    * [RBM Code](dl/rbm/rbm-code.md)
  * [RNN](dl/rnn/rnn.md)
    * Bidirectional RNNs
    * Deep \(Bidirectional\) RNNs
    * [LSTM Models and Forward/Backward Propagation](dl/rnn/lstm.md)
    * [Backpropagation Through Time (BPTT)](dl/rnn/bptt.md)
    * [Recurrent Neural Network \(RNN\) Models and Forward/Backward Propagation](dl/rnn/rnn-bptt.md)
  * [Autoencoders](dl/encoder/encoder.md)
    * [Stacked Denoising Autoencoders](dl/encoder/stack-denoise-encoder.md)
    * [Denoising Autoencoders](dl/encoder/denoise-encoder.md)
    * [Sparse Autoencoders](dl/encoder/sparse-autoencoder.md)
* [word2vec](dl/word2vec/word2vec.md)
    * [CBOW and Skip-Gram Model Basics](dl/word2vec/cbow-skip-n.md)
    * [Hierarchical Softmax Models](dl/word2vec/hierarc-softmax.md)
    * [Negative Sampling Models](dl/word2vec/negative-sampling.md)
* [Reinforcement Learning](Dl/reinforcement/reinforcement.md)