Batch_Normalization: Even a larger learning rate is fine
Reading deep learning papers; questions and corrections in the danmaku comments are welcome.
Batch_Normalization: Abstract
Batch_Normalization: Introduction
Batch_Normalization: Convolutional Network
Batch_Normalization: Towards Reducing Internal Covariate Shift
Batch_Normalization: Normalization via mini-batch statistics
Dropout: How to understand weights * (1 - dropout_rate) (see the first sketch after this list)
Key points and questions from Hung-yi Lee's deep learning notes on batch normalization
Batch_Normalization: Normalization via mini-batch statistics (full)
(Chinese) In-depth breakdown: initialization, covariate shift, batch normalization
Paper Decomposer: Generalization in deep learning (collection)
Why does the loss become NaN when the learning_rate grows larger, e.g. >= 0.01? (see the second sketch after this list)
ResNet: 3.4 implementation
ResNet: 3.1 residual learning
Inception: architecture details
AlignedReID: How to understand mutual learning
ResNet related works
AlignedReID: Experimental comparison of the effect of mutual learning
Batch Normalization at test time (see the third sketch after this list)
Dropout: motivation 2
Dropout: abstract
Inception: related works
ResNet: abstract and introduction
(EN) In-depth breakdown: initialization, batch normalization, covariate shift
Dropout: 4 model description
Dropout: 5.1 backpropagation
Dropout: Introduction
Dropout: Motivation 01
Dropout: related works
Inception: Abstract and Introduction
AlignedReID: triHard, triplet loss, and why only global features are used at prediction time
YOLO: abstract
Paper Decomposer: medium's capsule 2.1
YOLO: Unified Detection, part 2
Stories of infection
(Chinese) Paper Decomposer: Hinton's capsule sections 7-8
Paper Decomposer: medium's capsule 2.4
Paper Decomposer: Hinton's capsule 01 (Chinese)
(Chinese) Paper Decomposer: Hinton's capsule 2, paragraphs 1-4
Paper Decomposer: medium's capsule 1.1
Harvard's general-audience probability course (highly recommended): Fat Chance: Probability from the Ground Up, by Harvard and edX
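Three short sketches follow for the titles flagged above. First, the dropout weight-scaling question: a minimal NumPy illustration of my own (not code from any video above; dropout_rate and the array sizes are assumptions) of why multiplying the weights by (1 - dropout_rate) at test time is the right correction. During training each unit is kept with probability 1 - dropout_rate, so scaling the weights at test time reproduces the expected pre-activation of the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)
dropout_rate = 0.5
x = rng.normal(size=100)              # activations of a dropped-out layer
w = rng.normal(size=100)              # weights of the next layer

# Training: each unit is kept with probability (1 - dropout_rate), so the
# next layer's pre-activation varies with the sampled dropout mask.
masks = rng.random((10_000, 100)) > dropout_rate
train_outs = (masks * x) @ w          # one pre-activation per sampled mask

# Test: nothing is dropped; the weights are scaled by (1 - dropout_rate)
# instead, which matches E[mask] = 1 - dropout_rate exactly.
test_out = x @ (w * (1 - dropout_rate))

print(train_outs.mean(), test_out)    # approximately equal
```

Most modern libraries apply the equivalent rescaling during training instead (inverted dropout), so inference code needs no change.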
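Second, the NaN question: a toy example of my own, not taken from the video. On the one-dimensional loss w**2, a learning rate above 1 makes plain gradient descent overshoot the minimum by more than it approached it, so the iterate grows geometrically, overflows to inf, and the next update computes inf - inf, which is NaN.

```python
# Gradient descent on loss(w) = w**2, whose gradient is 2*w. With lr = 1.5
# the update is w <- w - 1.5 * (2*w) = -2*w: |w| doubles every step,
# eventually overflows to inf, and the following update gives inf - inf = nan.
w, lr = 1.0, 1.5
for step in range(2000):
    grad = 2 * w          # exact gradient of w**2
    w = w - lr * grad     # here equivalent to w <- -2*w
print(w * w)              # nan
```

This is the failure mode the main video's title refers to: the batch normalization paper argues that normalizing layer inputs makes training far more tolerant of large learning rates.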
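Third, batch normalization at test time: a minimal single-feature sketch (assuming running statistics updated with momentum 0.1, a common implementation convention rather than the paper's exact procedure; not code from the video). Training normalizes with the current mini-batch's mean and variance while accumulating running estimates; test time normalizes with those frozen estimates, so a single example gets a deterministic output.

```python
import numpy as np

eps, momentum = 1e-5, 0.1
gamma, beta = 1.0, 0.0                 # learned scale and shift
running_mean, running_var = 0.0, 1.0   # population estimates built up in training

def bn_train(batch):
    """Normalize with mini-batch statistics and update the running estimates."""
    global running_mean, running_var
    mu, var = batch.mean(), batch.var()
    running_mean = (1 - momentum) * running_mean + momentum * mu
    running_var = (1 - momentum) * running_var + momentum * var
    return gamma * (batch - mu) / np.sqrt(var + eps) + beta

def bn_test(batch):
    """No mini-batch statistics at test time: use the running estimates."""
    return gamma * (batch - running_mean) / np.sqrt(running_var + eps) + beta

rng = np.random.default_rng(0)
for _ in range(200):                   # running stats drift toward 3 and 4
    bn_train(rng.normal(3.0, 2.0, size=64))
print(running_mean, running_var)
```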