【Proof-Trivial】Statistical Learning Theory in Modern Machine Learning
https://www.youtube.com/watch?v=W6wpAiOSF2c

Speaker: John Shawe-Taylor is a professor at University College London (UK). His main research area is Statistical Learning Theory, with contributions ranging from neural networks to machine learning and graph theory. He obtained a PhD in Mathematics at Royal Holloway, University of London in 1986, and subsequently completed an MSc in the Foundations of Advanced Information Technology at Imperial College. He was promoted to Professor of Computing Science in 1996 and has published over 150 research papers. In 2003 he moved to the University of Southampton to lead the ISIS research group, and in July 2006 he was appointed Director of the Centre for Computational Statistics and Machine Learning at University College London. He has coordinated a number of Europe-wide projects investigating the theory and practice of machine learning, including the NeuroCOLT projects, and is currently the scientific coordinator of a Framework VI Network of Excellence in Pattern Analysis, Statistical Modelling and Computational Learning (PASCAL) involving 57 partners.

Abstract: Machine learning aims to identify properties of a distribution from a set of examples. The presentation introduces the statistical learning framework for analysing this task, focusing on the PAC-Bayes approach, which draws on Bayesian inference and Probably Approximately Correct (PAC) learning. The application of this analysis to modern machine learning approaches, including kernel methods and deep learning, will be presented, highlighting both the successes and the ongoing shortcomings of these results.
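The abstract names the PAC-Bayes approach without stating it; for context, one standard form of the bound (McAllester's theorem, quoted from memory rather than from the talk) reads as follows. Fix a prior P over hypotheses before seeing the data. Then for any δ in (0, 1), with probability at least 1 − δ over an i.i.d. sample of size m, simultaneously for every posterior Q:

```latex
\mathbb{E}_{h \sim Q}\!\left[ L(h) \right]
\;\le\;
\mathbb{E}_{h \sim Q}\!\left[ \hat{L}(h) \right]
+ \sqrt{\frac{\operatorname{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

Here L(h) is the true risk of hypothesis h, \hat{L}(h) its empirical risk on the sample, and KL the Kullback–Leibler divergence. The bound holds uniformly over all posteriors Q, which is what allows the posterior to be chosen after seeing the data, in the Bayesian spirit the abstract alludes to.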
Hang Li, Statistical Learning Methods, 2nd edition
【Proof-Trivial】Transformers from a Mathematical Perspective【MIT Mathematics, Philippe Rigollet】
【Proof-Trivial】Generalization Theory and Inductive Bias in Deep Neural Networks
【Proof-Trivial】Statistical Thermodynamics, Peking University
【Proof-Trivial】Theory and Algorithms in Machine Learning and Data Science【Peking University, Zaiwen Wen】
【Proof-Trivial】Riemannian Geometry【Peking University / Beijing Normal University, Jian Ge】
【Proof-Trivial】Lie Theory: Lie Groups, Lie Algebras, and Lie Brackets
【Proof-Trivial】Communication and Sensing: from Compressed Sampling to Model-Based Deep Learning
【Proof-Trivial】Frontier Lectures on Optimal Transport (updating)
【Proof-Trivial】Advanced Mathematical Statistics (Measure-Theoretic), Tsinghua University
The two hottest models of 2024: Informer + LSTM, two major time-series forecasting models, with paper walkthrough and code reproduction, easy to follow! AI | Machine Learning | Deep Learning
【Proof-Trivial】【2023】Optimization: Modeling, Algorithms, and Theory【Peking University, Zaiwen Wen】
【Proof-Trivial】【Simons Institute】Geometric Methods in Optimization and Sampling: a must-watch for machine learning theory researchers
【Proof-Trivial】Introduction to Manifolds
【Proof-Trivial】Introduction to Category Theory
【Proof-Trivial】K-Theory
Learn twelve major machine learning algorithms in one go: regression, clustering, decision trees, random forests, neural networks, Bayesian methods, support vector machines, and more! Easy to follow
【Proof-Trivial】Hamiltonian Systems and Symplectic Geometry
【Proof-Trivial】Statistical Inference (George Casella)【Taipei University, Mengfeng Li】【in Chinese】
【Proof-Trivial】A Beginner-Friendly Introduction to Functional Analysis
【Proof-Trivial】Linear Transformers from Theoretical and Practical Perspectives
【Proof-Trivial】High-Dimensional Probability and Its Applications in Data Science (HDP), Roman Vershynin
【Proof-Trivial】Book Recommendations for Geometric Measure Theory
【Proof-Trivial】【IROS'22】Geometric Methods in Robot Learning, Optimization, and Control
【Proof-Trivial】Calculus of Variations【Peking University, Academician Gongqing Zhang】【Hilbert's 23rd Problem】
【A full 200 episodes】Andrew Ng covers six major deep learning neural network algorithms in one go: CNN, RNN, GAN, LSTM, YOLO, and Transformer. Beginners pick it up instantly! AI
【All 874 episodes】The most complete and detailed zero-to-basics ChatGPT tutorial on Bilibili, 2024 edition, with all the essentials! From beginner to expert in one day! Skip 99% of the detours! Save it; hard to find this complete!
【Proof-Trivial】Bayesian Optimization and Bayesian Learning (continuously updated)
【Proof-Trivial】The Heavy-Tail Phenomenon in Stochastic Gradient Descent (SGD)
【Proof-Trivial】Overparameterization and Global Optimality in Nonconvex Low-Rank Matrix Estimation
【PyTorch Deep Learning Hands-On Cases】A collection of 70 practice projects, the easiest-to-follow PyTorch deep learning on Bilibili! PyTorch Introduction | PyTorch Practice
【Proof-Trivial】Information Geometry【Introductory Course】(Melvin Leok from UCSD)
【Proof-Trivial】Frontiers and Selected Papers in Natural Language Processing, Tsinghua University
【Proof-Trivial】What Is Geometric Algebra (Clifford Algebra)?
【Proof-Trivial】Optimization on Manifolds, Nicolas Boumal (ENS → Princeton → EPFL)
【Proof-Trivial】String Theory
【Proof-Trivial】Convex Optimization, Stephen Boyd, Stanford (latest 2023–2024 course)
【Proof-Trivial】A Self-Study Roadmap for Mathematical Foundations
【Proof-Trivial】Gradient-Based Optimization Methods: Stochastic, Nonconvex, and Accelerated Optimization (Michael I. Jordan)
The most complete on Bilibili! Ten mathematical foundations for AI in one go: probability theory, linear algebra, advanced mathematics, calculus, Taylor series, Bayesian methods, regression analysis, and more!