Learning linear models in-context with transformers
DPO Explained: Direct Preference Optimization (an alternative algorithm to RLHF)
Flash Attention 2.0 with Tri Dao (author)! | Discord server talks
AI Safety, RLHF, and Self-Supervision
Policy Gradient Theorem Explained - Reinforcement Learning
Turing Award winner Yoshua Bengio on Deep Learning I
AlphaGeometry solves IMO geometry problems! Google DeepMind beats ChatGPT with an unbelievable level of intelligence, a major mathematics breakthrough for large models
A Theory for Emergence of Complex Skills in Language Models
Mistral AI's Open Source Initiative, with the CEO of Mistral AI
Towards Reliable Use of Large Language Models
Interview with Dr. Ilya Sutskever, co-founder of OpenAI - at the Open University
Google's Gemini all-round large model unveiled: demos of all its multimodal capabilities, Gemini beats GPT-4, full official demo videos included, Google executives on camera, Google's all-or-nothing bid to beat GPT
LLaMA 2 explained: KV-Cache, Rotary Positional Embedding, RMS Norm
What, if anything, do AIs understand? Talk with ChatGPT Co-Founder Ilya Sutskever
Neural Scaling Laws and GPT-3
OpenAI's Ilya Sutskever: The man who made AI work
[All 748 episodes] Nanjing University finally explains large AI models (LLMs) clearly! Easy to follow, latest 2024 internal edition! Take it for free; if you can't learn from this, I'll quit the IT industry!
How To Run Mistral 8x7B LLM AI RIGHT NOW! (NVIDIA and Apple M1)
Turing Award winner Yoshua Bengio on Deep Learning II
Let's build the GPT Tokenizer
CMU Multimodal Machine Learning, Fall 2023 (with Chinese and English subtitles)
Modular and Composable Transfer Learning with Jonas Pfeiffer
GitHub Universe 2023 opening keynote
GPT-5: Exclusive First Look at Capability Predictions II
Exclusive: Decoding OpenAI's Q* and GPT's path to artificial general intelligence (AGI). Is Q* the algorithm that destroys humanity? OpenAI's secret Q* exposed; has OpenAI achieved AGI with ChatGPT?
A.I. Could Solve Some of Humanity’s Hardest Problems. It Already Has.
EMNLP 2022 Tutorial: Modular and Parameter-Efficient Fine-Tuning for NLP Models
Reality is a Paradox - Mathematics, Physics, Truth & Love
Exclusive! The GPT Store is about to launch: ChatGPT's App Store moment! GPT Store goes live next week; details, launch policies, and new GPT rules explained
[1hr Talk] Intro to Large Language Models
An Observation on Generalization: Compression Is Intelligence II
Exclusive: GPT-4.5! GPT-4.5 really has been released, GPT-4.5 Turbo is already live in ChatGPT; astonishingly, GPT-4.5 is truly here
Sam Altman's latest interview on GPT-5: ChatGPT's capabilities revealed, Altman's secret projects, and OpenAI's GPT-5 uncovered
Programming the OpenAI Q Star Algorithm
Greg & Sam are BACK! (+ Q-Star is AGI) (Also Memes)
OpenAI's Superalignment strategy revealed: the Superalignment team's latest result, Weak-to-Strong Generalization
Beginning Llamafile for Local Large Language Models (LLMs)
Introducing the Knowledge Graph: things, not strings
Reinforcement Learning with Fast and Forgetful Memory NeurIPS 2023
ICML 2023 Data-Efficient Contrastive Self-Supervised Learning
Turing Award winner Yann LeCun on the future of strong AI