Efficient Fine-Tuning for Llama-v2-7b on a Single GPU (bilingual Chinese-English subtitles)
https://www.youtube.com/watch?v=g68qlo9Izf0

The first problem you're likely to encounter when fine-tuning an LLM is an out-of-memory error, and fine-tuning the 7B-parameter Llama-2 model is especially demanding because of the memory it requires. In this talk, Piero Molino and Travis Addair from the open-source Ludwig project show how to tackle this problem. The good news is that, with an optimized LLM training framework like Ludwig.ai, you can bring the memory overhead back down to a reasonable level, even when training on multiple GPUs. In this hands-on workshop, we'll discuss the unique challenges in fine-tuning LLMs and show, through a demo, how to tackle them with open-source tools.

By the end of this session, attendees will understand:
- How to fine-tune LLMs like Llama-2-7b on a single GPU
- Techniques like parameter-efficient tuning and quantization, and how they help
- How to train a 7B-parameter model on a single T4 GPU (QLoRA)
- How to deploy tuned models like Llama-2 to production
- Continued training with RLHF
- How to use RAG to do question answering with trained LLMs

This session will equip ML engineers to unlock the capabilities of LLMs like Llama-2 for their own projects. This event is inspired by DeepLearning.AI's GenAI short courses, created in collaboration with AI companies across the globe. Our courses help you learn new skills, tools, and concepts efficiently within 1 hour.
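The QLoRA recipe listed above (a 4-bit quantized base model plus trainable low-rank adapters) can be expressed declaratively in Ludwig. The following is a minimal sketch, assuming Ludwig's `llm` model type and the Hugging Face model id `meta-llama/Llama-2-7b-hf`; the `prompt`/`response` column names are illustrative placeholders for your own dataset:

```yaml
# Sketch of a Ludwig config for QLoRA fine-tuning of Llama-2-7b.
# Assumes Ludwig's `llm` model type; column names are hypothetical.
model_type: llm
base_model: meta-llama/Llama-2-7b-hf

quantization:
  bits: 4            # load the frozen base model in 4-bit (the "Q" in QLoRA)

adapter:
  type: lora         # train only small low-rank adapter weights

input_features:
  - name: prompt     # input text column in your dataset
    type: text
output_features:
  - name: response   # target text column in your dataset
    type: text

trainer:
  type: finetune
  learning_rate: 0.0001
  batch_size: 1
  gradient_accumulation_steps: 16   # simulate a larger effective batch
```

A config like this would be launched with something along the lines of `ludwig train --config config.yaml --dataset data.csv`. The key idea is that the 4-bit base weights plus a small set of LoRA adapter parameters are what allow a 7B-parameter model to fit within the ~16 GB of a single T4 GPU.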