[Paper Overview] LoRA: Low-Rank Adaptation of Large Language Models [2106.09685]
Paper title: LoRA: Low-Rank Adaptation of Large Language Models
Paper link: http://arxiv.org/abs/2106.09685
Code: https://github.com/microsoft/LoRA
* This video is meant to bring the paper to the attention of interested viewers, not to cover it in detail. Owing to the uploader's limitations, it often mixes Chinese and English; apologies for that. If the paper has been misrepresented, you are welcome to call it out.
** To recommend new papers or look up past ones, feel free to edit this document: https://docs.qq.com/sheet/DSUdOTG9xWUdydVB6
*** Slides are uploaded every 1-2 months to the link in the pinned post
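For context, the core idea of the paper: LoRA freezes the pretrained weight W and learns only a low-rank update ΔW = BA, so the adapted layer computes h = Wx + (α/r)BAx, with B zero-initialized so training starts exactly from the pretrained model. Below is a minimal PyTorch sketch of that idea (an illustration only, not the official microsoft/LoRA implementation; the layer sizes, rank r, and scaling α are arbitrary example values):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA-adapted linear layer (not the official
    microsoft/LoRA code): the frozen weight W is augmented with a
    trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        # Per the paper: A is Gaussian-initialized, B is zero-initialized,
        # so the update BA starts at zero.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: only lora_A / lora_B receive gradients.
layer = LoRALinear(768, 768, r=4)
out = layer(torch.randn(2, 768))
```

Since only lora_A and lora_B are trained, the number of optimized parameters per adapted matrix drops from d_out × d_in to r × (d_out + d_in), which is the source of LoRA's memory savings.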
Other paper videos in this series:
[Paper Overview] LLaVA: Visual Instruction Tuning [2304.08485]
[Paper Analysis] VAE: Auto-encoding Variational Bayes [1312.6114]
[Paper Overview] OpenVLA: An Open-Source Vision-Language-Action Model [2406.09246]
[Paper Overview] Bootstrapping Language-Image Pre-training... [2201.12086]
[Paper Overview] Structured Denoising Diffusion Models in Discrete State-Spaces [2107.03006]
[Paper Overview] Open-vocabulary Object Segmentation with Diffusion Models [2301.05221]
[Paper Overview] BLIP-2 ...with Frozen Image Encoders and Large Language Models [2301.12597]
[Paper Overview] Masked-attention Mask Tr. for Universal Image Segmentation [2112.01527]
[Paper Overview] Deformable Convolutional Networks; DCN [1703.06211]
[Paper Overview] VLMs are Zero-Shot Reward Models for RL [2310.12921]
[Paper Overview] Rethinking the Truly Unsupervised Image-to-Image Translation [2006.06500]
[Paper Overview] LLaMA-Adapter: Efficient Fine-tuning..Zero-init Attention [2303.16199]
[Paper Analysis] Patching Open-Vocabulary Models by Interpolating Weights [2208.05592]
[Paper Analysis] Toolformer: Language Models Can Teach Themselves to Use Tools [2302.04761]
[Paper Overview] Aggregating Nested Transformers [2105.12723]
[Paper Analysis] Regularized Vector Quantization for Tokenized Image Synthesis [2303.06424]
[Paper Overview] iFormer: Inception Transformer [2205.12956]
[NeurIPS24] DiTFastAttn: Attention Compression for Diffusion Transformer Models
[Paper Overview] Align before Fuse / ALBEF: ... [2107.07651]
[Paper Overview] iBOT: Image BERT Pre-Training with Online Tokenizer [2111.07832]
[Paper Analysis] SAC: Soft Actor-Critic Part 1 [1801.01290]
[Paper Analysis] XCiT: Cross-Covariance Image Transformers [2106.09681]
[Paper Overview] EViT: Expediting Vision Transformers via Token Reorganizations [2202.07800]
[Paper Overview] Efficient Visual Pretraining with Contrastive Detection [2103.10957]
[Paper Analysis] SlowFast Networks for Video Recognition [1812.03982]
[Paper Analysis] MLP-Mixer: An all-MLP Architecture for Vision [2105.01601]
[Paper Analysis] GroupViT: Semantic Segmentation Emerges from Text Supervision [2202.11094]
[Paper Overview] RegMixup: Mixup as a Regularizer Can Surprisingly Improve... [2206.14502]
[Paper Overview] OWL-ViT: Simple Open-Vocabulary Object Detection with ViT [2205.06230]
[Paper Overview] Autoregressive Image Generation using Residual Quantization [2203.01941]
[Paper Overview] Taming Transformers for High-Resolution Image Synthesis [2012.09841]
[Paper Analysis] How Do Vision Transformers Work? [2202.06709]
[Paper Overview] CRG: Improving Grounding in VLM w/o training [2403.02325]
[Paper Overview] Learning to Learn with Generative Models of NN Checkpoints [2209.12892]
[Paper Analysis] EfficientNet V1/V2 [1905.11946/2104.00298]
[Paper Analysis] Improving fine-grained understanding in image-text pre-training [2401.0986]
[Paper Analysis] PolyFormer: Referring Image Seg. as Sequential Polygon Gen [2302.07387]
[Paper Analysis] Dynamic Vision Transformers with Adaptive Sequence Length [2105.15075]