[论文简析]Dreamer V2[2010.02193]
This video was made to keep the uploader thinking clearly and speaking plainly during quarantine; due to limited ability it often mixes Chinese and English, with bits of broken English, so please bear with it. Where my reading of the paper goes astray, feel free to call it out. Paper title: Mastering Atari with Discrete World Models. Project page and code: https://danijar.com/project/dreamerv2/ A write-up that does not go astray: https://ai.googleblog.com/2021/02/mastering-atari-with-discrete-world.html
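The paper named above replaces Dreamer's Gaussian latents with vectors of categorical latents trained with straight-through gradients. Below is a minimal NumPy sketch of that sampling step, my own illustration rather than code from the paper's repository; NumPy has no autodiff, so the gradient trick is described only in a comment.

```python
import numpy as np

def straight_through_sample(logits, rng):
    # Softmax over the last axis gives categorical probabilities per latent.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # Draw one class index per latent variable, then one-hot encode it.
    idx = np.array([rng.choice(p.size, p=p) for p in probs])
    one_hot = np.eye(logits.shape[-1])[idx]
    # Straight-through trick (conceptual): the forward pass uses the hard
    # one-hot sample; in an autodiff framework the backward pass would see
    # probs instead, i.e. z = probs + stop_gradient(one_hot - probs).
    return one_hot, probs

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))   # 4 categorical latents, 8 classes each
z, probs = straight_through_sample(logits, rng)
```

Each row of `z` is an exact one-hot vector, while `probs` carries the differentiable distribution the backward pass would use.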
[论文简析]Point Transformer V2[2210.05666]
[论文速览]LLaVA: Visual Instruction Tuning[2304.08485]
MPC + Reinforcement Learning! Actor-Critic Model Predictive Control: a talk by a leading Zurich professor on human-level autonomous vision-based drone flight
[论文夕拾]Diffusion Models for Robotics
[论文简析]VQ-VAE:Neural discrete representation learning[1711.00937]
Stanford open course! Fei-Fei Li teaches hands-on computer vision in person, made surprisingly accessible! Worth bookmarking! (AI, deep learning, machine learning, neural networks)
[论文简析]Large Language Models as General Pattern Machines[2307.04721]
【Paper Code Reproduction 122】Path planning with reinforcement learning || How does reinforcement learning differ from swarm-intelligence optimization algorithms?
[论文速览]DDPG&TD3[1509.02971][1802.09477]
[论文简析]VAE: Auto-encoding Variational Bayes[1312.6114]
[论文简析]DETR: End-to-End Object Detection with Transformers[2005.12872]
[论文简析]NeRF in the Wild: NeRF for Unconstrained Photo Collections[2008.02268]
Using AI to automate gameplay! How the underlying YOLO technique actually works: a hands-on walkthrough of YOLOv5 fundamentals and code reproduction
【Full 200-episode set】A must-watch deep learning bible! Mu Li's latest step-by-step tutorial for "Dive into Deep Learning", more efficient than grinding through the book! If no one watches this, I'll stop updating!!
[论文速览]Open-vocabulary Object Segmentation with Diffusion Models[2301.05221]
[论文简析]NeRF: Representing Scenes as Neural Radiance Fields...[2003.08934]
Transformer + Reinforcement Learning: a publication direction combining two hot topics!
[论文速览]NeRF-RL: Reinforcement Learning with Neural Radiance Fields[2206.01634]
[论文简析]MobileNet V2: Inverted Residuals and Linear Bottlenecks[1801.04381]
[论文速览]Theia: Distilling Diverse Vision Foundation Models for Robot..[2407.20179]
[论文简析]C-Learning: Learning to .. via Recursive Classification[2011.08909]
[论文简析]Location-Aware Self-Supervised Transformers for Semantic Seg.[2212.02400]
[论文简析]SlowFast Networks for Video Recognition[1812.03982]
[论文简析]SAC: Soft Actor-Critic Part 2[1812.05905]
Possibly the best tutorial on reinforcement learning and model predictive control I've seen! Professors from four top universities cover dynamical systems and simulation, optimal control, policy-gradient methods, and MPC
[论文简析]DeiT: Data-efficient Image Transformers[2012.12877]
[论文简析]DeepLab V3/V3+[1706.05587/1802.02611]
[论文简析]Transformers are Sample Efficient World Models[2209.00588]
From Model Predictive Control to Reinforcement Learning 12: DDPG for dynamic control (graduate orientation Q&A)
[论文简析]World Models[1803.10122]
[论文简析]DAT: Vision Transformer with Deformable Attention[2201.00520]
[论文速览]VLMs are Zero-Shot Reward Models for RL[2310.12921]
[论文简析]Red Circle: Visual Prompt Engineering for VLMs[2304.06712]
[论文简析]DropPos: Pre-Training ViTs by Reconstructing Dropped Positions[2309.03576]
[论文简析]VoxPoser: Composable 3D Value Maps for Robotic...[2307.05973]
[论文简析]Mobile-Former: Bridging MobileNet and Transformer[2108.05895]
[论文简析]PolyFormer: Referring Image Seg. as Sequential Polygon Gen [2302.07387]
[论文简析]DiffSeg: Unsupervised Zero-Shot Seg. using Stable Diffusion[2308.12469]
[论文简析]TNT: Transformer in Transformer[2103.00112]
[论文简析]MViT: Multiscale Vision Transformers[2104.11227]