Rednote Open-Sources Its First Multimodal Large Language Model
Rednote's HI Lab (Humanistic Intelligence Laboratory) has open-sourced its inaugural multimodal large language model, dots.vlm1. Built on DeepSeek V3, the model incorporates NaViT, Rednote's proprietary 1.2-billion-parameter visual encoder, and demonstrates advanced multimodal comprehension and reasoning capabilities.
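For readers who want to experiment with an open-sourced vision-language model of this kind, the snippet below is a minimal sketch of how such a checkpoint is typically loaded with Hugging Face transformers. The repository id, image path, prompt, and processor interface are assumptions for illustration and may differ from the published dots.vlm1 release.

```python
# Minimal sketch: loading a vision-language checkpoint with Hugging Face transformers.
# The repo id, prompt format, and processor call below are illustrative assumptions,
# not the documented dots.vlm1 interface.
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

repo_id = "rednote-hilab/dots.vlm1"  # hypothetical repository id

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, trust_remote_code=True, device_map="auto"
)

# Prepare a single image plus a text prompt for multimodal generation.
image = Image.open("example.png")
inputs = processor(
    images=image, text="Describe this image.", return_tensors="pt"
).to(model.device)

# Generate an answer conditioned on both the image and the prompt.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```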
Editor: Hou Qianqian