Meituan Open-Sources Its 560B-Parameter "All-Purpose" LongCat-Flash-Omni Model

【#Tech24H】On November 3, Meituan officially open-sourced its multimodal model LongCat-Flash-Omni, which is built on LongCat-Flash and integrates efficient multimodal perception and speech reconstruction modules. It supports a 128K-token context window and more than 8 minutes of audio-video interaction. The model has 560 billion total parameters, with 27 billion activated parameters. Meituan stated that LongCat-Flash-Omni is the industry's first open-source large model to combine full multimodal coverage, an end-to-end architecture, and efficient inference at large parameter scale.
Editor: Zhang Liyan
