Alibaba releases Wan 2.7 video model, now available in ComfyUI
Alibaba’s Wan 2.7 video generation model is now live on Model Studio and available as a partner node in ComfyUI.
The model, listed as wan2.7-i2v in Model Studio, accepts text, image, audio, and video inputs and covers five task types.
The full list of supported tasks:
- Image-to-video: First-frame, first and last frame, and audio-driven generation
- Text-to-video: Text prompts with optional audio and multi-shot narration
- Video continuation: Extend an existing clip guided by a text prompt
- Reference-to-video: Lock in a subject’s appearance and vocal timbre across up to five real-person inputs
- Video editing: Edit or replicate existing videos via text prompts, reference images, or style transfer
Output resolution tops out at 1080p, with clip durations between 2 and 15 seconds per generation.
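If you want to try it outside ComfyUI, here’s a minimal sketch of what a first-frame image-to-video call against Model Studio might look like, assuming the API keeps the async REST pattern of earlier Wan releases. Only the wan2.7-i2v model ID comes from the announcement; the endpoint paths, field names, and response shape below are assumptions.

```python
import os
import time
import requests

# Assumed Model Studio (DashScope) endpoints; verify against current docs.
SUBMIT_URL = "https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis"
TASK_URL = "https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"

headers = {
    "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
    "X-DashScope-Async": "enable",  # video jobs run asynchronously
    "Content-Type": "application/json",
}

# Submit a first-frame image-to-video job. Only the model ID comes from
# the announcement; the input/parameter field names are assumptions
# modeled on earlier Wan releases.
payload = {
    "model": "wan2.7-i2v",
    "input": {
        "prompt": "The camera slowly pushes in as the subject turns toward the light.",
        "img_url": "https://example.com/first-frame.png",
    },
    "parameters": {"resolution": "1080P", "duration": 5},
}

task_id = requests.post(SUBMIT_URL, headers=headers, json=payload).json()["output"]["task_id"]

# Async jobs return a task ID immediately; poll until the job resolves.
while True:
    result = requests.get(TASK_URL.format(task_id=task_id), headers=headers).json()
    if result["output"]["task_status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(result["output"].get("video_url", result["output"]))
```

The async header is the key design point: video jobs don’t return inline, so you get a task ID back right away and poll for the finished clip.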
ComfyUI added Wan 2.7 support today in version 0.18.5, with workflow templates available for all five task types. Desktop and Comfy Cloud support is listed as coming soon.
Alibaba also released Wan 2.7 Image and Wan 2.7 Image-Pro on April 1. Here’s what each covers:
- Wan 2.7 Image: Text-to-image generation, instruction-based editing, batch output of up to 12 images at once, and text rendering support for up to 3,000 tokens across 12 languages
- Wan 2.7 Image-Pro: Adds 4K output and a built-in chain-of-thought reasoning mode
Both image models are available through Model Studio and wan.video.
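If the image models share that API shape, a batch text-to-image request might look like the sketch below. The wan2.7-image model ID is inferred from the product name, and the parameter names are assumptions; only the 12-image batch limit comes from the announcement.

```python
import os
import requests

# Assumed Model Studio text-to-image endpoint; verify against current docs.
SUBMIT_URL = "https://dashscope.aliyuncs.com/api/v1/services/aigc/text2image/image-synthesis"

headers = {
    "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
    "X-DashScope-Async": "enable",
    "Content-Type": "application/json",
}

# "wan2.7-image" as a model ID and the "n"/"size" parameter names are
# assumptions; the batch limit of 12 comes from the announcement.
payload = {
    "model": "wan2.7-image",
    "input": {"prompt": "A watercolor city skyline at dusk with the caption 'evening run'"},
    "parameters": {"n": 12, "size": "1024*1024"},  # up to 12 images per request
}

resp = requests.post(SUBMIT_URL, headers=headers, json=payload).json()
print(resp["output"]["task_id"])  # poll this task as in the video sketch above
```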
Bottom line: Wan models have been community favorites, especially given Seedance 2.0’s shaky start. We’ll see whether 2.7 builds on the foundation its predecessors laid.
Sources: Alibaba Cloud, ComfyUI Blog
