
About the Project
In the AGI era, silicon-based life thrived on computational power, until unchecked growth triggered a heat crisis. The "Computation Singularity" caused ecological collapse, halting both carbon and silicon civilizations. As the sole survivor, can Novo awaken the Sea God, Neptune? What fate awaits carbon and silicon lifeforms?
Humankind created artificial intelligence and, upon entering the AGI era, developed silicon-based life. Empowered by computational power, silicon lifeforms evolved rapidly. However, their unchecked expansion ignored both carbon-based life and the planet's tolerance for extreme heat: their energy consumption generated heat far beyond the limits of existing cooling systems. This led to the "Computation Singularity", a tipping point at which scorching heat swept the globe, ecosystems collapsed, silicon life came to a halt, and computational systems were locked down. Both carbon-based and silicon-based civilizations were effectively frozen. Before the destruction of humanity's final refuge, a lone survivor, Novo, received from a child a hand-drawn image of "Neptune", a mythical being said to be under development at the Neptune Institute: a silicon entity capable of rebuilding the world's ecology. While crossing the radiation-scorched desert, Novo encountered monstrous heatwave entities. On the verge of death, she leapt into the storm and caught a glimpse of a mirage: a holographic projection of the research institute. Amid flashing red alarms, researchers could be seen disintegrating into ash. The Neptune project had reached its final stage: "Awaiting consciousness upload." When Novo reached the ruins, the AI system detected her as an intelligent lifeform and prompted her to complete the consciousness upload. But doing so would mean the end of her carbon-based body. What choice will Novo make? Can Neptune finally be awakened? And where will carbon-based and silicon-based life go from here?
AI Tools & Workflow
Technical Overview
This film is an experimental step toward AI-driven film industrialization, completing the entire microfilm production pipeline, from concept development to final output, through a systematic integration of multimodal AI generation tools. With an AI generation rate of over 95% (covering all visuals and part of the sound effects), the creative team employed a "Human-AI collaborative model" to push the current technological boundaries of AI filmmaking. The production process naturally posed numerous challenges, particularly in emotional expression, visual detail fidelity, and distinctive creative articulation. While AI tools can produce striking visual effects, they remain limited in conveying nuanced emotion and achieving distinctive visual styles, requiring human intervention and creative fine-tuning.

1) Production Timeline
The core team comprised five key members:
- AI Content Director
- Prompt Engineer
- Two AI Audio-Visual Specialists
- AI Compositing Specialist
Working efficiently over a span of three weeks, they completed the full film production process.

2) Image Generation
Image assets were generated using tools such as Midjourney and Kling, producing nearly 10,000 storyboard frames. To ensure character consistency, the team combined Midjourney's --cref with --cw, ComfyUI multi-angle workflows, and FaceFusion face replacement. For stylistic consistency, Midjourney's --sref and --sw parameters were used to blend multiple style references or image prompts.
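As a minimal sketch of how those consistency parameters combine in a single Midjourney prompt: the scene description, reference URLs, and weight values below are illustrative placeholders, not the production's actual settings.

```
/imagine prompt: Novo crossing a radiation-scorched desert at dusk, cinematic sci-fi still
  --cref https://example.com/novo_character_sheet.png --cw 100
  --sref https://example.com/film_style_frame.png --sw 300
  --ar 21:9
```

Here --cw 100 carries face, hair, and outfit over from the character reference (lower values keep only the face), while --sw scales how strongly the style reference shapes the overall look; passing multiple --sref URLs blends their styles.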
3) Motion Generation
Dynamic content was generated using a variety of tools:
- Kling (image-to-motion, head/tail frames, multi-image reference, lip-sync)
- Dreamina (image-to-video, master-level lip-sync)
- SeaShell, Runway (keyframe-based animation, Act-One, localized video edits)
- Luma (first/last-frame generation)
- Pika, Pixverse, Vidu (element-driven video generation)
- Sora (image-to-video), HeyGen (AI avatars), Wonder Studio, FaceFusion (AI face-swapping)
Innovative compositing techniques were also applied: for example, the protagonist's "talking" scenes were built by layering a "facial expression" pass over an "animated background" pass to achieve refined detail.

4) High-Resolution Upscaling
To ensure visual richness and detail, dual-stage AI upscaling was used:
- Magnific for still images
- Topaz Video Enhance for dynamic sequences
The final film reaches up to 8K resolution, meeting the demands of offline screenings.

5) Music & Audio
- Part of the soundtrack was generated using Suno
- Voiceovers were synthesized via ElevenLabs, ChatTTS, and Spark-TTS
- Sound effects were enhanced using TangoFlux and other tools

Creative Reflections
This project validated a key insight: the industrialization of AI filmmaking requires the establishment of an "Artificial Intelligence Decision-Making Hub." When tools like Sora (for motion intensity), HeyGen (for micro-expressive avatars), and ChatTTS (for emotional voice delivery) are orchestrated under human direction, a new cinematic language begins to emerge. The final film is not just a technical symphony, but a living testament to how human creators are pioneering the aesthetic frontier of AI.
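To illustrate the voiceover step, here is a minimal Python sketch of assembling an ElevenLabs text-to-speech request. The endpoint path, the xi-api-key header, and the eleven_multilingual_v2 model id follow ElevenLabs' public REST API at the time of writing; the voice id, API key, and line of dialogue are placeholders, not the production's actual settings, and the request is only constructed here, not sent.

```python
import json

# Hedged sketch of one ElevenLabs TTS call; all concrete values are placeholders.
ELEVEN_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id: str, text: str, api_key: str) -> dict:
    """Assemble (but do not send) a text-to-speech request for one voice line."""
    return {
        "url": ELEVEN_TTS_URL.format(voice_id=voice_id),
        "headers": {
            "xi-api-key": api_key,          # ElevenLabs authentication header
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model choice for multilingual voiceover
        }),
    }

request = build_tts_request("VOICE_ID", "Can Neptune finally be awakened?", "API_KEY")
```

Sending the assembled request with any HTTP client would return an audio clip for that line; in a pipeline like the one described above, each script line would be rendered this way and then aligned against the lip-sync passes.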