
About the Project
For the information one receives defines the person one is allowed to become.
In a tranquil town, a vast inverted Echo Tower hangs suspended in the sky. Every resident harbors a humble yet profound aspiration: to ascend to the tower's summit and listen to the most advanced, most exalted voices in the world. Each year, the town's most exceptional child is granted the rare chance to make this ascent. This time, the honor falls upon Xiao Yin, a radio-headed student, who now stands on the threshold of a journey that will unravel all he once believed. In the end, the promise of equality in the information age reveals itself as an illusion, for the information one receives defines the person one is allowed to become.
AI Tools & Workflow
1. Overall Visual Tone and Style
The film's visual tone is defined by a retro yet poetic aesthetic. The primary style references draw on retro-millennialism, retrofuturism, and dreamcore, while the architectural settings borrow heavily from Soviet-era Brutalist architecture to evoke a grand, poetic visual experience. The overall color palette is inspired by the work of Theo Angelopoulos and Andrei Tarkovsky, whose cinematic languages emphasize temporal stillness and spiritual depth. To keep the style consistent, we selected LoRA models on the Xingliu and Liblib platforms to guide the visual rendering process (a minimal open-source sketch of LoRA-guided rendering appears after this section).

2. Character Design
For characters whose identity must remain consistent throughout the narrative, we used platforms such as MidJourney (MJ) and Xingliu to generate consistent character visuals, then processed these through AI-powered 3D generation tools such as Tripo and Meshy to obtain their three-dimensional forms. Secondary or background characters that do not require strict identity consistency were generated in a single step with Meshy and placed directly into the scenes.

3. Storyboard (Animatic) Generation
Storyboard generation followed a hybrid workflow combining direct image generation with rough 3D blockouts, using depth-of-field manipulation to define the core compositional elements. Once the spatial structure was established, characters were inserted and redrawn within the generated frames. For scenes that did not require high-fidelity consistency, we primarily used Xingliu and MidJourney to generate base images, followed by multiple rounds of local refinement and repainting to achieve visual coherence and stylistic unity between characters and environments. In parallel, building rough scene models and placing characters within them let us establish a clear depth of field around the main subject; this spatial foundation supported subsequent style transfer and repeated rounds of detail refinement with image-generation tools, ultimately producing more refined and compositionally accurate storyboard frames (see the blockout sketch after this section).

4. Video Generation
To keep characters and environments strictly consistent and precise from shot to shot, all shots were generated with a continuous frame-anchored approach: initial-plus-final keyframe pairs, single initial frames, or single terminal frames served as reference anchors. By meticulously scripting both camera movement and character motion, we achieved precise control over the visual dynamics of each shot, as well as accurate character blocking and spatial choreography within the frame (an image-to-video sketch also follows this section).
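
The LoRA-guided rendering described in section 1 can be illustrated with open-source tooling. The sketch below uses Hugging Face diffusers to load a style LoRA onto a Stable Diffusion XL pipeline; the LoRA file name, its scale, and the prompt are hypothetical stand-ins, since the LoRAs actually used in production are hosted on the Xingliu and Liblib platforms.

```python
# Minimal sketch: applying a style LoRA to keep renders visually consistent.
# The base model is real; "retro-dreamcore-lora" is a hypothetical stand-in
# for the platform-hosted LoRAs used in production.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the style LoRA and set how strongly it biases the output.
pipe.load_lora_weights("./loras/retro-dreamcore-lora.safetensors")
pipe.fuse_lora(lora_scale=0.8)

image = pipe(
    prompt="brutalist tower suspended over a quiet town, dreamcore, "
           "muted Tarkovsky palette, film grain",
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("style_test.png")
```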
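The rough-blockout step in section 3 amounts to placing proxy geometry and a camera, then letting a shallow depth of field separate the subject from the set. As one concrete way to script this (the production blockouts may equally have been set up by hand), the bpy snippet below assumes a Blender scene containing objects named "Camera" and "Hero"; both names are illustrative.

```python
# Sketch of the storyboard blockout step in Blender's Python API (bpy):
# focus the camera on the main subject and render a depth-of-field frame
# that image-generation tools can then restyle and refine.
import bpy

cam = bpy.data.objects["Camera"]   # scene camera (name assumed)
hero = bpy.data.objects["Hero"]    # proxy mesh for the main character (assumed)

# Enable depth of field and lock focus to the subject.
cam.data.dof.use_dof = True
cam.data.dof.focus_object = hero
cam.data.dof.aperture_fstop = 1.4  # wide aperture -> shallow focus

# Render the blockout frame used as the compositional base.
scene = bpy.context.scene
scene.camera = cam
scene.render.filepath = "//blockout_frame.png"
bpy.ops.render.render(write_still=True)
```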
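The frame-anchored video generation in section 4 relied on proprietary tools, so as an open-source analogue the sketch below conditions Stable Video Diffusion on a single initial keyframe via diffusers. First-plus-last-frame anchoring works the same way conceptually but requires a model that accepts both anchors; the keyframe filename and parameter values here are illustrative.

```python
# Open-source analogue of single-initial-frame video generation:
# the keyframe pins character and environment identity, and the model
# animates forward from it.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# The approved storyboard frame acts as the reference anchor.
keyframe = load_image("shot_12_keyframe.png").resize((1024, 576))

frames = pipe(
    keyframe,
    decode_chunk_size=8,       # trade VRAM for decoding speed
    motion_bucket_id=127,      # higher -> more camera/subject motion
    noise_aug_strength=0.02,   # how far output may drift from the anchor
).frames[0]

export_to_video(frames, "shot_12.mp4", fps=7)
```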