
About the Project
In the post-capitalist era, individuals frequently face the challenge of reconnecting with a distant self. This AI film combines Carl Jung's concept of the unconscious with the theory of the permeable self-object to explore how individuals integrate with their external environment. It is a profound exploration of inner experience, focusing on the indistinct emotions and objects rooted in memories and dreams, and revealing the contradictions and dynamics of personal emotion and experience through their hidden, interdependent relationship with the external world. The film draws on Narrative Therapy from postmodern psychotherapy and on the creative methods of automatic writing and the "Cadavre Exquis", using AI image-generation models to regenerate and restructure stories from personal experience. Through video, the project seeks to explain ourselves and to open a deeper gateway for connecting with the world.
As Susan Kassouf (2024) described, “Permeability is neither subject nor object-oriented, but rather both and more, exploring and destabilizing, without necessarily deconstructing or dismantling, the spaces between and within.” The film takes this theory as its foundation. While it retains a focus on human experience, it is structured to move through memory, dreams, and the subconscious in sequence within the realm of wholeness and permeability, and to explore, from a positive perspective, how the self can be better understood as circulating and interacting between the internal and external worlds. I encourage the audience to view and reflect on the project's audiovisual content through a lens of permeability and wholeness.
AI Tools & Workflow
For the character sequences I chose live-action shooting, then composited the footage in After Effects according to the pre-written script. For sequences that are more abstract, or that could not be produced quickly with live action, I used 3D modelling and rendering to generate the pre-videos instead. These pre-processed videos were then fed into StreamDiffusion for AI stylization and output. Finally, I used ElevenLabs for the vocal dubbing and Suno for the background music.
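The workflow above can be sketched as an ordered pipeline. This is only an illustrative outline of the stage ordering, not working integration code: every function here is a hypothetical placeholder, since the actual compositing, stylization, dubbing, and music generation happen inside After Effects, StreamDiffusion, ElevenLabs, and Suno respectively.

```python
# Illustrative sketch of the production pipeline's stage order.
# All functions are hypothetical stand-ins for external tools.

def composite_live_action(shot: str) -> str:
    """Stand-in for After Effects compositing of a live-action shot."""
    return f"composited({shot})"

def render_3d(scene: str) -> str:
    """Stand-in for 3D modelling/rendering of an abstract sequence."""
    return f"rendered({scene})"

def stylize(pre_video: str) -> str:
    """Stand-in for StreamDiffusion AI stylization of a pre-video."""
    return f"stylized({pre_video})"

def build_film(live_shots: list[str], abstract_scenes: list[str]) -> dict:
    # Both live-action composites and 3D pre-renders become pre-videos...
    pre_videos = [composite_live_action(s) for s in live_shots]
    pre_videos += [render_3d(s) for s in abstract_scenes]
    # ...which are all stylized by the same AI pass.
    video = [stylize(v) for v in pre_videos]
    # Voice (ElevenLabs) and music (Suno) are layered on afterwards.
    return {"video": video, "voice": "elevenlabs_dub", "music": "suno_track"}
```

For example, `build_film(["shot1"], ["scene1"])` routes the live shot through compositing and the abstract scene through 3D rendering before both reach the stylization stage.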