Current
Keywords: Live Streaming, Volumetric Cinema, AI, Deep Fakes, Personalized Narratives.
In the contemporary contestation over algorithmically recommended content, the screen time spent scrolling between livestreams has become a new form of cinema. Current experiments with AI image processing and volumetric environment reconstruction techniques to depict a future in which every past account has been archived into an endless stream. Made in 2019, Current is a volumetric film encompassing front-page stories that happened around the world that year and were broadcast in real time through livestream media technologies.
Livestream is a new form of moving image in which content is generated and broadcast simultaneously. Its real-time quality gives rise to an attention economy that circulates values distinct from traditional moving-image media such as film and television. First, it encompasses extraordinary moments alongside an infinite feed of the mundane, suggesting a sense of ‘truth’ to its audience. Second, instead of requiring the audience to sit through a standardised running time, the mundane quality of livestream allows them the freedom to step in and out of the stream at any moment. Third, it enables a participatory authorship, in which the interaction between audience and streamer collaboratively directs, narrates and curates the experience.
Volumetric cinema is the perception of information in three-dimensional space. Instead of compressing our 3D world onto a 2D plane, technologies such as point clouds and 3D reconstruction record and project data in 360 degrees, minimising the reduction in complexity of the image data. It is a form of cinema that is immersive as well as expansive: there is no negative space in any scene, and there is no behind the camera. When coupled with livestream, it has the potential to preserve every detail of every past occurrence at full scale, directing a new way of perceiving, from planar to global, flat to volumetric.
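As a minimal illustration of this shift from a flat image to volumetric data, the sketch below back-projects a single RGB-D frame into an XYZRGB point cloud using a pinhole camera model. The intrinsics (fx, fy, cx, cy) and the toy frame are assumptions for demonstration only, not the capture setup used in Current.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map (metres) and its RGB frame into an
    XYZRGB point cloud using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # horizontal ray scaled by depth
    y = (v - cy) * z / fy            # vertical ray scaled by depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0         # drop pixels with no depth reading
    return points[valid], colours[valid]

# Toy frame: a 4x4 depth map one metre away, mid-grey colour.
depth = np.ones((4, 4), dtype=np.float32)
rgb = np.full((4, 4, 3), 128, dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, rgb, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(pts.shape, cols.shape)  # (16, 3) (16, 3)
```

Each pixel becomes a point positioned in space rather than on a plane, which is the sense in which nothing in the scene is ever behind the camera.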
The outsourcing of imagination to AI can most readily be observed in the cultural phenomena of deep fakes and deep dreams. The project experimented with generative adversarial networks (GANs) and autoencoders to simulate visuals that are uncanny to the mind. These neural networks allow the compositing of multiple visual data inputs, generating infinitely long single takes that redefine the cinematic cut. Along these lines, ‘Current’ seeks to configure a new aesthetic vocabulary of cinematology, expanding the spectrum of aesthetic semblance and intelligence and questioning truth and identity in contemporary urban phenomena. Such image content can in turn be personalised for the training of AI, allowing machine vision to learn the clustering of information and the semantic labelling of objects within the moving image.
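To suggest how an autoencoder can dissolve the cinematic cut, the sketch below interpolates between the latent codes of two frames so that every intermediate code decodes to a new in-between image. It is not the project's actual networks; the architecture, frame size and latent dimension are assumed purely for illustration.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Toy fully-connected autoencoder: flattens a small greyscale frame,
    compresses it to a latent code, and reconstructs it."""
    def __init__(self, frame_pixels=64 * 64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, frame_pixels), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FrameAutoencoder()
frame_a = torch.rand(1, 64 * 64)   # stand-ins for two source frames
frame_b = torch.rand(1, 64 * 64)

with torch.no_grad():
    z_a, z_b = model.encoder(frame_a), model.encoder(frame_b)
    # Walk the latent space: every intermediate code decodes to a new frame,
    # so two shots blend into one continuous take with no hard cut.
    for t in torch.linspace(0.0, 1.0, steps=5):
        blended = model.decoder((1 - t) * z_a + t * z_b)
        print(float(t), blended.shape)
```

A trained model would decode these intermediate codes into plausible frames, turning a cut into a continuous morph between sources.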
Current experimented with a range of digital technologies that are readily available to any individual (e.g. livestream data, machine learning, 3D environment reconstruction, ubiquitous computing, point clouds). It developed a production pipeline using distributed technologies, providing a means for individuals to reconstruct, navigate and understand event landscapes that are often hidden from us, such as violence in protests, changes in Nordic animal behaviour, or the handling of trash. History, from the Latin ‘historia’, means the art of narrating past accounts as stories. What will be the future of our urban environment if every single event is archived in real time with such accuracy that there is no room for his-story? This implies an economy of values with potential in multiple streams beyond social media, as the content deep-learns from itself.
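One hypothetical stage of such a pipeline is sampling frames from a public livestream so that they can be handed to reconstruction or machine-learning stages. The sketch below assumes a placeholder stream address and an arbitrary sampling stride; it is an illustration of the approach, not the project's production code.

```python
import cv2

# Hypothetical stream address; any HLS/RTMP URL that OpenCV can open works.
STREAM_URL = "https://example.com/live/stream.m3u8"

def sample_stream(url, every_n_frames=30, max_samples=10):
    """Pull frames from a livestream at a fixed stride so they can be passed
    to the next pipeline stage (e.g. 3D reconstruction or segmentation)."""
    capture = cv2.VideoCapture(url)
    samples, index = [], 0
    while capture.isOpened() and len(samples) < max_samples:
        ok, frame = capture.read()
        if not ok:
            break                        # stream dropped or ended
        if index % every_n_frames == 0:
            samples.append(frame)        # keep one frame per stride
        index += 1
    capture.release()
    return samples

if __name__ == "__main__":
    frames = sample_stream(STREAM_URL)
    print(f"collected {len(frames)} frames for reconstruction")
```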
Acknowledgements
Current would like to acknowledge Artem Konevskikh, who joined us in delivering a better current in 2020.











