Following consultation and delivery, I would be able to travel with the neural network and perform with it for live audiences anywhere. It will have been trained on video and audio that I make myself, and I can continue to feed it new material. It will receive live video input from my face, which will act as a controller, and the resulting image/video can be projected. It will synthesise video and audio simultaneously, and the two will be interrelated. There will also be scenes in which the parameters change over time, with different interactions and rules.
A programmer with experience in deep learning for video, machine learning, creative computing, and neural networks in live performance would be ideal.
I don't expect to perform with the AI until spring 2020, given the amount of video I want to provide it with and the experiments I want to try.
I am a composer-performer based in Oslo, undertaking artistic research at the Norwegian Academy of Music.
If hired, you will need to be willing to send a copy of your ID/passport to the Norwegian Academy of Music and sign an employment agreement with them.
About the recruiter: Yuda Iswanto, from Arizona, United States. Member since Nov 11, 2022.