The first three are videos and images that I generated partly with AI, with touch-ups in Photoshop. They were for a pitch project I collaborated on with @martes.studio + @wearevampiro. The idea was to show a cyber-futuristic mockup of what the real video would look like. The last video was a test to demonstrate to Martes Studio what could be achieved with Runway’s video-to-video function.
The other day, I walked past an optician’s shop on Jardinets de Gracia, and all the advertisements for their glasses had been generated with Midjourney (I assume the agency got rid of the photographer and the models and just added the glasses in Photoshop). I still notice it, but I guess most people don’t, and in two years, maybe much less, I think it will have advanced so far that I won’t notice either, and I’ll swallow everything without question. We’re in for a ride. Something is coming that we can’t even imagine.
Of course, I now let ChatGPT translate these texts into English. I no longer make the effort to translate anything myself; I delegate that task to ChatGPT. I also ask it to review my texts and remove redundancies. As for the scripts, I write them and then pass them to ChatGPT to format them correctly and clean them up.
I’m cloning myself in 3, 2, 1 … see you!