2/29/2024

How and where did you first learn to edit?
I attended Penn State University as an undergrad. I was initially a Microbiology major, but thought it would be fun to take an introduction to film production course as a sophomore. This was back in 1996/97, so I initially learned to edit on a VHS to VHS machine. We also learned how to splice actual film together on a Moviola. I quickly fell in love with production and editing and ultimately switched into the Film & Video program.

How do you begin a project/set up your workspace?
I like to start each scene by using the last take of every shot to quickly and roughly edit the scene. Once I have a sense of the sections of each shot that I will likely be using in the edit, I build a sequence that contains all takes of each of those sections, back-to-back, so I can select the best portions of each take to use in the edit. After trying many different processes over the years, I find this method to be the most efficient way to edit a scene.

Tell us about a favorite scene or moment from this project and why it stands out to you.
SHARE? is filmed entirely from only one fixed camera angle. I think when people hear that, they think it must have been incredibly easy to make, but the exact opposite is true, with regard to both production and post-production. In the story, all characters are trapped alone in their own rooms, and are able to communicate with each other only through a rudimentary computer in their wall. While shooting, it was very important to me that the actors could interact with each other in real time, so we built three identical sets next to each other on a stage. Equally important was their ability to see each other, as well as the need to establish fixed eyelines to each of the elements on their screen, without which the reality of the movie would have been destroyed.

In order to achieve this, each set had a fixed camera integrated into a visual communication system that we created using Interrotrons (essentially, two-way teleprompters) connected to a live switching system. This allowed not only me, but each actor looking at their teleprompter, to see a previsualization of the finished scene, one that included not only the live feed of the cameras in the other rooms, but also the computer interface as they typed and interacted with it. Motion Graphics Designer Phil Aupperle used After Effects to create the computer interface, which we used both as previs and for the finished scene. The first time I saw that this system actually worked on set was incredibly exciting and relieving, as up to that point its functionality was completely theoretical.

To solve this in post, assistant editor Christian Whittemore created a multi-camera sequence for each shot that was composed of 3 nested layers. Each nest included the main image for a character, as well as the smaller PIP windows that character would see. This meant we often were playing 9 layers of HD ProRes LT video simultaneously. We never experienced any performance issues in Premiere with these multi-camera sequences, as we were editing on fast SSD drives. I then used these multi-camera sequences to make the initial rough edit of each scene, and then we used transcription to help us segment every take so I could choose the best main-window performance for each cut. Sometimes the performances of the actors in the PIP windows in those takes happened to be the best, but most of the time I'd find better individual performances in other takes, so we then decomposed all of the multi-camera clips into their component shots and replaced those PIP takes with others.
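The claim that nine simultaneous layers of HD ProRes LT play back smoothly from fast SSDs can be sanity-checked with back-of-envelope arithmetic. This sketch is not from the interview; it assumes Apple's published figure of roughly 102 Mbit/s per ProRes 422 LT stream at 1920x1080, 29.97 fps (actual project frame rate and data rates may differ).

```python
# Rough playback-bandwidth estimate for the multicam setup described above.
# Assumption (not stated in the article): ~102 Mbit/s per HD ProRes 422 LT
# stream, per Apple's ProRes white paper for 1080p29.97.

STREAM_MBPS = 102   # approximate target data rate of one ProRes LT stream
LAYERS = 9          # main image + PIP windows across the nested sequences

total_mbps = STREAM_MBPS * LAYERS       # aggregate megabits per second
total_mb_per_sec = total_mbps / 8       # convert megabits to megabytes

print(f"Aggregate read rate: {total_mbps} Mbit/s "
      f"(~{total_mb_per_sec:.0f} MB/s)")
```

At roughly 115 MB/s aggregate, this sits well under the ~500 MB/s a SATA SSD sustains (and far under NVMe rates), which is consistent with the editor seeing no performance issues.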