GT CS Alum Albert Shaw poses with 'Dorothy' at The Wizard of Oz at Sphere.

Q&A: CS Alumnus Lifts the Curtain on 'The Wizard of Oz at Sphere'

Earlier this year, we learned that Georgia Tech alumni played a role in bringing the 1939 classic The Wizard of Oz to the Las Vegas Sphere's 160,000-square-foot interior screen. 

Albert Shaw (CS 2016, MS CS 2017) was among the small group of computer science alumni who lent their expertise to help "reconceptualize" the film for the August 28 premiere of The Wizard of Oz at Sphere.

Following last month’s premiere, Shaw, a senior machine learning researcher with Google, shared his experience and some behind-the-scenes details about what it took to bring Dorothy’s adventures to life in 16K resolution.

Georgia Tech Alumnus Albert Shaw (CS 2016, MS CS 2017) helped tackle some of the technical challenges of bringing The Wizard of Oz to the Las Vegas Sphere. Photos courtesy of Albert Shaw

The Wizard of Oz at Sphere: What is it in your own words, and why is it a big deal?

The Wizard of Oz was one of the iconic pioneers of Technicolor filmmaking, so it's a bit poetic that we were able to adapt it to the unique Sphere experience.

This project wouldn't have been possible two years ago. To me, it's truly been incredible to see the very cutting edge of technology and artistry come together to create this amazing experience that transports you into the Land of Oz.

What did you contribute to the project directly and/or indirectly? How did it push you professionally, and were there any moments of "wow" or "wonder" that are special to you?

Last year, my teammate at Google, Meera Hahn (Ph.D. CS 2022), and I were working on an out-painting model (an AI-based technique for expanding images beyond their original borders) when our manager, Steven Hickson (Ph.D. CS 2020), told us about a project with The Wizard of Oz. We were all like, "That's crazy! Is it even possible? That resolution is insane!" We didn't think it would all happen or that it was technically possible. It's been a crazy journey to get where we are now.

For my part, Meera and I took the out-painting model we had been developing and specialized it for the film. This involved fine-tuning it on the original movie and characters, plus tackling a lot of new problems, like figuring out how to make something significantly bigger while keeping motion and characters consistent.
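For readers curious about the mechanics, here is a minimal out-painting sketch using the open-source diffusers library. It illustrates the general technique only, not the model Shaw's team built; the checkpoint name, frame path, and prompt are placeholder assumptions.

```python
# Minimal out-painting sketch: place the original frame on a larger canvas,
# mask everything outside it, and let an inpainting model fill the new area.
# NOT the production model used for the Sphere; names/paths are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # any inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("frame.png").convert("RGB")  # hypothetical source frame
pad = 256  # pixels of new content to generate on each side

# Paste the original frame into the middle of a larger canvas...
canvas = Image.new("RGB", (frame.width + 2 * pad, frame.height + 2 * pad))
canvas.paste(frame, (pad, pad))

# ...and build a mask that is white (regenerate) everywhere except the
# original pixels, which are black (keep unchanged).
mask = Image.new("L", canvas.size, 255)
mask.paste(Image.new("L", frame.size, 0), (pad, pad))

result = pipe(
    prompt="1939 Technicolor film set, matching the original scene",
    image=canvas.resize((512, 512)),
    mask_image=mask.resize((512, 512)),
).images[0]
```

Fine-tuning such a model on the original film, as Shaw describes, is what keeps the newly generated borders consistent with the movie's characters and Technicolor look.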

[RELATED: Lions, Tigers and Tech—Oh My! Alumni Help Dorothy Debut in Ultra-HD at Sphere]

This project really showed me the huge gap between research and application. It also raised a number of new research questions that we've been exploring. We then worked to scale up the model so a whole team could use it to process the entire movie. We even got to run many of the shots ourselves, from testing to custom workflows for some of the most challenging scenes.
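As a rough illustration of what scaling a model up "so a whole team could use it to process the entire movie" can look like, the sketch below fans shot-level work out to parallel workers. The expand_shot helper and the directory layout are hypothetical, and a production pipeline would run on a cluster scheduler rather than a single machine.

```python
# Illustrative sketch of processing a film shot-by-shot in parallel.
# All names and paths here are hypothetical stand-ins.
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

def expand_shot(shot_dir: Path) -> Path:
    """Run the out-painting model over every frame of one shot (stub)."""
    out_dir = shot_dir.parent / (shot_dir.name + "_expanded")
    # ... load frames, run the model with shot-consistent conditioning,
    #     write the expanded frames to out_dir ...
    return out_dir

if __name__ == "__main__":
    shots = sorted(Path("movie_shots").iterdir())  # one directory per shot
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(expand_shot, s): s for s in shots}
        for fut in as_completed(futures):
            print(f"done: {futures[fut].name} -> {fut.result()}")
```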

Working directly with the Magnopus creatives in Los Angeles was one of the most amazing experiences I had. We built a great workflow: we'd improve what the artists gave us, they'd refine our outputs, and those results fed back into the models so we could repeat the process.

Seeing the scenes I worked on, first on the test Sphere and then on the real Sphere, was jaw-dropping. You can't really understand how big the screen is until you're there in person.

The Wizard of Oz at Sphere. Image courtesy of Google DeepMind

In the back of my mind, I was always thinking, "I really hope this all works out." It wasn't until I saw the first scarecrow scene (my favorite scene in this version) on the full Sphere, after all our back-and-forth iterations, artist touch-ups, and compositing with full CG renders, that I was like, "This is really going to work!" It's incredible what everyone achieved together.

Starting in May, I also gained a broader understanding of the entire process by serving as an ML Tech Lead, supporting other workflows for the Super Resolution and Performance team. I did a lot of debugging and putting out fires, helped with the final touches across the various tracks, and worked with Magnopus to ensure everything fit together.

The work everyone did is truly groundbreaking. Seeing it all come together with the practical effects at the premiere was just sublime. I even got one of the sought-after foam apples!

From a technology/AI standpoint, in lay terms, what stands out to you?

What amazes me is how quickly we were able to make this happen, and how it wouldn't have been possible if we had started even a year earlier. It's been exciting to be in the middle of this revolution in video and other generative models. Even so, bringing them into professional filmmaking at this unprecedented 16K resolution, with the quality and character consistency we achieved, was remarkable.

Most base video generation models currently target around 720p, so I'm most proud that we were able to accomplish this at 16K, working together with all the creatives and everyone else on this project.
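To give a sense of how output from a 720p-class model can be pushed toward far higher resolutions, here is a toy sketch of tiled upscaling, one common approach: the frame is split into overlapping tiles, each tile is upscaled independently, and the results are blended back together. The upscale_tile function below is a placeholder for a real super-resolution model; this is an assumption about the general technique, not a description of the actual Sphere pipeline.

```python
# Toy sketch of tiled upscaling. Very high target resolutions exceed what a
# model can process in one pass, so the frame is split into overlapping
# tiles, upscaled tile-by-tile, and averaged where the tiles overlap.
import numpy as np

def upscale_tile(tile: np.ndarray, scale: int) -> np.ndarray:
    # Placeholder: nearest-neighbor repeat stands in for a learned model.
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)

def tiled_upscale(frame: np.ndarray, scale=4, tile=256, overlap=32):
    h, w, c = frame.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float32)
    weight = np.zeros((h * scale, w * scale, 1), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            patch = frame[y:y + tile, x:x + tile]
            up = upscale_tile(patch, scale).astype(np.float32)
            ys, xs = y * scale, x * scale
            out[ys:ys + up.shape[0], xs:xs + up.shape[1]] += up
            weight[ys:ys + up.shape[0], xs:xs + up.shape[1]] += 1.0
    # Average the overlapping regions, then restore the input dtype.
    return (out / np.maximum(weight, 1.0)).astype(frame.dtype)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # a 720p frame
big = tiled_upscale(frame, scale=4)               # -> 2880 x 5120
```

In practice a learned super-resolution model replaces the placeholder, and the overlap is blended with soft weights to hide seams; the tiling idea itself is what makes extreme resolutions tractable.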