
Nvidia and the End of Movies as We Know Them

GTC21, this year's Nvidia GPU Technology Conference, was terrific as always.

The stated focus was on AI and autonomous cars. But as I watched presentation after presentation from people in the entertainment industry, I realized that if you put several of these elements together, they effectively predicted the end of movies as stand-alone, costly entertainment.

Instead, they forecast the emergence of movies as a linear, highly automated process: beginning with an idea, transitioning through books and games, and eventually becoming custom entertainment that could be different for everyone watching, individually or in small groups. I think theaters will make less and less sense as we transition into this blended mixed-reality, AI-driven, virtual movie future.

We are going to end with my product of the week, which this time isn't something you can yet (or possibly ever) buy, but it is the most extraordinary SUV I've ever seen, showcased by Audi at Nvidia's GTC.

Let's get started.

Movies, Actors, Movie Theaters Become Obsolete

What got me thinking of the end of movies was a technology Nvidia announced Friday called GANverse3D. It allows you to create realistic moving objects, like cars, and introduce them into a film in minutes.

An obvious early use: suppose GM was the named sponsor in a movie but, for some reason, you needed to replace it with Ford. With GANverse3D, you should be able to swap each featured GM vehicle for its Ford counterpart in a few clicks.

What makes this possible is that the technology can take a flat 2D image of a car, infer a textured 3D model from it, and automatically give that model realistic movement. You instantly have a realistic-looking, moving automobile, with no 3D modeling software or experience required.
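To make that workflow concrete, here is a minimal sketch of what a GANverse3D-style swap might look like in code. Every function name below is a hypothetical placeholder; Nvidia has not published a public API for this, so treat it as an illustration of the steps, not the real interface.

```python
# Hypothetical sketch of a GANverse3D-style workflow: one 2D photo in,
# an animated 3D object out. None of these functions are Nvidia's actual
# API; they are placeholders for the steps described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Mesh3D:
    """A textured 3D model inferred from a single 2D image."""
    vertices: List[tuple] = field(default_factory=list)
    texture: bytes = b""

def infer_3d_model(photo_path: str) -> Mesh3D:
    # In the real system, a GAN trained on multi-view renders would
    # reconstruct geometry and texture from the one photograph.
    return Mesh3D()

def animate(model: Mesh3D, behavior: str) -> List[Mesh3D]:
    # Attach a motion profile ("car", "truck", ...) so the object
    # moves realistically without hand-keyed animation.
    return [model]  # one posed model per frame in a real pipeline

def composite(scene_frames: list, object_frames: list) -> list:
    # Drop the animated object into the existing footage.
    return [(s, o) for s, o in zip(scene_frames, object_frames)]

# Swapping a sponsor's car then becomes a three-call job:
ford = infer_3d_model("ford_truck.jpg")
motion = animate(ford, behavior="car")
final_cut = composite(scene_frames=["shot_001"], object_frames=motion)
```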

Now, you can't do people yet, but that is coming, and it means every object in a movie could be rapidly changed while retaining movement and realism. Granted, if you replaced a sports car with a tank, you'd get a tank with implausible performance, so you'd also have to change the logic governing the tank's movement. Still, even that could likely be done with little additional work.

But it was Vicki Dobbs Beck from ILMxLAB who caught my attention. She described a future where an AI could automatically convert a book into a script, expand that script into a storyboard, render that storyboard as an animated movie, and upscale that animated movie into a photorealistic final product.

We aren't there yet, but I feel that this capability isn't decades off. It is more like single-digit years in our future.

Kathryn Brillhart, director of virtual production at USC Cinematic Arts, introduced me to LED walls. During production, these walls (one of which serves as the ceiling) define the stage and display the digitally created world the movie is set in. This places the actors and primary props on a virtual stage that looks like a real location. And since the images are rendered, that virtual location could be any place, real or imagined, in the universe.

It is a ton easier to act like you are someplace if it feels like you are there, rather than in front of a generic green screen. With these vast LED walls and ceilings supplying the ambiance, actors can virtually travel to any place in the world or universe, imagined or real.

Blended into this are Nvidia's Omniverse and Omniverse Machinima efforts, which allow you to make movies with game engines. Nvidia's ray-tracing and AI capabilities can upscale game-level images into photorealistic alternatives, providing a low-cost way to take a rapidly created, game-quality image and turn it into a finished shot.

Couple this upscaling capability with AI, and non-player characters (NPCs) effectively become extras, if not actual actors, in the resulting movie or TV show, depending on the level of AI you are using.
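To picture the upscaling step concretely, here is a minimal sketch of a frame-by-frame loop, assuming a hypothetical super-resolution model. It stands in for whatever Omniverse's actual tooling does; it is not Nvidia's API.

```python
# A minimal sketch of the game-footage-to-photorealism idea: render frames
# with a game engine, then push each one through a learned upscaler.
# `PhotorealUpscaler` is a hypothetical placeholder, not Nvidia's actual
# Omniverse or DLSS interface.

class PhotorealUpscaler:
    def enhance(self, frame):
        # A real model (a GAN or diffusion-style upscaler) would add
        # detail, lighting, and texture; this stub passes frames through.
        return frame

def upscale_capture(game_frames):
    """Turn captured game-engine frames into photoreal-looking footage."""
    model = PhotorealUpscaler()
    return [model.enhance(frame) for frame in game_frames]

movie_frames = upscale_capture(["frame_0001", "frame_0002"])
```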

Now let's put this all together and talk about a day in the life of a future movie producer/director/actor/owner.

Creating the Movie of the Future

You have an idea for a story and write a one-page outline, including the key plot pivot points. You feed it to a literary AI, which takes that summary and writes an initial draft novel from it. You read the novel, provide feedback on what you like and dislike, and feed that back to the AI, which revises and completes the novel. You decide you want to turn the novel into a movie, so you tell the AI to convert it into a script; from the script, the computer creates a storyboard. The storyboard is then used to pitch a new game based on the story.
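As a thought experiment, that chain could be wired together as a simple pipeline. In the sketch below, `generate` is a stub standing in for any text-generation model; the prompts and function names are my own illustration, not an existing tool.

```python
# A minimal sketch of the idea-to-storyboard pipeline described above.
# `generate` stands in for any text-generation model; it is stubbed out
# here so the example runs, but in practice it would call an LLM.

def generate(instruction: str, source: str) -> str:
    # Placeholder: a real implementation would send the instruction and
    # source text to a language model and return its output.
    return f"[{instruction}] -> {source[:40]}..."

def idea_to_storyboard(outline: str, feedback: str = "") -> str:
    novel = generate("Expand this outline into a draft novel", outline)
    if feedback:  # the author reads the draft and steers the AI
        novel = generate("Revise the draft per these notes: " + feedback, novel)
    script = generate("Adapt this novel into a screenplay", novel)
    return generate("Break this screenplay into storyboard panels", script)

pitch = idea_to_storyboard("A heist on a terraformed Mars colony.",
                           feedback="More tension in act two.")
print(pitch)
```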

Currently, there is an effort to use Nvidia technology to create a virtual planet Earth. That virtual Earth becomes the virtual set, first for your game, then for your movie. The AI generates a game, and users play it with goals matched to the script elements and measured against the movie's proposed plot. That gameplay is captured both from camera perspectives and from each player's perspective, recording the entire gestalt of the event as raw material for virtual movies.

You will still have to find someone to market and sell your game, and you need the game to get to the movie because the players may become your actors -- and you could end up with multiple takes on the movie depending on which player's perspective is more interesting. As in today's role-playing games (RPGs), the gameplay will follow your story, but the interactions between these players and NPCs will be captured, and the more the players stay in character, the higher those interactions will be ranked.

You could then, if you wanted, make the highest-ranked players' runs viewable, much as people watch Twitch today, and let people vote on which they like best. The winning video would then either be turned into an animated movie or further upscaled into something photorealistic. Players who made it through this process would be compensated for it, would likely become experts over time, and would form the next generation of actors -- with one huge difference.
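Mechanically, that rank-and-vote step is easy to picture. Here is a minimal sketch that assumes two made-up inputs, an in-character score from the gameplay capture and a viewer vote share; the 70/30 weighting is arbitrary and purely illustrative.

```python
# A minimal sketch of ranking player runs: combine an assumed in-character
# score from the gameplay capture with viewer votes, then pick a winner.
# The 70/30 weighting is arbitrary and purely illustrative.

def rank_runs(runs):
    """runs: list of dicts with 'player', 'in_character', and 'votes' keys,
    both scores normalized to the 0-1 range."""
    def score(run):
        # Staying in character dominates; audience votes break near-ties.
        return 0.7 * run["in_character"] + 0.3 * run["votes"]
    return sorted(runs, key=score, reverse=True)

runs = [
    {"player": "ayla", "in_character": 0.92, "votes": 0.40},
    {"player": "finn", "in_character": 0.71, "votes": 0.95},
]
winner = rank_runs(runs)[0]
print(f"Send {winner['player']}'s run to the upscaler.")
```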

That difference is that they would have, through mixed reality, experienced the movie as if they were in it rather than acting against a green screen or with cameras and takes. I think it would make acting for movies a ton more fun.

Why Acting Sucks

When I was younger, I spent four years trying to be an actor. I was a good-looking kid, good enough to be a model, and living near Hollywood, I knew many folks in the business. But I found I couldn't get past two things: the constant rejection you face when you are starting out; and the fact that once I saw how movies were made, I no longer enjoyed watching them.

That's on top of the other problems the industry is infamous for, like infidelity, substance abuse, and misogyny, plus the thing I later decided was most troubling: people mix you up with the characters you've played, good or bad. They often can't separate who you are from who you played on screen. I can imagine the heartbreak of falling in love with someone only to realize they don't love you; they love the character you played in a TV show or movie.

Back to Next Generation Movies

But this way, the progression to becoming an actor goes through playing the game, which should, assuming it is done correctly, be a ton more fun than standing in line for a cattle call. Also, in the game you are playing yourself: you may carry the name of a character, but it is you living the role, not you trying to impersonate someone else.

Granted, this would play better for movies like "Star Wars" or "The Avengers" than it would for a movie like "Titanic" -- though I can certainly imagine people wanting to play a game where they were the leads in that film.

You'd also potentially get different versions: for instance, versions where the gender roles were reversed, or where relationships shifted from heterosexual to homosexual, appealing to different audiences -- with the starring roles played by people whose own orientation matched the character's changed orientation.

I can even imagine a version of the movie being generated on the fly just for you, based on your past preferences -- and there we have my death-of-the-theater outcome, because theaters would not work if people prefer custom-tailored movies, which they should, over the existing standard format.

Wrapping Up: It'll Take a While

This unique viewer-focused outcome isn't going to happen tomorrow, but I think it becomes possible within a ten-year window and is virtually certain within twenty years. Pretty much every industry remotely connected to technology will undergo massive change during this same period, with AI either enhancing or replacing people in critical jobs.

But this approach would massively increase engagement between those watching movies and those making them, given that one will bleed, through gaming, into the other. It should also improve the quality of the movies and reduce the pain of being an actor. I've gone from wanting to be an actor to just feeling sorry for the folks who still pursue it.

Indeed, the outcome would be better, faster, cheaper, and far more customized and engaging than what we have today. Oh, and this is likely an interim step on our way to living in virtual worlds that we may choose never to leave.

Last week at GTC, I saw the future, and I'm kind of excited about what is coming because maybe I can have my acting cake and eat it too.

Rob Enderle's Technology Product of the Week

The Audi AI: Trail Prototype

I've been into cars since I was a kid. My first car was a souped-up Chevy Impala Super Sport that never seemed to have enough power, and I'm currently fulfilling a childhood wish by building a restomod Jaguar XKE with a ton of power.

I have followed Audi's autonomous driving efforts more than most and was invited early on to observe some of the company's initial success with its self-driving Audi TT. Audi's presentation at Nvidia's GTC was one of the best car-technology presentations, in terms of content, performance, and graphics, that I've ever seen. When they got to the prototype cars, I would have bought one on the spot had they been available.

For me, the most exciting of the bunch was the AI: Trail, a fully autonomous (Level 5) vehicle for off-road use. Equipped with what appeared to be airless tires (they are coming), an aggressive, futuristic design language (think the new "Lost in Space"), and three autonomous drones, this thing looked ready to tackle the most aggressive off-road adventures on Earth or Mars.


With the massive torque of its electric motors, the drones blazing the trail night or day, and the autonomous systems handling the tricky bits, you could go pretty much anyplace on land accessible by any vehicle, including military-class machines.

Audi doesn't have the best reputation for bringing designs this aggressive to market, but by 2025 the company plans to change how it designs and builds cars. Moving from designing vehicles from the outside in to the inside out should result in something far closer to the AI: Trail. So there is hope that this fantastic vehicle will show up -- and if it does, unless something better arrives by then, I'll be buying another Audi.

I fell in love with Audi's prototype AI: Trail, so it is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.




