What was the last original movie you saw? Did you see it at the cinema? Or maybe on Netflix? As it turns out, of the top ten US domestic blockbusters of 2018, eight were sequels and the remaining two were remakes. 2019 looked similar, with a single original blockbuster, Us.

And yet the fastest growing market for VFX/animation is not movies, but series. In fact, many of the most popular streaming shows are VFX-heavy.

A great deal of money is being invested, but proportionally even more content is being requested, and VFX budgets do not appear to grow in step with the ever-higher expectations for quality and quantity. This can be especially true for series, where the content being produced targets very specific audiences and the dollars-per-minute figure has very real limitations.

So, in VFX and animation, producers have been looking for fat to trim, and asking whether it’s necessary to rebuild every VFX element for every sequel. It’s true that making assets and materials from scratch is much easier than it used to be, thanks in part to improved tools – such as the ones I work with in my day job. At the time of writing this piece, however, this still takes time, money and specialized skills.

Can we reuse designs, models, and textures from a few years ago? The answer should be ‘yes’, but this is a lot more difficult than it may initially sound.

For a start, reusing old assets can cause quality issues. Looking back at some movies we used to love, it doesn’t take many years for them to show their age. As we become used to better imagery, our eyes, along with merciless new HDR, HD, and (the horror!) motion-smoothing display devices, reveal the quaintness of models and textures that seemed just fine not so long ago.

Large franchises such as Star Wars can include sequels, prequels, spinoffs, TV series, games, VR experiences, toys, and more.

Images provided courtesy of Disney

Image © Chris Jelley

We have to face the challenge of making things reusable over time. This isn’t a problem that affects everyone, but it’s an old problem all the same. Increasingly, VFX teams need assets that work well even if they were made a long time ago, in a galaxy far, far away…

In the Star Wars franchise alone, Industrial Light and Magic has an incredible collection of designs and models, built over the course of decades. Many of them need to be used for future movies, a few series, and video games, not to mention toys and merchandise. Despite their sizable budgets, reinventing every wheel is a luxury they cannot afford. This problem is not unique to ILM, of course. Some studios have already begun working on digital asset libraries, also nicknamed digital backlots – the name is a tribute to the ‘physical’ backlots for special effects, which have been used for similar purposes for decades.

Backlots with old props and costumes were historically kept by most VFX studios in Hollywood.

Successful digital asset libraries need to solve several challenges.

Durability

Durability is the challenge proprietary libraries face most acutely. Will this asset still work in two to five years? What about ten? Can we use it for our sequel, spin-off, or remake? Will the technology we need to read this data still be available, working, and relevant? The most durable technologies in recent years have been those adopted and embraced by many users and vendors, as opposed to single-studio solutions, even when the assets themselves are proprietary. Relying on widely shared technology reduces the risk of in-house tools becoming obsolete, or of their sole maintainers leaving the company.

Portability

That very risk is one of the reasons why the attention paid to standardized exchange formats has been growing over the years. For a standard format to gain trust and momentum, being open source goes a long way; this explains the success of open source initiatives such as Sony’s OSL, Pixar’s USD, Lucasfilm’s MaterialX, Autodesk’s Standard Surface, the Khronos Group’s glTF, Nvidia’s MDL and Omniverse, Sony and Lucasfilm’s Alembic, and so on.

Here’s a slice of the impressive digital asset library developed for Toy Story 4, built entirely in USD.

https://video.tv.adobe.com/v/3419332

Footage provided courtesy of Disney Pixar.

Exporting to a portable format ensures a baseline of compatibility. But if the assets in the library came from a AAA game or a top VFX studio (which are paid to create effects that haven’t been seen before), exporting to open formats will be extremely lossy, both in terms of detail and in terms of proprietary technology such as shading, custom rigs, and dynamics. The best ingestion systems will anticipate this and provide an easy way to limit, control, or at least highlight this unavoidable loss for high-end assets.

For example, custom shading solutions may be baked down to textures, which requires a solid standard shading model and a solid texture format. Baking down to textures also requires UVs, which may not have been available to start with, depending on the type of procedural texturing being used (some studios use Ptex files, effectively sidestepping this issue).
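To make the idea concrete, here is a minimal, hypothetical sketch of the baking step: a stand-in procedural function is sampled over a UV grid and stored as a plain texture that any standard shading model could consume. The function, resolution, and names are illustrative assumptions, not any studio's actual pipeline.

```python
import numpy as np

def procedural_albedo(u, v):
    # Hypothetical procedural pattern (a checker plus noise) standing in for a
    # proprietary shading network evaluated at UV coordinates.
    checker = (np.floor(u * 8) + np.floor(v * 8)) % 2
    noise = 0.1 * np.random.default_rng(0).random(u.shape)
    return np.clip(checker * 0.8 + noise, 0.0, 1.0)

def bake_to_texture(resolution=1024):
    # Sample the procedural on a regular UV grid; the result is an ordinary
    # texture map, losing the procedural's editability but gaining portability.
    v, u = np.meshgrid(np.linspace(0, 1, resolution),
                       np.linspace(0, 1, resolution), indexing="ij")
    return procedural_albedo(u, v)

baked = bake_to_texture(512)
print(baked.shape)  # (512, 512); this array would then be written out as EXR/PNG
```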

Custom rigs are also a challenge. A common approach is to simplify them into bones and vertex weights. Pixar’s USD, for example, has introduced a skeleton schema in which transforms and skinning are defined, making it easy to bake rigs for crowds and background characters. These solutions don’t currently have a universally accepted, documented standard across studios, however.
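As a rough illustration, here is a minimal sketch of baking a rig down to USD's skeleton schema, assuming the pxr Python bindings are available; the joint names, single-point mesh, and identity transforms are placeholders rather than a real character setup.

```python
from pxr import Usd, UsdGeom, UsdSkel, Gf, Vt, Sdf

stage = Usd.Stage.CreateInMemory()
UsdSkel.Root.Define(stage, "/Character")

# The skeleton stores joint topology plus bind and rest transforms.
skel = UsdSkel.Skeleton.Define(stage, "/Character/Skel")
skel.CreateJointsAttr(Vt.TokenArray(["hips", "hips/spine", "hips/spine/head"]))
identity = Gf.Matrix4d(1.0)
skel.CreateBindTransformsAttr(Vt.Matrix4dArray([identity] * 3))
skel.CreateRestTransformsAttr(Vt.Matrix4dArray([identity] * 3))

# A trivial, one-point mesh bound to the skeleton with per-vertex joint
# indices and weights, which is all a downstream app needs for simple skinning.
mesh = UsdGeom.Mesh.Define(stage, "/Character/Mesh")
mesh.CreatePointsAttr([Gf.Vec3f(0, 0, 0)])
binding = UsdSkel.BindingAPI.Apply(mesh.GetPrim())
binding.CreateSkeletonRel().SetTargets([Sdf.Path("/Character/Skel")])
binding.CreateJointIndicesPrimvar(False, 1).Set(Vt.IntArray([0]))
binding.CreateJointWeightsPrimvar(False, 1).Set(Vt.FloatArray([1.0]))

print(stage.GetRootLayer().ExportToString())
```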

Quality

Assuming we can read an asset years later, will it still be good enough to reuse? How many of the models saved from a production can actually be salvaged over time is a real concern. Often, an asset that used to be good enough for the foreground will no longer be directly usable, but this doesn’t make it useless. A large amount of work can be saved if we can reuse all the models destined for the background, and only revamp and upgrade those that require extra attention. This makes assets that are easy to update particularly valuable. Overly specific technology for edge cases should be stripped out and simplified, making sure that what is stored is easy to re-rig or re-texture.

If you manage a curated commercial library, such as Substance Source, Adobe Stock 3D, TurboSquid’s StemCell, and so forth, quality is paramount to your business model. Many of the assets in your library will not have been created by you. How do you ensure quality? Maybe you screen them first, or perhaps you screen and train a number of pre-certified artists. Maybe you have a QA department just to ensure quality. Regardless, at some point you have to set guidelines and rules (one great attempt at this is CheckMate, also from TurboSquid). But how do you set such guidelines? And can quality even be measured? Might it be possible to use automated testing and quantifiable measures?

In some ways, yes. There are some bad things you can definitely catch. The most straightforward (and automatically testable) checks concern the geometry itself. Is it manifold? Are there isolated vertices? Is it composed of triangles, or quads, or a mix? Does it have UVs? Are they degenerate, or do they make reasonable use of UV space while keeping a relatively uniform pixel density?
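Here is a minimal sketch of what such automated checks might look like on a toy mesh representation (vertex positions, face index tuples, and UVs); the data structure and thresholds are assumptions for illustration, not a production validator.

```python
from collections import Counter

def check_mesh(vertices, faces, uvs=None):
    issues = []

    # Isolated vertices: referenced by no face at all.
    used = {i for face in faces for i in face}
    isolated = set(range(len(vertices))) - used
    if isolated:
        issues.append(f"{len(isolated)} isolated vertices")

    # Non-manifold edges: an edge shared by more than two faces.
    edges = Counter()
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            edges[frozenset((a, b))] += 1
    if any(count > 2 for count in edges.values()):
        issues.append("non-manifold edges")

    # Face composition: all-triangle or all-quad is usually preferred to a mix.
    sizes = {len(face) for face in faces}
    if not (sizes <= {3} or sizes <= {4}):
        issues.append(f"mixed or n-gon faces: {sorted(sizes)}")

    # UVs: present and not collapsed to a single point.
    if uvs is None:
        issues.append("no UVs")
    elif len(set(uvs)) == 1:
        issues.append("degenerate UVs")

    return issues

# Example: a single quad plus one vertex that nothing references.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (5, 5, 5)]
faces = [(0, 1, 2, 3)]
uvs = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(check_mesh(verts, faces, uvs))  # reports the isolated vertex
```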

Then there are grey areas where it’s harder to define ‘good’, and we settle for ‘consistent’ instead: hierarchy, names, number of UDIMs, and so on can be tested and measured against a given standard.
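For instance, a couple of toy consistency checks might look like the following; the naming convention and the UDIM pattern are made-up examples of what a studio standard could specify.

```python
import re

# Hypothetical convention: prim names in lowerCamelCase.
NAME_RE = re.compile(r"^[a-z][a-zA-Z0-9]*$")

def check_names(prim_names):
    # Return the names that violate the convention.
    return [n for n in prim_names if not NAME_RE.match(n)]

def check_udims(texture_files, expected_count):
    # Count distinct UDIM tiles (e.g. 'wood.1001.exr') and compare with the
    # number the asset sheet declares.
    udims = {m.group(1) for f in texture_files
             if (m := re.search(r"\.(\d{4})\.", f))}
    return len(udims) == expected_count

print(check_names(["chairBack", "Chair_Leg", "seat"]))     # ['Chair_Leg']
print(check_udims(["wood.1001.exr", "wood.1002.exr"], 2))  # True
```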

Finally, there are many artistic aspects of digital assets that are extremely hard to measure. Is the model ‘good’? Is it realistic? Are the textures skillfully executed? Is it culturally sensitive? Granted, some of these qualities can now be measured in some way with computer vision, via deep learning, but in practice they are usually deferred to users, via a rating system.

Predictability

Ok, so this asset used to look great in our old movie. Let’s bring it in for the sequel; we don’t have time to rebuild it from scratch. Oh wait – its colors are all messed up! What happened?

Something that has come to be better appreciated in VFX in the last few years is the importance of understanding the working color spaces of an asset’s textures, footage, or ‘primvars’. Systems and standards have existed in other industries for a very long time (ICC), but real standardization in VFX, especially across studios, has been lagging behind. Newer initiatives, such as ACES from the Academy of Motion Picture Arts and Sciences, paired with OpenColorIO, took this problem head-on a few years ago, with great results. Color space can be hard to fix retroactively on assets that were created before this was really understood and standardized, but leveraging breadcrumbs left in file metadata can help. Still, there are painfully few experts in this field, and many artist-facing tools provide just enough rope to hang yourself.
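As a small illustration, the sketch below converts a single texel between color spaces with OpenColorIO (the v2 Python bindings are assumed); the color space names come from an ACES config and differ between configs, so treat them as placeholders.

```python
import PyOpenColorIO as OCIO

# Resolve the active config (honors the $OCIO environment variable); the
# named spaces below only exist if an ACES-style config is in use.
config = OCIO.GetCurrentConfig()

# Build a processor from the texture's authored space to the working space.
processor = config.getProcessor("Utility - sRGB - Texture", "ACES - ACEScg")
cpu = processor.getDefaultCPUProcessor()

# Convert one display-referred sRGB texel into the scene-linear working space.
pixel = cpu.applyRGB([0.5, 0.2, 0.1])
print(pixel)
```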

Value

VFX and animation studios may not always see a cost-effective benefit from reusing digital assets from previous projects, but one could argue that they aren’t the primary beneficiaries of their own backlots anyway.

I had a chance to discuss this topic with my friend Dan Lobl, one of the talented VFX Supervisors at Industrial Light and Magic. In big companies with film franchises, the real benefit comes from licensing and leveraging intellectual property. Every VFX asset made for a film franchise is probably reused another hundred times for down-market purposes such as commercials, web intros, VR, video games, catalogs, merchandise, posters, theme parks, and so on. The producers and modelers at VFX companies don’t have the perspective or budget at the time of creation to consider an asset’s full lifespan, but in a perfect world, assets would be built with all of these uses in mind. It would be most cost-effective if asset standards could be adopted to enforce the creation of 3D content capable of being reused across all media for an entire franchise.

“I deal with problems around this constantly,” Dan said. “For example, say a small company wants to create a Millennium Falcon Christmas ornament. We have the VFX model in our backlot, but it is so complex, they don’t even own a computer capable of loading it. They also don’t have the budget to pay us to make a lighter version more in line with what they would require. I don’t have any specific solutions in mind, but I do try to encourage our modeling team to keep a mindset that VFX is only one percent of the uses their model is going to see.”

Purpose and Style

A good digital asset library will have features to capture many different versions of the same ‘entity’ as different assets designed for different purposes. A backlot might have 20 different ‘German Shepherd’ assets that are all meant to represent the same dog concept, but that are made for VFX close-ups, game engines, phone apps, TV cartoon styling, 3D printing for toys, and so on. These differences are important to different people, and they should be maintained according to their use case. People should be able to search a backlot for specific 3D models, but also for the abstract concept of a ‘German Shepherd’ and see all instantiations of that concept as linked models returned by the search engine. A related challenge is finding different entities that match the style and purpose of a given asset, rather than its content. Which brings us to the next topic…

Tags and Search

Congratulations, you’ve built a successful digital asset library. So successful that you now have a ton of assets – so many that they can’t all be browsed simply by scrolling through them. At this point you’ll need a search feature. Fuzzy searches that attempt to match the name fail very quickly because things are not always named consistently, and perhaps you don’t even know exactly what you’re looking for. What to do?

Image provided courtesy of Adobe Stock

The first approach one could take is adding categories and tagging features. Tags are very common in online technologies. They are hidden in web pages – such as in this very article – and they help search for content using a specific keyword. The value of tags in an asset library grows with consistency in the tagging. This requires a good policy and an ontology for how tags are assigned, so that two similar assets will hopefully have a number of tags in common, and related assets will have at least one tag in common. It’s worth noting that changing an ontology after a lot of assets have already been categorized and tagged is both controversial and expensive, so building some solid technology to carry out this kind of change can become necessary. The ideal case would be a fully automated tagging system, but that requires a deep understanding of the asset being tagged. Not a small feat.

Substance Source exposes its categories as a classification, but it also has a full ontology of tags.
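To make the ontology idea concrete, here is a toy sketch of one: each tag has a parent category and synonyms, so two similar assets tagged by different artists still end up connected. The structure and entries are illustrative, not a recommended taxonomy.

```python
# Hypothetical tag ontology: canonical tags with parents and synonyms.
ONTOLOGY = {
    "stone":  {"parent": "material", "synonyms": ["rock", "boulder"]},
    "pebble": {"parent": "stone",    "synonyms": ["gravel"]},
    "oak":    {"parent": "wood",     "synonyms": []},
}

def canonical_tag(word):
    # Map free-form artist input to the canonical tag, if it is known.
    word = word.lower()
    for tag, info in ONTOLOGY.items():
        if word == tag or word in info["synonyms"]:
            return tag
    return None

print(canonical_tag("Rock"))    # 'stone'
print(canonical_tag("gravel"))  # 'pebble'
```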

If you look at Google searches, however, you can usually find what you are looking for even if you don’t enter the exact keyword needed. That is not just because of fuzzy matching, but rather because Google has a certain awareness of the meaning of a word and its synonyms (among many other reasons).

There is a great amount of recent literature on this topic. There are deep learning solutions that project every known word into an n-dimensional vector (called an ‘embedding’), so that the distance between words can be measured in geometric terms. If, for example, the embedded vector had 3 dimensions, the measure could be a 3D Euclidean distance. In practice, these vectors usually have 500 or more dimensions, making them harder to picture and illustrate directly. Using this measure of distance, you can find the closest existing tags (e.g. ‘rock, pebble, marble’) to the words you searched for (e.g. ‘stones’), and hopefully find examples of what you are looking for even if those words were not used as tags anywhere.
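Here is a minimal sketch of that kind of tag search: cosine similarity between a query word’s vector and each tag’s vector. The tiny embeddings dictionary is a stand-in for a real model (word2vec, GloVe, or a sentence encoder).

```python
import numpy as np

embeddings = {  # toy 4-d vectors; real embeddings have hundreds of dimensions
    "rock":   np.array([0.9, 0.1, 0.0, 0.2]),
    "pebble": np.array([0.8, 0.2, 0.1, 0.1]),
    "marble": np.array([0.7, 0.3, 0.2, 0.1]),
    "fabric": np.array([0.0, 0.9, 0.8, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_tags(query_vec, k=3):
    # Rank existing tags by similarity to the query vector.
    scored = sorted(embeddings.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [tag for tag, _ in scored[:k]]

# 'stones' is not a tag in the library, but its vector lands near the rock-like tags.
stones = np.array([0.85, 0.15, 0.05, 0.15])
print(closest_tags(stones))  # ['rock', 'pebble', 'marble'] in this toy setup
```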

What if someone doesn’t like searching, or doesn’t really know what to search for? They could start out by browsing and find a model that they like, and would then want the library to suggest similar assets. This is called a ‘proximity search’, which also relies on this projection into a vector space. From the selected model, we find its tags and their embeddings, and then use them to find neighbors.
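A minimal sketch of such a proximity search, under the same toy assumptions as the earlier example: each asset is represented by the mean of its tag embeddings, and other assets are ranked by cosine similarity.

```python
import numpy as np

tag_vecs = {  # toy tag embeddings, as before
    "rock":   np.array([0.9, 0.1, 0.0]),
    "pebble": np.array([0.8, 0.2, 0.1]),
    "moss":   np.array([0.2, 0.8, 0.3]),
    "fabric": np.array([0.0, 0.9, 0.8]),
}
assets = {  # hypothetical library entries and their tags
    "cliffFace_v3": ["rock", "moss"],
    "riverStones":  ["rock", "pebble"],
    "sofaCushion":  ["fabric"],
}

def asset_vector(tags):
    # Represent an asset as the mean of its tag embeddings.
    return np.mean([tag_vecs[t] for t in tags], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similar_assets(name, k=2):
    query = asset_vector(assets[name])
    ranked = sorted((other for other in assets if other != name),
                    key=lambda o: cosine(query, asset_vector(assets[o])),
                    reverse=True)
    return ranked[:k]

print(similar_assets("riverStones"))  # 'cliffFace_v3' ranks above 'sofaCushion'
```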

A similar strategy could help mix different assets by interpolating their embeddings. Pushing this even further, one could imagine an embedding of an asset without going through words, but simply using visuals. This would be solvable in theory using computer vision and deep learning, although at the time of writing this piece, we don’t often find convincing results beyond academic publications seen at SIGGRAPH. I expect this will change soon.

Maintainability

The best proprietary digital backlots will try to store the working scene files with the goal of making it as easy as possible to make small, simple changes. This can go against the other goal of durability, as original models tend to ‘rot’ faster, but for recent assets it can really save time. It’s also worth keeping in mind that this is on top of the ready-to-use standard export. Usually, a new project will want to use an old asset but just change the color of the decals, or put on a new logo, or some such. Even if a major overhaul is being carried out, the backlot assets are still useful as references. However, if a new project only calls for a chair in a new color, producers will often assume this is a ‘zero cost’ change, even if the asset is very old and the original paint isn’t available anymore. A primary feature goal for reusability should be picking up where the original artists left off and making a small, simple change as easily as possible.

When it comes to commercial libraries, we need to be very careful, as the original file may contain data that is proprietary to the artist and should not be shared. Still, being able to download and license an asset from the library, upgrade or update it (or fix it!), and re-upload it requires a whole system of asset versioning and a legal framework that is not trivial – covering who owns which version, whether upgrades are automatic, and whether purchasing an asset gives rights to all future versions.

Customizability

The value of an asset would grow greatly if it could be customized based on the context it needs to be used in – an example of a similar workflow would be the ability to customize your car online with optional upgrades on rims and paint, and to see a high-quality, accurate preview of the result before you buy it. USD, as a data format and an object model, began supporting such features years ago with a mechanism called ‘variants’. Arbitrary, sparse changes can be encoded along multiple axes. Pixar regularly uses them to vary, for example, modeling, texturing, shading parameters, degree of damage, and level of detail, independently of one another. All varying data needs to be generated ahead of time, and the exported USD asset can compactly expose all combinations in one place.
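Here is a minimal sketch of the mechanism using the pxr Python bindings; the prim path, variant set name, and attribute are illustrative placeholders rather than Pixar’s actual asset structure.

```python
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateInMemory()
chair = UsdGeom.Xform.Define(stage, "/Chair").GetPrim()

# A 'damage' variant set whose selections sparsely override attributes.
vset = chair.GetVariantSets().AddVariantSet("damage")
for name, value in [("pristine", 0.0), ("battered", 0.7)]:
    vset.AddVariant(name)
    vset.SetVariantSelection(name)
    with vset.GetVariantEditContext():
        # Opinions authored here only apply when this variant is selected.
        chair.CreateAttribute("wearAmount", Sdf.ValueTypeNames.Float).Set(value)

vset.SetVariantSelection("pristine")
print(stage.GetRootLayer().ExportToString())
```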

Another solution would be to preserve some procedural features in the asset itself, meaning that the end results (polygons, pixels, and so on) are not final, but rather configurable based on a set of carefully exposed parameters. Substance does just that with its Source collection, where each material still exposes its parameters and can be tweaked, making each item in the library extremely versatile. Assuming the procedurals are robust enough not to break the asset, such assets certainly have more potential uses, at the expense of simplicity. But I would argue that if the procedurals didn’t add value and usability, they should not have been added in the first place. This overall increase in usability still has a negative impact on portability and durability, because it requires the client applications to have the engine needed to evaluate the procedural. If the engine is not compatible, or not available on a given platform, or not free, the potential user base shrinks considerably. A sweet spot could be found by embedding variants into the procedural asset, as snapshots, so that even if the correct engine is unavailable, some preset configurations can still be used.
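That sweet spot can be sketched conceptually in a few lines (this is an illustration, not any particular engine’s API): the asset carries live parameters for hosts that can evaluate them, plus pre-baked snapshots for hosts that cannot.

```python
from dataclasses import dataclass, field

@dataclass
class ProceduralAsset:
    name: str
    parameters: dict                                   # live inputs (needs the engine)
    snapshots: dict = field(default_factory=dict)      # pre-baked presets (engine-free)

    def resolve(self, engine=None, preset="default"):
        if engine is not None:
            return engine.evaluate(self.parameters)    # full flexibility
        return self.snapshots[preset]                  # graceful fallback

wood = ProceduralAsset(
    name="oakPlanks",
    parameters={"age": 0.3, "plank_width": 0.12},
    snapshots={"default": "oakPlanks_default.exr",
               "weathered": "oakPlanks_weathered.exr"},
)
print(wood.resolve(preset="weathered"))  # works even without the procedural engine
```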

https://video.tv.adobe.com/v/3419331

Archiving

Especially for proprietary backlots, a lot of attention needs to be paid to the activity of archiving itself, as well as to the questions of who carries out this task, and when. There is no standard procedure for this, but in general:

  • Assets need to be identified as worthy of archiving.
  • They need to be converted into the format of choice.
  • They need to be tested in a ‘generic’ environment.
  • Visual references need to be generated (turntables, thumbnails).
  • It is valuable to add a metadata sheet documenting the environment and the tools the asset was built with; a sketch of such a sheet follows this list. This is intended as a reference for future asset archaeologists who need to find information or upgrade a backlot asset to the next generation of standards.
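Here is a minimal sketch of such a metadata sheet as a JSON sidecar; the field names and example values are assumptions for illustration, not an established standard.

```python
import json, datetime, platform

def write_asset_sheet(asset_name, dcc_tools, dependencies, path):
    # Record how and with what the asset was built, next to the archived files.
    sheet = {
        "asset": asset_name,
        "archived_on": datetime.date.today().isoformat(),
        "archived_from": platform.platform(),
        "tools": dcc_tools,            # e.g. {"maya": "2020.4"}
        "dependencies": dependencies,  # plugins, renderers, custom nodes
        "notes": "See turntable and thumbnails alongside this file.",
    }
    with open(path, "w") as f:
        json.dump(sheet, f, indent=2)

write_asset_sheet(
    "backgroundChair_v012",
    {"maya": "2020.4"},
    ["customFurShader 1.3"],
    "backgroundChair_v012.sheet.json",
)
```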

A proper archiving initiative is time-consuming and expensive, which explains why only a few studios have taken this up systematically. It is best to involve the artists who originally created the assets, but at the end of a project they may well be unavailable (gone on vacation, or moved to a new project), leaving junior or non-technical people to process the assets for the backlot. This creates inconsistency, which is the biggest enemy of a successful library.

On this topic, Dan argues, “For a backlot to grow in value over time, extreme care must be taken in the packaging and curation of the items in a collection. Like a museum collection, this work is best done by specially trained people that are not subject to the whims of a production environment. It is a difficult pitch to make in VFX since it is an overhead cost. Maybe machine learning will help with this somewhat in the future, but I haven’t seen this done well, both at a macro and micro scale over a huge digital library collection. There is a science to digital archiving that is getting sorted out at major studios now. I find it pretty interesting.”

Wrapping it up

I am grateful to many friends and colleagues across film, games, tech and design who helped me gather thoughts on problems and solutions. Thanks in particular to Dan, Jonathan, Kimberly, and Brian. I too have dealt, in one way or another, with digital asset libraries for many years. I was at a panel about just that during MIFA 2018 in Annecy, and I was surprised to see how little is known about the issues involved, let alone the solutions. Hopefully I’ve convinced you that digital asset libraries are both valuable and hard. I think their presence is bound to grow in our industry, and while there isn’t a one-size-fits-all piece of magic, there are great technologies available today that can be leveraged, both within a studio and for commercial solutions.