By Romain Van den Bogaert

First Steps to Full Scene: The Creation of Natuf, the Gatherer

Sculptor Romain Van den Bogaert provides an epic account of his discovery of 3D tools, and his creation of Natuf, the Goddess of Gathering.

  • Interview
  • Workflow

Romain Van den Bogaert is a professional, conventional sculptor who, prior to this project, had barely any experience of working in 3D. Many among the Adobe Substance 3D team are big fans of Romain’s work; we initially approached Romain with a view to discovering more about how a skilled artist might be able to switch their medium, adopting 3D tools to create a complete scene. We gave Romain an open brief to create any scene he wished, and remained on hand to answer any technical questions he might have. Then we simply waited to watch the magic happen. Romain was kind enough to write up an extensive account of his process of discovering 3D tools, the research and experimentation involved in this process of discovery, and how he was able to convey his artistic vision in this new 3D field.

0: The Overall Approach as a Creative

I’ve always had a fascination for volumes, shapes, and malleable materials. This initially led me into science – I gained a PhD in clay structures within soils at Aix-Marseille University – but it has since become a lifelong creative pursuit; since 2015 I’ve dedicated myself entirely to sculpture, with my creations now appearing in the collections of a range of figurine and statue companies.

My approach to my work as a sculptor and designer has been evolving, particularly recently. For a few years now I’ve been interested in other disciplines, such as typography, graphic design, ceramics, woodworking, and furniture design, as well as the design of objects more broadly.

I recently moved to a new home, and the construction of a large workshop will gradually allow me to combine many approaches, materials, and techniques. The evolution of my practice centers on digital sculpture (notably in VR), 3D printing, and 3D scanning using photogrammetry, but also includes ceramics, woodworking, and the design of tools and utensils (such as plates, bowls, spoons, chairs, furniture, and so on). Doors are opening up onto immense vistas before me, and I am very excited by the infinite possibilities that are becoming imaginable! Being able to navigate between all these media in a fluid way evokes a real creative fascination in me!

Natuf the Gatherer, rendered at the project’s conclusion.

Digital tools are great for being able to imagine and possibly prototype new ideas that I can then physically create at extremely varied scales, and with a wide range of materials. I am currently searching for a workflow which allows me to switch from one medium to another, in any direction necessary, with as much fluidity as possible.

For this project I implemented this multimedia approach by first sculpting objects conventionally, in wood or clay, and then 3D scanning them and reworking them in VR; I then integrated these objects into the diorama. I notably did this for the oak plates, and the face of the clay figure. I’ll talk about that in more detail a little later.

1: The Concept of the Project

Whenever I start work on a project, I do so without knowing what the end result will look like. And that was certainly the case here. I had a very basic, vague idea of the final result, but that was all. This exploratory method is pretty much standard for me, for my personal projects.

Even during the project itself, I don’t know how the final piece will appear right until the very end. Each step gives me the possibility of circling back, and discovering new graphical and/or technical possibilities. I take a sort of ‘patchwork’ approach.

Natuf the Gatherer, rendered at the project’s conclusion.

From the seed of an initial idea, concepts and visual references are superimposed atop one another at each stage. However, I try to keep precise control over the visual coherence of the project, even if I’m navigating through a sort of creative fog. This description is contradictory, I realize that. But that’s how I work. It’s only by approaching the project from this multitude of possible directions that I manage to create what I hope will be something original.

To lay out the groundwork for the project, I think I proceed like the majority of visual artists; I collect a lot of reference images to work from, and I make a lot of small sketches in notebooks – often very quick sketches of objects or details.

I had the idea for this project after coming across a book that detailed Mesopotamian statuettes, the ‘Bactrian idols,’ while in parallel reading about and listening to podcasts on anthropology. I was specifically looking at the moment where the Natufian civilization transitioned from being a culture of hunter-gatherers to one of sedentary farmers. In looking at that period, some questions arise, notably regarding when, how, and why mankind began to harvest and use cereals to make bread and beer, two foods that are quite closely intertwined and very old. I won’t go into anthropological discussions here – I’ll leave these to specialists – but these questions seem very relevant to me, especially in view of the current moment of history in which we find ourselves.

An early attempt to mimic the Bactrian style, and preliminary sketches for the scene.

Now that my ideas about this project are firmer, I can say that it’s part of a backdrop that I plan to develop in a future cycle, which I may call ‘Man and Food,’ a sort of gallery of illustrations of cultural practices linked to food more widely.

And so my character is a representation of Ninkasi, the goddess of brewing and of the primordial beer made from honey and wild-grain bread.

Preliminary sketches for the scene.

My bibliography for my research on this project includes a University of Chicago piece on ancient beer brewing methods, and a France Inter radio show on Sumerian beer.

2: Sculpting

The beginning of the project is a huge thrill. Starting my first ever digital commission, for no less a client than Adobe, imposes some pressure on me. I’ve never so much as opened the majority of the tools I’ll use. It’s going to be a big challenge.

I try a hybrid approach that will allow me to relax a little, and unblock the beginning of the project. I most often start my projects with the character’s face – that is, the central point of the sculpture. But, in this case, I don’t feel like leaping directly into Medium to work on that.

So I start out by making several clay models, and in particular variants for the face. I then 3D scan them using photogrammetry, so that I can rework them on a computer and integrate them into the project. I’d already used photogrammetry on a few tests in the past, but this time the scan has to be precise enough for it to fit in with the rest of the workflow.

Above image: various sketches and a clay maquette of a character lying on a large organic chair.

Above images: clay maquette.

Above image: scanned data with texture.

Retopology of the scanned head in ZBrush.

Scanned clay head in the Medium viewport.

The scan of the model of the face that I select is really nice; I manage to capture a lot of detail, despite my not-so-high-performance SLR camera. I try to capture more detail using the high-definition texture, in order to reproject those details onto the mesh. I gain a little grain and texture, but it’s not a major difference.

In this project, and in the rest of my digital experiences, I really want to keep the feel and grain of traditional sculpture. Particularly the asymmetry and the surface imperfections which, in my humble opinion, are often missing from digital sculpture.

I take this opportunity to incorporate a few other physical objects into the project; notably I scan an oak plate, and include that in the scene. I’ll make variations from this plate and use those shapes in some of the other objects in the scene.

Above image: hand-carved oak plate being protected with oil.

Scanned oak plate imported into Medium (above).

Oak plate after dinner and the corresponding Substance textured version on the screen.

It’s quite incredible to be able to import into VR the scan of clay models and various objects, along with their texture. It really gives us the impression that we have them in front of us, and that we can do what we want with them!

Beginning the sculpture with Medium

I used Medium a few years ago, shortly after its release, but at that time the workflow didn’t convince me to go deeper. Then, last year, I saw some outstanding projects by Gio (Nakpil) and Samuel (Poirier), which inspired me to return to the software. Once I got back into it, and with a little hindsight and the proper workflow, I soon found my bearings and began to enjoy the process. There are also some videos and tutorials online which help to find ways of working, especially for hard-surface sculpting.

At work in Medium (photograph by Geoffrey Rosin).

The feeling is very close to traditional sculpture, in the sense that you can really hold the object in your hand and turn it around. It’s not at all like in ZBrush, for example, where I feel like I’m completely blocked by the screen in front of me. The interface is also very simple, without too many tools, and it’s very pleasant for a beginner like me. My feedback on Medium’s functionality from the point of view of a traditional sculptor mainly concerns the missing interaction between the tool you hold in your hand and the surface you’re working on. Since you can’t have feedback between the resistance of the tool and the material, I think it’s necessary to find a way to simulate that, in order to get even closer to the interaction between a spatula and real clay. I think that an algorithm that takes into account the speed of movements should be able to control the intensity of the deformation that takes place on the virtual surface.

I get so used to sculpting in VR that I get carried away; I sculpt a lot of objects that pile up and pile up! The project that was supposed to be limited to a small, fairly simple scene with a central character turns into a much larger scene than expected. Hah!

I start by setting up a very simple architecture (image below), to which I’ll add detail as I go along, and which I’ll ultimately fill with a lot of objects.

I probably should have pushed the general composition of the scene a little further in low resolution. But with my bad habit of charging forward, I opted to sculpt ‘by sight’, object by object, with an approach centered on the design and the specifics of each object that my character will use in this diorama, rather than focusing on the overall scene as a whole.

Early stages of the composition.

Early stages of the composition (the head on the character was a 3D-scanned clay sketch, incorporated to test the feasibility of the method).

Early character design with the final scanned clay sketch as the model’s head.

I added a colored fog; it's relaxing for my eyes, and I like the ambiance. It's very immersive.

The final steps on the character in ZBrush, cleaning the file and redoing the topology.

I also quickly run into one technical problem when sculpting large, very detailed scenes in Medium: the files quickly become very large, and running the whole scene in a single file with the level of resolution that I want to obtain rapidly becomes impossible. And so I subdivide the scene into several sub-files, including groups of assets.

With this done, my groups of files in Medium comprise:

– a group of files with the architecture and the floor of the diorama;
– a large series of ceramics;
– a group of files with all the various objects and furniture present in the scene;
– the character and her objects.

The architecture

I previously presented some sketches as well as the first steps of setting up the composition of the diorama, with fairly simple geometric blocks. Here I continue to detail and harmonize these large blocks, trying to keep an overall pyramidal shape, which is a shape that I find pretty strong, especially for portraying mythological characters.

When I sculpt in larger files, I use different colors to distinguish between what are supposed to be the low res elements, just to check the composition (gray), and the active elements (brown).

The sculpture of the ceramics

For some of the objects I have a few sketches in notebooks, which I take as a basis to imagine the utensils that this mythical queen uses for brewing her beer and harvesting wild grains.

Preliminary sketches for the ceramics.

Preliminary sketches for the ceramics (above).

Early ceramic modeling in Medium.

The sculpture of various objects and furniture

Despite the eccentric shapes of most of these objects, I try to think of them as achievable and functional prototypes. I also had several sketches of designs in my notebooks; I use these for the furniture present in the scene.

Hand-drawn sketches imported as references inside Medium for the jewelry design.

Each object is carefully created as being functional and self-sufficient. I even imagine them as prototypes of utilitarian objects that I would like to create as life-size pieces afterwards, in wood or ceramic, to install in my home! I was quite inspired by Brancusi sculptures, traditional timber framing, and patterns like old wood veins and ripple marks.

Furniture design in Medium.

I wanted to push the details of the sculpture pretty far directly in Medium; because of this, I very rarely needed to retouch the sculpting side of things in ZBrush.

Once I’m done with sculpting all the pieces I need for my story, I put them all together for the first time in ZBrush. And so, I literally discover the scene as I assemble the objects and pieces of scenery. I had a little idea of the final style, but it’s pretty funny to gradually discover something that you’ve created yourself! After assembly, I take the time to find the position and scale of each object in the scene, so that everything is consistent.

The scene sculpted in Medium, and assembled in ZBrush. No retouching of the sculpture has been carried out yet.

In addition, I work on several object files, allowing me to print them. I remain very attached to making my work a reality, so that it becomes a real object that I can hold in my hand. It’s great to be able to take voxels out of the machine and turn them into something physical; I find this at least as exciting as the 3D scanning process. And it’s quite similar to the feeling I get when I first mold a traditional sculpture. It is kind of like a concrete birth of the object.

3D printed elements during painting.

A collection of clay sketches and 3D printed elements.

A small print of the whole character. This is just a rough print, and a single part, with automatic supports.

3: Retopology / UV, and the Preparation of Files

Voilà, voilà. I’ve had a lot of fun in Medium – probably a little too much, given the size of the project! During this phase, I’ve still been on familiar ground, and somewhat in my comfort zone: sculpture, design, scanning, and printing. But now I’m getting into difficult areas; everything before me is completely unknown.

What am I going to do with all these files? Of course, and this is a major point, I’ve sculpted everything like a traditional sculptor. Not knowing the subsequent steps, I have no critical perspective – or almost none – regarding how to adapt my sculpture to UVs, or retopology, or the various basic technical aspects that any second-year 3D student already knows by heart! I only have a vague notion of what UVs are (I know they’re origami of the 3D model, basically), and that they play a part in rendering… And that’s it.

To better manage the rest of the process, I export my objects one by one from Medium in OBJ format, in separate files. I do a first pass of cleaning the files and especially a first decimation; some objects are 10, 15 or 20 million triangles when I open them in ZBrush from the Medium export!

ZBrush screenshot.

I note that Medium’s algorithm, which transforms voxels into triangles, often generates islands of a few polygons that must be removed from the mesh before going any further.
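That clean-up step amounts to a connected-components filter: group triangles that share vertices, then drop the groups below a size threshold. Here is a minimal pure-Python sketch of the idea (the function name and threshold are my own, not anything Medium or ZBrush exposes):

```python
# Sketch: drop tiny disconnected "islands" left over after a voxel-to-mesh
# conversion. Triangles are (i, j, k) vertex-index tuples; components are
# found with union-find over shared vertices. Illustrative only.
from collections import Counter

def remove_small_islands(triangles, min_tris=10):
    parent = {}

    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Triangles sharing any vertex belong to the same island.
    for i, j, k in triangles:
        union(i, j)
        union(j, k)

    # Count triangles per island, then keep only the big ones.
    sizes = Counter(find(t[0]) for t in triangles)
    return [t for t in triangles if sizes[find(t[0])] >= min_tris]
```

In a real pipeline you would run this (or ZBrush’s equivalent tools) once per asset, right after the Medium export.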

The goal is clear: to add color to everything. The way to get to that point, however, is more nebulous. But let’s start from the beginning: what is a texture? What does the word ‘procedural’ mean? What does all of this cover as a concept? And as technical possibilities? I find myself standing before an immense mountain that I need to climb – so let’s go!

Painter is supposed to allow you to carry out this step, right? Do I need to first identify at what stages this software will be needed, in a project that will go right up to the final rendering of the scene? I also ask myself other basic questions: what is the Substance software ecosystem (Designer, 3D Assets, etc.)? The technical words clash, each one accompanied by a procession of concepts to discover. And of course, each concept and each word demands ten more by way of explanation!!!

So I watch hours and hours of beginner tutorials about the Substance tools, those released by the Substance team itself, but also many on YouTube. I also begin asking for information from Geoffrey Rosin, a texture guru and my technical contact at Adobe; Geoffrey will answer all my basic questions in a very educational way, and later on in the project we’ll eat a very good homemade pâté together!

I advance a little. It seems that in order to paint and texture a 3D model you have to generate UVs. But how do you do UVs? What software do you use? I decide to use ZBrush, as I don’t have many other ideas just now. With a little hindsight, of course, I can now see that this clearly wasn’t the best choice. Not necessarily because of the unwrapping algorithms themselves, but above all because of the near absence of UV island layout tools. Very slowly, I learn to use ZBrush to clean the files; I cut my models into several pieces, I learn to redo the topology so that unwrapping is possible, and clean, and so on. Again, I realize that I should have planned out the sculpture to facilitate this technical step. Then I also use Blender to retouch certain UVs, and to better optimize the layout of the islands.

I’d already tried to use ZBrush before this project, but this software continues to be particularly incomprehensible to me. Honestly, I think that my brain is formatted in UNIX UI; the logic of ZBrush remains mysterious! Little by little, I begin to discover more and more functions, but I still find it a pretty laborious experience.

By the end of this first major topological cleaning, I’ve generated a low resolution mesh and a high resolution mesh for each asset; I’ll use these during the baking.

The baking process begins. I test out how to organize the files, how to import them into Painter, and how to make corrections in ZBrush and/or Blender, so that the maps are baked correctly in Painter. I choose to bake everything in Painter, though I see that some people prefer to bake the normal map in ZBrush. I’ll check later whether that can provide some advantage to my workflow.

To start with, very little goes well. I find a lot of mistakes in the baking, and there are unacceptable errors with the UVs. I understand that the quality of the UV cuts will absolutely determine the quality of the texture. No clean UVs means no texture!! Aïe aïe aïe!! I make long round trips between ZBrush and Painter in an attempt to improve the UVs by small, successive touches. I discover in a tutorial that other software, such as Maya or Blender, is much more efficient for this type of work, as it includes numerous options to reorganize, store, and classify UVs, thereby optimizing their distribution in the 2D space. I understand that UVs are transferable between software tools, something that’s surely evident to most 3D users already… And so I search for a way to bridge the different software tools, notably by using the GoB (Go Blender) plugin to rapidly exchange data with ZBrush. In Blender I’ve tried to make my task easier by using add-ons such as UV Layout, Texel Density Checker, TexTools, Magic UV, and so on. It still isn’t easy, in any case. I spend some time trying to control the distribution and homogeneity of the texel density on my UVs – making better cuts, using tools that relax the UVs, and so on. If I’d known about the importance of the UVs earlier, I would have taken a better approach to some of the organic shapes, which are proving really horrible to cut.
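The texel-density bookkeeping that those add-ons automate boils down to simple arithmetic: the number of texture pixels covering one unit of 3D surface is the square root of the UV-to-3D area ratio, times the texture resolution. A small sketch, with made-up numbers and a hypothetical helper name:

```python
import math

# Sketch of the texel-density check that add-ons like Texel Density Checker
# automate: how many texture pixels cover one unit of 3D surface. The
# function name and the sample values are illustrative, not any add-on's API.

def texel_density(surface_area_3d, uv_area, texture_res):
    """Texels per 3D unit: UV-space scale factor times texture resolution."""
    uv_scale = math.sqrt(uv_area / surface_area_3d)  # UV units per 3D unit
    return uv_scale * texture_res

# Two islands sharing a 4K map: matching densities mean even texture detail
# across both objects, which is what the relax/rescale tools aim for.
plate = texel_density(surface_area_3d=2.0, uv_area=0.08, texture_res=4096)
bowl  = texel_density(surface_area_3d=0.5, uv_area=0.02, texture_res=4096)
```

If the two values diverge, one object will look noticeably blurrier than its neighbor, which is exactly the inconsistency these checks are meant to catch.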

At first, I also make the mistake of creating too many texture sets for each asset – something like 10 texture sets for each one! Given that I’d already separated my assets during modeling, I don’t need to re-cut the files very much – but it takes a while to figure this out; haha! I then use ID maps that I create in ZBrush to make the work easier. I’ll ultimately come to understand that each texture set generates many maps, each with a resolution and therefore a file size, and that I’ll need to optimize all of this for the final rendering. But at this time, the notion of a ‘map’ containing information in a 2D format is still very vague in my mind. So is the notion of a shader! :D

4: Texturing with Painter

After this long technical parenthesis of preparing the files, I return to a series of tutorials on Painter itself. Once I’m more familiar with the technical side of handling the software, the painting / texturing phase quickly becomes a lot of fun! Here, again, I’ll thank Geoffrey Rosin for his technical help. I think that, without him, I’d probably have torn out a lot of my hair by now.

At work in Painter (photograph by Geoffrey Rosin).

I don’t take into account the first colorization tests I did in Medium; I decide to stick with Painter for the whole texturing process. In future projects, though, I’d like to use the painting steps in Medium and ZBrush as a basis for the texturing in Painter; I think it’s quite important to keep my ideas regarding color from the initial sculpture, because they are full of energy and they are consistent with the sculpture itself.

I discover sensations and techniques that are extremely similar to those I used in my younger years, as a painter of figurines. Base coat, transparencies, brushing, washes, ink, and so on… The principle of the layer stack is extremely close to the approach of traditional painting, with the obvious advantages of being able to retouch and adjust each effect precisely.

Much as with the sculpting phase, I start slowly with tests, and by texturing certain secondary assets. Like this, I put less pressure on myself! I start with the ceramics, and I’m quickly super-surprised by the realism and the detail that can be obtained. It’s crazy!

Painter’s internal Iray renderer is really great for live renders. I try to keep as traditional a touch as possible on my textures, with a lot of imperfections, to ensure they remain very organic.

Graphically, I want the style of the scene to be a mix between realism and an oneiric vision. I have neither the technical qualities nor any particular interest in pure, hard realism, but I use it as a basis for this scene. My goal is to give the impression of an intermediate world, suspended in time and space, slightly distorted, as if in a dream. I take this approach for the sculpture and the painting of the diorama. I am not looking to perfectly recreate the materials in the scene – the wood, and ceramics, and so on – but rather to suggest these presences, and evoke these materials. I use a lot of wire brushes, and color nuances, to bring the scene out of the overly realistic atmosphere it might otherwise possess.

Technically speaking, I proceed very simply. I load a base material that is fairly close to the material I want to represent, and then I add several layers of modifications, such as paint washes, on top of this. I use a lot of Smart Masks to select the areas to paint, as well as the ID maps exported from ZBrush as necessary.

Character’s ID map generated in ZBrush, based on the polygroups of the mesh.

Finally, I do some touch-ups by hand to adjust or hide certain areas. I directly paint the face by hand, more than anywhere else; it’s difficult to manage everything procedurally at this level.

Painting the main character without her assets. This render is done directly in Iray.

Painting a bench: a sort of peeling paint on the wood, with metallic design elements at each end.

Close up of an element of the architecture behind the figure. A bas relief of a pollinating bee is placed at the top of the central element.

Close up: another element of the architecture in bas relief.

Almost general view of the architectural element.

Plate with ceramic texture. This is a 3D scan of the oak plate.

Variant texture from the 3D scan of the oak plate.

A simple sickle.

A table with an organic design. The patterns have been sculpted in Medium; they have not been added during texturing.

A sort of candlestick without a candle, in bronze.

Close up: a sort of candlestick without a candle, in bronze.

A kind of counter; I’ll place other objects in the scene on top of this.

The low wooden table, upon which are arranged many objects necessary for harvesting cereals. I wanted to give this table a feeling of being old and well weathered, by adding lichens and mosses.

A teapot. With this asset, I was able to better grasp the notion of the shader, which I’d found extremely vague. I used a transparency for the tea and the steam. The steam is sculpted; I made an initial mesh in Medium, and touched it up a bit in ZBrush by superimposing several meshes, giving it the effect of curves of steam. It’s clearly not the most realistic render, but it’s my most satisfying experiment in Painter. I can of course further build on this render in Blender.

The character’s throne! I enjoyed designing this object. It’s inspired by a mix of traditional timber framing and Windsor chair designs. As I did with the low table, I chose here a very aged, well-worn texture.

Variations of a vase with organic shapes.

A pair of small ceramic objects, among the first that I textured in Painter (above).

Enameled ceramic.

I wanted to explore some funny forms for these ceramics (above).

Ceramic for the main character. Like a lot of the ceramics in this diorama, I will fill it with wheat and wild cereals.

Some of the organic-shaped ceramics. These are some of the first textures I did with Painter.

Once the textures are broadly complete, I do some export and compatibility tests between Painter, ZBrush, and Blender. I’ve also installed and used the GoZ (GoZBrush) add-on, and a bridge add-on between Blender and Painter, during my many tests back and forth. I can’t wait to test the new official file transfer add-on between Painter and Blender… If I’d been carrying out this work a few months later, I’d have been able to save a lot of time!!

Technically, I export the majority of textures in 4K, though I export some very small and/or not-very-visible textures in 1K or 2K. Conversely, I export some very prominent textures, such as those of the character in the foreground, or the very large texture for the architecture, in 8K. I’d already cut my files in Painter in this way somewhat intuitively, so that I could adjust the size of the files according to their visual importance. The most important textures have 16-bit normal maps. For this project I exported for each mesh the following maps: the normal map, the roughness, the metallic, and the diffuse. In a few rare cases I re-baked information from the height map to the normal map. I also played around with other types of maps a bit – particularly for the steam rising from the teapot, where I played with opacity – but this rapidly becomes complicated for a beginner such as myself.
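As a rough guide to those resolution choices, the raw (uncompressed) pixel budget of a map grows with the square of its resolution. A sketch of the arithmetic, ignoring file-format compression (the numbers are raw pixel data only, so actual PNG or EXR files will differ):

```python
# Back-of-the-envelope size of an uncompressed texture map, to reason about
# export resolutions (1K/2K/4K/8K). Real file sizes vary with format and
# compression; this is only the raw pixel-data budget.

def raw_map_bytes(res, channels=3, bits=8):
    return res * res * channels * (bits // 8)

k4_normal_16bit = raw_map_bytes(4096, channels=3, bits=16)  # 16-bit 4K normal map
k8_diffuse      = raw_map_bytes(8192)                        # 8-bit 8K color map

# Stepping up one resolution tier (1K -> 2K -> 4K -> 8K) quadruples the budget:
ratio = raw_map_bytes(8192) / raw_map_bytes(4096)  # 4.0
```

This is why reserving 8K for the foreground character and the architecture, and dropping minor assets to 1K or 2K, pays off so quickly at render time.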

At the conclusion of this big painting stage, I find myself with a collection of maps for each of my meshes. I’ll go on to combine these in Blender to complete the composition, and to have an overview of the whole project before rendering.

5: The Render

Let’s start with the choice of rendering engine: who, what, why, when, where? Faced with the jungle of rendering engines, my choice is quite naturally oriented towards a software tool that fascinates me, even though I’d never opened it prior to this project: Blender. It appeals to me because of its versatility, its very active community, and its open-source nature. An enormous number of functions and add-ons become available with this software, which can do pretty much anything. Working with Blender, I know that I’m spending my time learning something that’s enduring and robust. Since I’ve been interested in 3D modeling I’ve had the impression that this software is becoming more and more popular, and more and more powerful. But hey, maybe it just seems that way because I’m really interested in it; haha!

Natuf the Gatherer, rendered at the project’s conclusion.

In any case, it’s another software tool in which I can lose myself in the possibilities available; haha! I sense dozens of hours of tutorials coming my way…

Still, I take the opportunity to learn more about the other main rendering engines available, and their degree of compatibility depending on the software… But I save the real exploration of these engines for later – particularly UE5!

Blender’s rendering engine, Cycles, seems powerful, with a few technical tweaks, and computing times have been greatly reduced in the latest versions of Blender. I’m really looking forward to the release of a stable version of Cycles X.

To come back to my project, the pressure is mounting on my side, as I’m approaching the project’s denouement! I have a feeling that the technical flaws and cumbersome elements that I’ve accumulated throughout the project may cause problems, and cause rendering times to explode.

I start methodically importing the low resolution meshes, as well as the textures exported from Painter, into Blender. Like this, I recreate my scene little by little; I adjust the position of objects, their proportions, and so on… I use the Node Wrangler add-on to assign maps and create the shader for each mesh. All the shaders still need to be checked, and possibly adjusted, depending on the results of the first rendering tests. This essentially involves a few tweaks to the normals to bring out the detail, as well as reducing glare from a few things that are too shiny or too metallic. I have a water vapor shader on the teapot; for this, I ultimately use a sort of volumetric shader to make adjustments. A few artifacts remain in the render, but I plan to erase them in compositing, without spending much time on such small details.
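Node Wrangler wires Painter exports to shader inputs by guessing each map’s role from its filename. The matching logic can be sketched roughly like this; the suffix-to-socket table reflects common Painter export naming, not a fixed standard, and the helper name is my own:

```python
# Sketch of the filename-suffix matching that Node Wrangler performs when
# wiring Painter exports into a Principled BSDF. The suffix conventions
# below are common Painter defaults, but any export preset may differ.

SOCKET_BY_SUFFIX = {
    "basecolor": "Base Color",
    "diffuse":   "Base Color",
    "roughness": "Roughness",
    "metallic":  "Metallic",
    "normal":    "Normal",
    "height":    "Displacement",
}

def socket_for(filename):
    stem = filename.rsplit(".", 1)[0].lower()
    for suffix, socket in SOCKET_BY_SUFFIX.items():
        if stem.endswith(suffix):
            return socket
    return None  # unknown map: wire it up by hand
```

When a filename doesn’t follow the convention (an opacity map, for instance), the add-on can’t guess, and you end up connecting that node manually, as I did for the teapot’s steam.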

Renders from the project’s conclusion.

As I add my textures and meshes the scene becomes heavier and heavier, and the size of the files becomes my main problem. Ideally, a lot of optimization should have been carried out upstream; now, I’ll just have to manage with what I have. In a few cases I lighten the topology of certain meshes, and the size of some maps, to make things a little easier.

It’s also at this point that I have some fun – heh heh – discovering all the small joys involved in the approximate compatibilities between the different software tools: differing coordinates of the objects in space, according to the software used; different scales; badly managed seams on the UVs and the texture which weren’t previously visible, but which show up in this engine… Basically, I now encounter all of the boring issues that, with a little experience, we’d normally fix upstream without even thinking about them.

Now I have to apply Cycles optimizations so that the rendering time doesn’t become gigantic, keeping in mind that my computer isn’t exactly a beast! I begin, with my large files all together, and wait to see what happens. Watching tutorials and cross-checking information, I gradually optimize some elements. Despite this, I find myself with a very heavy scene, and the technical and graphical research necessary is very complicated. By the end of the configuration, each image takes an hour and a half to render… I’ll add that, for this, I’ve loaded 150 maps, at around 4K resolution on average…
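As a rough sanity check on why a scene like this gets so heavy, here’s a back-of-the-envelope estimate of what 150 maps at 4K can weigh in memory. This is an illustration only — it assumes uncompressed 8-bit RGBA textures, whereas actual memory use depends on channel counts, bit depth, and how the render engine caches textures:

```python
# Hypothetical estimate: memory footprint of ~150 uncompressed 4K RGBA maps.
width = height = 4096      # "4K" square texture
channels = 4               # RGBA
bytes_per_channel = 1      # 8-bit per channel
maps = 150

bytes_per_map = width * height * channels * bytes_per_channel
total_gib = maps * bytes_per_map / 1024**3

print(f"One map:  {bytes_per_map / 1024**2:.0f} MiB")  # 64 MiB
print(f"150 maps: {total_gib:.1f} GiB")                # ~9.4 GiB uncompressed
```

Halving a map’s resolution quarters its footprint, which is why reducing the size of a few maps, as described above, pays off so quickly.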

Renders from the project’s conclusion.

After some rendering tests, I make some additions to topology directly in Blender. I realize that a lot of connections between meshes are really ugly, and empty, particularly at ground level, and that my texture won’t magically be able to disguise this.

And so – quickly, quickly! – I add pebbles, herbs, and various other bits to hide this hideousness!!! Like this, I discover Blender’s particle system functions, to randomly distribute objects. Great! This works really well.

As I discover the technical possibilities and options, I refine the lighting setup, mixing lights that I control manually and an HDRI for the general atmosphere. I’m aiming for a dreamlike, evanescent sensation; to evoke this mood, I want to establish very bright lighting – so bright that the scene will seem to dissolve in the clarity of a sort of setting sun. At the image processing stage, I want to create a scene that is both dreamlike and realistic.

I nonetheless want to avoid a kitsch, slightly forced ‘golden hour’ lighting effect. After numerous tests, however, I do decide to add a subtle volumetric fog. Even though it’s pretty classic, I test out the famous ‘god rays’, with a few different techniques. But after 4 or 5 different approaches I abandon the idea – my scene is too heavy to be able to play around properly with the parameters. That can wait for my next project! But, okay, for now I promise not to make my lighting too ‘Blade Runner.’ 😀

Trying to generate god rays in Blender.

In general, I’m not very sensitive to the lighting ambiance that I see in CGI. I have the impression that I often see the same kind of construction of the lighting – dark ambiances, chiaroscuro, or the classic 3-point lighting, with a mega-forced rim light that contrasts like crazy. I guess that’s what’s most effective visually, and there are certainly some works that are technically fabulous, but I rarely find scenes that really inspire me in that way.

I naturally lean more towards photography around fashion, design, and architecture… And I’ll consciously try to sidestep the archetypes that I often see in CGI. Though, admittedly, this can be quite tricky and frustrating when you’re a big noob. 😀

I also adjust the depth of field (DOF) options of my cameras; this reassures me immensely about the render!! A well-controlled DOF makes all the difference; it can cause everything to just click together. And so, like this, the elements of my scene bond, and integrate with one another.
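For anyone curious about what those DOF settings trade off: virtual cameras mimic real lenses, where the classic thin-lens approximation relates focal length, aperture, and the circle of confusion. A minimal sketch, with illustrative numbers rather than the actual settings used in the scene:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance (thin-lens approximation), in millimetres.

    Focusing at this distance keeps everything from roughly half that
    distance out to infinity acceptably sharp. A wider aperture (smaller
    f-number) shortens the in-focus zone -- exactly the lever a virtual
    camera's DOF settings expose.
    """
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Example: a 50 mm lens at f/2.8, full-frame circle of confusion ~0.03 mm.
h = hyperfocal_mm(50, 2.8)
print(f"Hyperfocal distance: {h / 1000:.1f} m")  # ~29.8 m
```

Opening the aperture (say, f/1.4 instead of f/2.8) doubles the hyperfocal distance and drastically narrows the band of sharpness — which is what lets a shallow DOF melt the background and bind the scene’s elements together.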

An early attempt at lighting the scene.

After a lot of adjustment and testing I find settings that I like. For the final render I change the light setup depending on whether the camera is near or far, and on the viewing angle. In particular, I reduce the amount of fog for more distant shots; if this weren’t the case, you just wouldn’t see very much. I would have liked to work more on compositing directly in Blender, with Cryptomatte passes, but that wasn’t really possible for this project, in part because the files were too big, and in part because I didn’t really have enough time.

Overall, I’m happy with the result of the rendering even though I see a lot of flaws – principally, the lack of hierarchy in my light sources. It would have been good to have handled this part better but, as with the compositing, the size of the project and the time available made this impractical. The compositing comes down to some retouching in Photoshop; I’ve generated several images for each camera angle, and I play with the overlay of these images in Photoshop to adjust certain areas in terms of light ambience and contrast.

During the creation of Natuf the Gatherer, Mimaki, the worldwide specialists in 3D printing, proved keen to use Romain’s 3D design to demonstrate the precision and overall quality that can be achieved with 3D printing. The photographs below show Mimaki’s real-world representation of Natuf. Notably, geometry generated in Medium is particularly suited to such 3D printing projects; similarly, all color information was transferred directly from Painter.

3D prints, by Mimaki.

6: Conclusion

Excepting the part concerning artistic conception, this has been an almost completely self-study approach to learning a digital workflow. And, while the end result is not flawless, I’m very happy with what I’ve created. If I were to approach this project again, knowing what I know now… I’d probably do a lot of things differently! Certainly, I think I’d pay more attention to some of the more organic shapes, and the cleanness of the modeling overall – particularly in terms of the UV unwrapping. I still have to look for ways to better manage and optimize my layout when it comes to UVs.

Regarding Painter, I have a good grasp of the basic principles and the possibilities of the software. I’d like to go a little deeper with more specialized shaders, however, to better work with transparency and more complex materials. I also want to integrate the use of UV tiles into my workflow. The arrival of a Painter plugin for Blender is also great news, as it will facilitate the integration of textures. For certain materials it will be more efficient to stay procedural in Blender, particularly in complex scenes, to make rendering faster. I have a lot to discover, and to learn about, in order to become more comfortable with this part of the work.

Thinking about the next project?? (photo by Geoffrey Rosin)

I’d like to thank all the people who took part in this project – and in the end, there are quite a few of them! Thanks to the Adobe Substance 3D team as a whole for having trusted me and allowing me to express my know-how in a medium that I did not yet know; to Pierre Maheut for having launched me on this project; to Geoffrey for his help at all times; to Marine and Paul for their kindness and the flawless organization of the project; to the Mimaki team for their great printing work; and to Gio for his creative enthusiasm. And thanks too to all the people who have helped me privately on technical points, and to everybody who publishes tutorials online. Without these resources, the learning needed to complete this project would not have been possible! I got into this project without really knowing where I was going, and when I look back at all the stages I went through, I think it was great not to know what to expect; haha! I’m extremely happy to have succeeded in this challenge – I do now have an idea, however, of the gigantic technical progress that I still need to make in the years to come.

I hope this diorama and my learning journey can inspire others. Now all that remains for me is to start the next project! 😊

Read more