
The VR Bot: Testing a Complete 3D Pipeline

Steve Talkowski creates his VR Bot to test a pipeline of Adobe 3D tools – Medium by Adobe, Substance Painter, and Dimension

  • Interview
  • Workflow

Steve Talkowski is an industry veteran and a consummate 3D pro. When Adobe’s 3D team began to consider artists who might demonstrate the capabilities of a complete Adobe workflow – not only the creative results possible within each tool, but also how those tools interact with one another – Steve was a natural choice.

We were also keen for Steve to ‘stress-test’ Adobe’s 3D toolset. Yes, we wanted to show the potential of a pipeline of Adobe’s 3D tools – but we were also curious to see if Steve would identify any areas of possible improvement. How would the toolset stand up to a thorough pummeling, from an artist of Steve’s caliber?

And so we gave Steve an open brief: using only Adobe’s 3D tools, create whatever you like. Not wholly unexpectedly, we loved what he came up with.

Sculpting in Medium

Steve: Adobe originally approached me with a view to showcasing how I use their 3D tools in my pipeline. I’ve been sculpting in Medium for almost 3 years now, and it felt like a natural progression to create a bot that uses VR, in VR! The idea for my VR Bot was born.

I’ve always been into robots. I saw Star Wars at the age of 12, and this was a huge influence. My love of figurative art also influences my focus on robots, as they can often be interpreted as an extension of how we perceive ourselves.

The pipeline was as follows: first, I would create the initial 3D sketch and volumes in Medium. Next, I would import this model into Substance Painter for materials and textures. To wrap up, I would bring the model into Dimension for lighting and the final render. When this project came to me, I was already in the zone of kit-bashing, which made it easy to dive in and quickly lay out the form using Medium’s primitive stamps. I then added the included mechanical stamps where I saw fit.

Starting out, I worked symmetrically, blocking out the character in a single layer. I then duplicated this master layer for each component I wanted to address separately, one layer each for the head, torso, arms, and legs. Next, I further segmented the arms and legs into smaller parts so that I could create a parent/child hierarchy that would allow me to pose the VR sculpt like a stop-motion puppet.

When you work like this, you need to create a hierarchy so that you can manage the parts; this is true in any DCC app. In the past, I’ve been a lazy VR sculptor – I think I’m like a lot of people, in that I just want to jump in and start sculpting. But if you take this approach, leaving everything on one layer, you’ll ultimately create problems for yourself later. You really have to break up your parts, and manage your layers. I always have my layer stack present when I’m working, so I can easily pick at different sections.
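As a conceptual illustration of why that hierarchy matters, here’s a minimal sketch in plain Python – the part names are hypothetical, not data exported from Medium. Rotating a parent part carries all of its children along with it, which is exactly what makes stop-motion-style posing possible:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """One segment of the sculpt, e.g. a duplicated Medium layer."""
    name: str
    local_rotation: float = 0.0                 # degrees, single axis for simplicity
    children: list = field(default_factory=list)

    def report_pose(self, parent_rotation: float = 0.0):
        # A child's world rotation is its parent's rotation plus its own.
        total = parent_rotation + self.local_rotation
        print(f'{self.name}: {total:.1f} deg')
        for child in self.children:
            child.report_pose(total)

# Hypothetical arm chain, segmented as described above.
hand = Part('hand')
forearm = Part('forearm', children=[hand])
upper_arm = Part('upper_arm', children=[forearm])
torso = Part('torso', children=[upper_arm])

upper_arm.local_rotation = 45.0   # posing the shoulder...
torso.report_pose()               # ...moves the forearm and hand along with it
```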

The initial sculpt took less than a full day, though I did come back later to clean up some pieces. Finally, I exported the model as an .fbx file.

Assigning Texture Sets

My mandate for this project was to work within a pipeline of Adobe tools, but I needed to make a quick stopover in Maya, primarily to assign a material to each of the model’s four parts. This gave me greater control when using Substance Painter’s auto-UV unwrap feature, and it also gave me higher fidelity, with four texture sets rather than all the parts residing in a single one.
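For readers who script this kind of step, here’s a minimal sketch of what that per-part material assignment could look like using Maya’s Python API (maya.cmds). The group names are hypothetical placeholders – the interview doesn’t describe the actual scene naming:

```python
import maya.cmds as cmds

# Placeholder names for the four part groups in the scene.
PARTS = ['head_GEO', 'torso_GEO', 'arms_GEO', 'legs_GEO']

for part in PARTS:
    # Create a simple material and its shading group for this part.
    mat = cmds.shadingNode('lambert', asShader=True,
                           name=part.replace('_GEO', '_MAT'))
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=part.replace('_GEO', '_SG'))
    cmds.connectAttr(mat + '.outColor', sg + '.surfaceShader', force=True)
    # Bind the geometry to its shading group; on import, Substance Painter
    # then creates one texture set per material.
    cmds.sets(part, edit=True, forceElement=sg)
```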

It’d be great if a future version of Medium offered a way of assigning materials to your model, or some other way of ensuring that this information gets transferred over to Substance Painter, so that when you do the UV auto-unwrap each part of the model has its own texture set right away. Personally, I even like to assign a specific texture set to each individual limb, rather than to pairs of limbs – I know that in the game world a lot of people like to mirror their work, but I’m not keen on that; I find that having a unique texture set for each side of the character helps to break up symmetry.

Substance Painter

Next, I brought the .fbx file into Substance Painter. Working in Painter is the most fun part of the project; to me, it’s really effortless. I wouldn’t consider myself a diehard pro user of Substance Painter – I don’t use it every day. But when I get to use it, I love it, and I’m trying to incorporate it into my pipeline more frequently.

I really force myself not to settle for the base assets that come with Substance Painter. I might use some smart materials just to get my texturing started, but then I’m really conscious about generating an AO pass, or creating some custom alphas. I intentionally try to push my own use of Substance Painter, so that I’m not settling into a routine of just dragging and dropping textures onto my model. You shouldn’t settle for just doing things the easy way; you’ll never get everything you need for your artwork that way. Pushing myself like this is an element of using Substance Painter that I really enjoy. One particularly great learning resource is the wealth of information on Substance by Adobe’s YouTube channel.

With VR Bot, I applied a few smart materials as my base, followed by custom fill layers and lots of hand painting and tweaks to the character’s texture. This approach enabled me to build up my layers to a point where the character started to take on a natural, worn-in believability.

I then exported this file directly as an Adobe Dimension file, which opened flawlessly with textures already assigned. I chose a few Adobe Stock images to test out, and began creating a few of the renders seen in this article.

Rendering in Substance Painter, Rendering in Dimension

For my Iray renders in Substance Painter I chose an HDRI image from one of my favorite online sources, HDRI Haven – specifically Small Hangar 01, which has warm tones and nice, open lighting. This also gave me an idea for an appropriate color scheme for the robot. But here I found myself wishing I’d had the ability to add some colored lights; this would have allowed me to tweak the image a little.

This is a point of comparison between rendering in Substance Painter and in Dimension. Painter’s Iray renders are gorgeous, and I’m able to quickly add real-time effects, such as adjusting the depth of field, cranking the exposure a little, or adding a lookup table, which allows me to iterate at a very fast pace.

Conversely, Dimension has the ability to add custom lights, and this can help to make the final render a little richer. So there’s a trade-off between relying solely on the HDRI and post FX, and having a bit more controlled lighting.

I tried out four different backgrounds with the Dimension renders, and I’ll admit that some of them came out pretty bland. But there was one in particular that I was really happy with, the image below, because I was able to tweak it and add some colors to it, and really do some directional, dimensional lighting – no pun intended – that allowed me to match the character better to its background.

One of the nice things about using Dimension was that I was just able to hit the ‘Export as Dimension’ button in Substance Painter, and the model came over with all the maps nicely assigned to each part. And it looked great, right out of the box; I didn’t have to worry about changing anything. My renderer for the last three years or so has been Redshift GPU – though I’ll also use Arnold in a pinch, and occasionally RenderMan – but if I were rendering this scene in Redshift, it’d take some time to set everything up properly. Going from Substance Painter to Dimension was fast.

My one request for Dimension would be for it to incorporate the use of the HDRI background image in the same way that Iray in Substance Painter does – that is, rather than having a still image as a background, it’d be great if you could lock an image to the ground plane, and then move it around and change it relative to the sphere that’s being projected. That kind of functionality would be fantastic for somebody who’s just doing concept work, and who just wants to rapidly crank out numerous camera angles. If Dimension had that kind of capability built in, then I think people would be using it a little bit more for that type of concepting. On top of that, you’d also have that capability to add additional light sources that you can tweak, as I mentioned before.

So, the advantages for rendering in Substance Painter’s Iray: beautiful, fast renders; ease of use; ability to toggle on real-time effects such as bloom and DOF.

Advantages of rendering in Dimension: ability to add custom lights; ability to use additional models in the scene. And I could also toggle off pieces that were separated out – so, for example, I have a set of floating hands, and a head that represents what the VR Bot is sculpting. In Dimension, I could turn off these separate elements, just to see what that looked like.

Afterthoughts and Wish Lists

This was a really fun project. The brief for it was basically, “Steve, we just want you to go to town and do whatever.” I love having that kind of freedom, and especially the acknowledgment that the party doing the commissioning believes that much in your body of work.

The Stamp Bot

The Brooding Bot

In retrospect, I wish I could go back and change some of the posing. But I feel like once I’ve taken the sculpture out of Medium I have to live with that. Knowing how I’d be using the character throughout the pipeline, I tried to give it a fairly dynamic pose. But this did put me in a difficult position later on – I didn’t have any background in particular picked out, and had to find an appropriate and desirable background that matched this pose.

I think if I were to approach this project again I’d plan upfront for the number of individual sculpt layers needed while in Medium. Depending on how I’d like to potentially use this raw voxel data, I’d most likely make a trip to ZBrush for retopology, then over to Maya for a more controlled UV layout and material assignment, prior to the .fbx export for Painter.

I’ve already mentioned some features I’d like to see integrated into the various Adobe 3D tools in the future. One big request is the possibility of animation in this pipeline. I use Mixamo quite frequently, and I was actually able to attach one of the default skeletons to the character in Maya, then upload this directly to the Mixamo site and try out a plethora of animation clips with it. Ideally, it would be great to see some sort of rigging / skeletal capabilities added inside Medium, and be able to pose directly in VR!

Meet Steve Talkowski

Steve has been working in the field of 3D computer graphics for over 30 years. He studied figure drawing and painting at art school, while also seeking out various computer systems on campus, and shooting frame-by-frame animation with an old Super 8 camera. He ultimately worked as a 3D artist at Rhonda Graphics in Phoenix for 7 years, then as a senior animator at Blue Sky Studios for 7 years, followed by stints as animation director at Hornet Inc. and Guava NYC. He began freelancing in 2008, notably releasing his retro-styled art toy, Sketchbot, in 2010. He has since moved to Los Angeles with his wife and daughter, where he continues to freelance as an art and animation director, 3D modeler and animator, and where he continues development on his original robot IP.
