In the Cloud: Carbonite

Posted By Design Corps / December 16, 2010 / 0 comments

Backing up your data regularly is something we all know we need to do – but knowing it and doing it are two very different things. Plus there is the need to have an ‘off-site’ backup as well; after all, what’s the point in having everything backed up to an external drive and then leaving it in the office? If a disaster does happen it’ll destroy the backup as well as the originals.

I had read about some of the ‘cloud backup’ services before and, to me, that seemed like a better solution than an external drive, so I did a bit more research and decided on Carbonite. Installation and setup of the client software is very easy and once it’s there you just tell it what you want backing up and what you don’t. It is set to automatically back up files in ‘My Documents’ or ‘My Pictures’, but you can turn that feature off, as I did, and just select the important stuff. The interface is very simple and gives you a small icon next to the folders it is backing up, so you know at a glance whether each one is backed up or not. You also get an option in the right-click menu to exclude or include files in the backup.

The actual process of backing up is quite drawn out; the introduction says that an initial backup might take as much as a week – but those 3D projects and videos take up a lot of space, so try just shy of 2 months for my initial backup! However, the upload doesn’t make too noticeable an impression on your web connection: I was doing my backup and uploading video files to clients via FTP simultaneously with only a very slight slowdown, and since the bulk of my files are now backed up it just idles in the background until I create some new files, then fires up and starts uploading them. The only limitation to mention is that your account is tied to one machine. That works fine for me since I only wanted a backup of my main workstation, but obviously if you need a backup for loads of PCs this might not be the right solution for you.

Now we get to the important bit: price. Some of the other services I looked at charge on a monthly basis plus a fee per GB so, for me, that would have been a bit costly. I was quite happy then to see Carbonite priced at a very reasonable £42 a year with unlimited space and no charge per GB. For the price I think this is a great service: my files are all secure off-site on an encrypted server, anything I do on my workstation is automatically backed up and I have access to my files anywhere I go. There’s an iPhone app for this too which lets me browse my data off-site; I can even use it to zip and email files to clients. Brilliant!

CT.

In the Cloud: Freshbooks

Posted By Design Corps / December 6, 2010 / 0 comments

When I started up Design Corps earlier in the year I was initially concerned about all of the extra software and processes I was going to have to get to grips with to run the business properly. When it comes to stuff like 3DS Max or Photoshop I’m quite happy getting the latest updates and picking up new tricks…but accounts and office management software? Hmm, not so much.

I have worked at design agencies in the past that used systems to handle jobsheets, time tracking and invoicing in a variety of ways – from the pen-and-paper approach to server-based apps with client software on local machines – and none of them were particularly straightforward; all had their limitations. Since I was starting from scratch I wanted to use something that would make life easier and not give me constant hassle.

After a bit of searching I came across Freshbooks and, so far, have found it to be exactly what I was looking for. With it I can track jobs, time, expenses, contractors, team members and clients, as well as generating estimates (which can then be turned into invoices) and invoices (which are easily generated from the job tracking page). The interface is incredibly easy to use and also offers comprehensive report options as well as the ability to export to Excel and, since everything is based online, I have access to this information wherever I go.

Another benefit is that the whole thing can be viewed online by clients as well. When an invoice or estimate is generated I can either use Freshbooks to create a PDF to send over or just hit ‘send by email’ to send out a message with a link to a client section of the site. Clients can view the invoice or estimate they have just been sent and see a detailed account history too. Best of all, this interface can be re-branded with your own logo and colour scheme, which makes it look fantastic – quite a few of my clients have commented on how cool it all looks, which is good to hear.

This one piece of software has taken the aspect of setting up my own business that I was dreading and made it idiot-proof (I am the one using it after all!). It is superior in every way to any of the other solutions I have used before, and I have yet to find a situation or variable that it can’t deal with easily.

CT.

Working with Slate

Posted By Design Corps / November 1, 2010 / 0 comments

3DS Max 2011 has been around for about 6 months now and one of its features was the updated interface for the material editor – the Slate. This is a move towards a node-based system; something that a lot of other software (Maya, XSI, Blender, Shake, Fusion, Nuke etc) has been running for a while and a method that is generally considered to be an efficient and intuitive one…but is it?

Well the old material editor in Max had a couple of problems – the need to reset material slots if you went over 24 mats always annoyed me, as did the way you had to click through everything to get to the ‘deep’ settings on complex materials. One big thing it had in its favour though was familiarity; I had been using it for about 10 years so, despite its quirks, I knew what I was doing with it…and then along came Slate.

I have to say that I found it an uphill struggle at first; the very different look and feel to everything was quite off-putting and my first reaction was that it didn’t really add anything productive to your workflow. After a day or so of perseverance though I started to see the benefits and haven’t looked back since. So what makes it so good? Well, as I said, I found it a bit tricky to get to grips with at first (and a number of colleagues still can’t see the point of moving away from the old system), so I thought I’d go through some of the things I’ve picked up and some of the key features from my perspective:

Customised Layout.
The layout of the Slate when I first opened it wasn’t quite to my liking, but fortunately if you click onto any of the standard windows and drag them around you will see some highlighted positions; drag the selected window to the position you like and drop it there. A feature I really like is the option to add a custom material group which you can drag all of your most commonly used items into. This saves a lot of time, as the choice from the standard drop-down menus you start with can be a bit bewildering – having a custom set to choose from is far easier.
Customising the Slate layout

Work Area.
That first issue with the old material editor (now called the ‘compact material editor’) – running out of mat slots – is now gone, as the working space you have to place your materials on is huge. Using the new available space it is now possible to create hundreds of mats in the same place, although in terms of organisation that might get confusing. To combat the confusion with complex scenes you have the option of creating new workspaces to keep things organised; these just sit at the top as tabs for you to flick between.
Work Area and tabbed views in the Slate

Everything at a glance.
This addresses my previous comment on dealing with complex materials: the new node-based view means you can see straight away what the setup of your material is. No clicking through channel upon channel of mats and sub-mats to find out what’s going on – everything is just there to see. Another benefit of this is that if you use the same map in multiple places you can just drag the wire out from it to multiple slots/materials at once; it’s basically the same as instancing but is a much neater way of doing it.
Viewing complex materials in Slate

Extra goodies.
There are also a few other really useful little things in the Slate: the option to load all scene materials onto the work area at once is handy, as are the node/child layout options (the option to re-sort everything vertically can be very useful if things start getting a bit complicated), the navigator window and the search functionality built right into the Material Map Browser window. I really love the way matlibs work now, though: you just click on the arrow to the left of the search bar and select ‘open material library’, select one and it opens above your custom set so you can drag mats onto the work area; when you’re done just right-click the lib and close it. You can also open scenes as matlibs by changing the ‘Files of type’ drop-down in the Open window – this is very useful as you can get quick access to previously used mats without having to add them to a custom library.

Space Invader.
One thing with the Slate is that it takes up a lot of screen space. It is lovely to use with a dual monitor setup (it really does need an entire screen in my opinion); working on one screen is of course possible but it’s a bit cramped, and I found myself having to constantly resize and move the window to see what was going on…the title of ‘compact’ for the old material editor is apt indeed!

Personally I think the Slate is a big improvement over the old material editor and, as my first experience of a node-based system, I have indeed found it to be efficient and intuitive. I would say it is a great addition to Max, hopefully development will continue and new features will be added to improve the workflow of material creation even further.

CT

iRay: First Thoughts

Posted By Design Corps / October 6, 2010 / 1 comment

The latest subscription centre update to 3DS Max 2011 included something I have been excited about since I first read about it over at the Mental Images website: iRay. It is, as their website states, “the world’s first interactive and physically correct, photorealistic rendering solution” and the important part of that sentence is the word ‘interactive’.

Because iRay is set up to leverage the power of CUDA-enabled GPUs as well as the CPU power that rendering engines traditionally use, it is capable of producing amazing renders in a fraction of the time usually required. But even if you don’t have the GPU power it still offers an amazing solution for visualising scenes interactively (it just does it slower), because unlike Mental Ray, VRay or any of the other traditional rendering engines it does not need to compute a complicated and time-consuming light pass before moving on to the final render; iRay starts rendering almost immediately and you can see fairly quickly whether you need to make adjustments or not. It doesn’t render in ‘buckets’ either, so you see the whole scene rendering at once. It does this by rendering iteratively: the image is very grainy and poorly defined at first, but the longer you leave it the more refined it becomes. Once you have an image you’re happy with you can stop the render and save it out – easy.

And it is that – easy, I mean. Compared to the myriad settings of Mental Ray there is practically nothing to set up with iRay: you get 3 basic options to control how long it renders (you can set a time limit, an iteration limit or leave it at unlimited) and then some additional settings to control trace depth, image filtering, displacement and the ever-useful material override – that’s it! For someone who has used Mental Ray for so long the simplicity of iRay feels wrong at first.
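
Purely to illustrate the idea (a toy Python sketch of my own, not iRay’s actual code or settings API – the function and variable names are made up), the whole ‘refine until a time or iteration limit is hit’ approach boils down to something like this:

```
import time
import numpy as np

def progressive_render(sample_image, time_limit=None, iteration_limit=None):
    """Average noisy whole-image samples until a limit is hit.

    With both limits left at None this is the 'unlimited' case - it keeps
    refining until you interrupt it yourself.
    """
    accum = None
    start = time.time()
    iteration = 0
    while True:
        iteration += 1
        sample = sample_image()            # one grainy estimate of the frame
        accum = sample if accum is None else accum + sample
        refined = accum / iteration        # running average = current image
        if iteration_limit is not None and iteration >= iteration_limit:
            break
        if time_limit is not None and time.time() - start >= time_limit:
            break
    return refined

# Stand-in for a path tracer: the 'true' image plus per-sample noise.
truth = np.linspace(0.0, 1.0, 256).reshape(16, 16)
noisy_pass = lambda: truth + np.random.normal(0.0, 0.2, truth.shape)

image = progressive_render(noisy_pass, iteration_limit=3000)
```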

The results can be spectacular though. Whilst it certainly isn’t perfect – current problems include ‘fireflies’ (white pixels that don’t disappear from your renders) and lots of unsupported map types, including the new Substance materials, although Ken Pimentel from Autodesk has said they are working on a fix for this – for a certain type of scene it looks like a great choice. Take this simple prod-viz scene I put together as a test, for example:

iRay photorealistic render with 3DS Max 2011

I decided to try and limit it by iteration so I could get an idea of where an acceptable level of quality was. I tried 1000 first but it was still pretty noisy, so then went for 3000, which seems OK (although there are still a few specks of noise) – this took 49m41s rendering at 1024×683, not too bad considering my GPU doesn’t have that many CUDA cores to play with; also notice the caustics and DOF effects that iRay does ‘for free’. To highlight the DOF in this picture I rendered a section using the ‘blowup’ option.
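
As a rough rule of thumb (assuming iRay behaves like a typical progressive Monte Carlo renderer, where noise falls off with the square root of the sample count), tripling the iterations from 1000 to 3000 should only reduce the noise by a factor of about √3 ≈ 1.7 – which would explain why those last few specks are so stubborn.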

Enlarged photorealistic render with iRay

So how did Mental Ray fare with the same scene? Well, the render below took 27m44s with the following settings: image precision 1/16, FG set to draft with 5 FG bounces, Mitchell filter, caustics off and the same MR DOF settings as for iRay. Whilst it is indeed quicker than iRay, I think the DOF is noisier, the refraction and reflections aren’t as good and, even though I left them out on purpose, you really do miss those caustics. Of course all of that could be remedied – that’s the strength of being able to tweak everything with Mental Ray – and those changes would possibly bring the render time to about even with iRay. If I were to get another GPU then iRay would really come into its own. The tests on The Area were done on a machine with a Quadro 5000 and Tesla C2050, and while the render times and results are impressive so is the cost: a little over £4,000 for the pair, yikes!

Comparison render with Mental Ray

I’m looking forward to using iRay on some projects but, as I said before, there are things that it just isn’t suited to – very large scenes for example (since the entire scene needs to be loaded onto the GPU) or scenes with proxy objects; I tried an arch-viz scene of mine that had hundreds of proxy trees in it and it crashed pretty much instantly. I also think that whilst the DOF is a lovely touch it still doesn’t beat the flexibility of using a depth matte and doing it in After Effects with something like Frischluft Lenscare. These are small points though for what is surely a big step in the right direction; all we need now is for Mental Ray to make use of GPU power and then that Tesla might seem like a good idea after all!

CT.

PS: For reference my workstation specs are – Dual Xeon E5520, 8GB RAM, Quadro FX1800 GPU.

Let’s Get Linear

Posted By Design Corps / August 9, 2010 / 0 comments

Linear Workflow (LWF) is a much mentioned topic in 3D circles which really seems to polarise opinions – some are all for it while others just don’t see the point. I have been using LWF for some time now and, while I won’t try and write any lengthy explanations of it (as there are already many excellent articles online, which I will link to later), I thought I would share my personal experience on the subject in case it comes in useful for anyone.

So, I guess a good place to start is a brief explanation of LWF and what it means for 3D. Basically, almost all monitors ship with a standard gamma (the luminance response curve) of 2.2, whereas 3DS Max uses a default gamma setting of 1.0. To use a Linear Workflow you adjust the gamma settings in Max to 2.2 and this gives you a gamma-corrected display.
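
If it helps to see the numbers, here is the gamma curve itself as a tiny Python sketch (nothing Max-specific, just the maths behind that 2.2 figure):

```
def linear_to_display(value, gamma=2.2):
    """Encode a linear [0, 1] value for display on a gamma 2.2 monitor."""
    return value ** (1.0 / gamma)

def display_to_linear(value, gamma=2.2):
    """Decode a gamma 2.2 encoded value back to linear light."""
    return value ** gamma

# A linear mid-grey of 0.18 (roughly an 18% grey card) encodes to about 0.46,
# which is why a linear render viewed without correction looks so dark.
print(linear_to_display(0.18))  # ~0.459
```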

Why is this a good thing? Well, rendering with a gamma of 1.0 will give you very dark results with a lack of detail in the shadows. The problem is that most people will have been working in this way for years and think it looks right (like I did), getting around the problem of ‘things being too dark’ by either adding more lights to the scene or upping the intensity of the existing lights. So what’s wrong with that?

Well, for a start more lights = longer render times, as your rendering engine has to compute more bounces etc., and if you’re using area lights then this is an even bigger consideration. The other issue is that when you’re using photometric lights and IES profiles they won’t be accurate and won’t produce the desired results – and why bother using physically accurate settings if you’re only going to ruin them by fudging the lighting to compensate?

That’s what convinced me to change the way I work and embrace LWF; I was getting more and more into using physically accurate settings as much as possible, so it was an obvious choice. However, I know a few other CGI artists who don’t care about physical accuracy and just go for what ‘looks right’. Personally I think gamma 2.2 renders look right…and all without having to force your lights to compensate for your gamma curve.

If you want to learn more about Linear Workflow then have a look here:

My Mental Ray Community
CG Talk
CG POV

Is it real?

Posted By Design Corps / August 3, 2010 / 0 comments

At a recent event I attended I was having a conversation about 3D visualisation with a group of people (some were relatively CGI-literate and others not at all). We were talking about the different approaches you can take on a project and the differences they make, both aesthetically and in terms of workflow. I briefly mentioned some of these approaches – like abstract or stylised, non-photorealistic rendering (NPR) and photorealistic – when one of the group said that ‘…3D will never be able to achieve photorealism’.

Initially I was quite surprised as, to be quite honest, I thought 3D visualisation was already capable of producing ‘photoreal’ results; just look at the subject of a previous post on this blog, or for that matter much of the product photography you see from major car companies (Audi, Ford, Land Rover, Mercedes etc), as well as countless other examples across commercial photography that you simply don’t notice because they are not visibly CGI. But then I started to think about the word ‘photorealism’ itself, what does that actually mean?

Of course we all know the general meaning, but in a world where photographs are modelled, styled, lit and shaped to perfection before going into Photoshop to be perfected even further…what exactly is real? Is the product shot that is actually photographed any more real than one produced using CGI? I think not. Ultimately the imagery we see is selling perfection, not reality, and photography and CGI are both perfect tools for generating that fantasy.

I realise that this long-winded ramble is so completely not the point the original guy was making; I just thought it was interesting that the term ‘photoreal’ doesn’t even really apply to photography anymore!

Design Corps on iStock

Posted By Design Corps / May 20, 2010 / 0 comments

We have recently been accepted to list some of our 3D artwork on the world’s largest royalty-free photography website, iStockphoto. We have uploaded 4 high-quality concept images so far and that’s just the beginning!

If you have any requests for imagery or would like a variation on any of the current files then get in touch.

You can view our iStockphoto portfolio here.

Perfect Imperfections

Posted By Design Corps / May 10, 2010 / 0 comments

There are many similarities between traditional photography and 3D – the ‘rule of thirds’ or the principles of lighting scenes and framing subjects, for example, are exactly the same. But for all the similarities there are also differences – and there is one difference in particular that I find quite interesting.

That difference is imperfection and what photographers and 3D artists do with it. This is generalising a little but for most types of photography the photographer will be concerned with keeping imperfections out of their images, altogether if possible, as it will enhance their work. Conversely 3D artists try to add imperfections, albeit subtly, to their images for exactly the same reason – it enhances their work.

Now when I say ‘imperfection’ I’m not talking about the simple things like vignettes, lens flare or selective blur/depth of field as, in their place, they are useful effects for photographers and 3D artists alike. I’m talking about chromatic aberration, barrel distortion and noise. In photography these phenomena are caused by inferior lenses, poor lighting or bad camera settings, and combatting them requires either the purchase of some specialist kit (Canon’s L and DO (diffractive optics) lenses, for example) or extensive work in post.

So why do we try and recreate this in 3D? Simple: 3D is too perfect, too sterile, and that in turn can make renders look fake; so to add realism we – through the purchase of some specialist kit and extensive work in post – add some subtle effects to simulate the physical distortion of real lenses and the noise or grain you get from film cameras. It may be subtle but it works.
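
Just to illustrate the principle (a rough Python/NumPy sketch, not the plugin-based post workflow I actually use – the filenames and amounts are placeholders), faking those two effects can be as simple as nudging the red and blue channels apart and sprinkling in a little noise:

```
import numpy as np
from PIL import Image

# Load a render as floating-point RGB in the 0-1 range.
img = np.asarray(Image.open("render.png").convert("RGB"), dtype=np.float32) / 255.0

# Chromatic aberration: offset the red and blue channels a pixel or two in
# opposite directions, mimicking a lens that can't focus all wavelengths
# at the same point.
shift = 2  # pixels - keep it subtle
r = np.roll(img[..., 0], shift, axis=1)
b = np.roll(img[..., 2], -shift, axis=1)
img = np.stack([r, img[..., 1], b], axis=-1)

# Film grain: low-amplitude Gaussian noise across all channels.
img = np.clip(img + np.random.normal(0.0, 0.01, img.shape), 0.0, 1.0)

Image.fromarray((img * 255).astype(np.uint8)).save("render_imperfect.png")
```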

This was all brought into focus (pardon the pun) for me recently when I uploaded some renders to a stock photography library only to have a couple rejected for ‘noise and lens distortion’. Their helpful tips on choosing different ISO speeds and adjusting aperture didn’t really help; I just removed the effects (making the images look less realistic to my eyes) and resubmitted!

The 3rd and the 7th

Posted By Design Corps / April 12, 2010 / 1 comment

Architectural visualisation specialist Alex Roman took a year’s sabbatical to complete this stunning personal project – The Third and the Seventh.

Featuring some of the most inspirational examples of modern architecture in the world and, aside from a couple of bits of video added in post, made entirely in 3D, this is something we can’t stop staring at in awe!

As the note beneath the video suggests, fullscreen it 🙂