LightWave is my go-to modeler. Since taking up studies I’ve acquired a number of student licenses for various programs, but I keep coming back to LW. This is at least equal parts “easier to do in LightWave” and “creature of habit”. Be this as it may, I’ve been thinking that a move to MODO (sometimes known as new LightWave) is inevitable.
While LW 10 and 11 have both been positive releases, the core program infrastructure is ancient and there are lots of modelling tools I’d love to see added. MODO has many of the modelling features I want, with an interface easy for LW users to adapt to. Adding to this, there seems to be a long-standing culture of impending doom on the LW forum, and indeed, there is a good deal of anecdotal evidence suggesting that some LWers have jumped ship. For this reason I’ve been on the fence about upgrading to 2015, but recent teasers from the LightWave3D Group are giving me pause (and fuelling a whole big pile of speculation).
Physically Based Rendering, Unified Geometry Engine And More
Just what this new geometry engine does has not yet been revealed, but it could be a step in the direction of unifying Modeler and Layout (pretty high on the LWer wish list), or bringing new modelling tools to layout. Of course, it could be something else entirely. Rob indicates that there are more new features for 2016 yet to be announced.
A native LightWave PBR render engine is very cool, and given LW’s positioning as a budget generalist program, historically popular with indie and lower budget studios and productions, this makes good business sense. While biased renderers have remained king (LW packing one of the best natively incorporated engines), PBR engines are set to take over. With LW’s included 999 free render nodes, this could well entice some studios to adopt or move back to the program.
A post in the previously mentioned thread on the LightWave forums (by an individual presumably in the know) suggests that the new physically based engine is pure CPU. This seems strange when PBR seems to be moving to GPGPU with CUDA and OpenCL, but apparently this new engine is fast as it is. I suspect GPU options are still on the table for future iterations.
So, sounds good so far, but I’m more interested to see what, if any, modeller enhancements are coming. In the meantime I have to decide whether to update to 2015 and save on 2016, or whether it’s better to wait and see. For me, a new render engine, given the wide variety of choice out there, just isn’t enough. Updated GPU dynamics would be welcome, but for me, it is all about modelling tools.
Genesis basic male test render in LightWave. Exported via Alembic.
The blog’s been fairly quiet for a while now. I’ve been going through some large life changes. A new website project has also taken a lot of time lately, but let us not dwell on the quiet time. Just tonight I had a chance to play around with the new Alembic Exporter for DAZ Studio. One huge limitation of the exporter is that it only supports a single UV map. The solution to this is quite simple. Merge the UV maps! Of course the first thing I thought to do was take the convoluted path of exporting Genesis to LightWave as an .OBJ, merging the UVs, exporting back to DAZ, re-rigging, animating, and finally exporting again via Alembic. This struck me as somewhat terrible. So I Googled for something like “merge UVs DAZ Studio”. The first thing I stumbled upon reminded me about DS’s Texture Atlas – facepalm!
Texture Atlas is simple and great for game developers, and those looking for a way around the Alembic exporter’s single UV map restriction.
Maybe I could forgive myself a little for the fact I’ve never actually used the plugin, but I was aware of it and what it did. Atlas is simple and quick to use. With that process complete I had a nice unified UV and series of texture maps (diffuse, specular, trans, bump, displacement). Next I load my laughable test animation and export via Alembic. Make sure you have “Preserve SubDivision Surfaces” turned off if you’re exporting to LightWave. LW doesn’t support this information from DS, not even if using Catmull-Clark (which technically it should). Unfortunately this isn’t the end of our troubles though.
First up, LW’s Alembic importer doesn’t preserve any material information, so those have to be rebuilt from scratch. This isn’t such a big issue if you only need a single material each for skin, nails, lips, and eye surfaces. The second snag is that the textures appear faceted when reapplied in LW. This isn’t down to Genesis’s geometry. Exporting without subdivision information applies geometry “freezing”, so you can still export a high poly figure (or apply subdivision in LW itself). In both cases the textures have the same faceting problem, regardless of how many levels of subdivision were applied. The lower the subdivision level when exported from DAZ Studio, the worse the faceting when reapplying the textures in LW. Whether this is an issue with LW’s inability to accept real subdivision information, or a problem with DS’s exporter, is not clear. Atlas doesn’t seem to be part of the problem though. The merged textures and UVs appeared as expected in DS. Unfortunately, as we can’t export Alembic back into DS, I wasn’t able to test the exporter itself.
While Alembic is still a work in progress, and both the DS exporter and LightWave’s importer have issues, Alembic remains a very convenient format for transferring animation data. With widespread industry support Alembic seems destined to succeed, so we can all rest assured that someday it will all work beautifully. I hope that both DAZ 3D and LightWave’s developers continue to refine their implementations.
My graphics card, a much neglected AMD/ATI Radeon HD 5770, recently alerted me that a replacement might be an imminent requirement. A total flush of over 4 years of accumulated driver leftovers and a fresh install of the latest batch seems to have calmed the issue somewhat, though not entirely. While I was digging up the info to expand the life of my ailing card I had a good look at what’s out there, and now I feel like I’ve opened something akin to Pandora’s Box.
Generally I’m happy with how my system handles DAZ Studio 4, Poser Pro 2014, and LightWave 11, but once you’ve been bitten by the upgrade bug… Anyway, in this article we will focus on what sort of graphics card is best for recent versions of DAZ Studio and Poser. We’ll also take a look at GPU based rendering and what works best with Octane and LuxRender.
Getting The Full Story Is A Technical Business
I will make the disclaimer up front that ensuring the absolute best performance for your dollar is an incredibly time consuming task. Even determining the best specific variant of a particular model requires a good deal of technical knowledge, and goes far beyond what I am able to cover here, which is intended as a general guide.
It is enough to say that just reading details such as RAM and GPU clock speeds, memory bandwidth etc. is not always the full story. In some cases card specs can be downright misleading. If getting the absolute best value is a high priority then benchmarks are a very good tool. Throughout this article I will make reference to several benchmark comparisons performed by Tom’s Hardware.
If you are interested in getting into the nitty-gritty then this article at EnthusiastPC will be invaluable – clarifying many technical elements in a concise and easy to read manner. This will help you cut through the marketing and give you a much better understanding of just what you’re looking at when examining card specs.
I will be using my own current graphics card as a reference point throughout. While it handles these programs well enough, it is getting long in the tooth and probably won’t be found on the shelves in many good computer stores (bricks and mortar or online). With this in mind it should be fairly comparable to many other cards out there under upgrade scrutiny. So, to make things as clear as possible it is helpful for me to provide some basic specs.
Additionally I should point out that my card can no longer reach full specs (at 100% load) without crashing the display driver. This could be caused by long term moderate overheating issues – good reason to keep an eye on things and perform regular maintenance checks.
It may also be of interest that I’m running an Intel i7 860 with 8 GB of DDR3 1333 MHz RAM.
So, Which Is The Best Graphics Card For DAZ Studio And Poser?
A lot of new DAZ Studio and Poser users assume that a top shelf graphics card will speed up their renders. This is a mistake, and an easy one to make if technical information is not your cup of tea. A good graphics card still has benefits for every 3D app I can think of. Some programs like 3DS Max will make more use of advanced OpenGL, CUDA, and OpenCL features. The biggest difference you will see in DS and Poser between cheaper and more expensive cards is in the OpenGL preview (that thing you do 99% of your 3D work in). More GPU RAM and processing power will result in faster viewport response, and an increase in the size and complexity of scenes one can reasonably work with.
The next rule of thumb that inexperienced upgraders come up against is that Nvidia is the best and only real choice when it comes to performance. To an extent this is true. Nvidia cards tend to give better overall performance and have a history of better driver support, but there are instances where Nvidia find themselves relegated by their much smaller competitor, AMD. Chiefly AMD cards beat Nvidia when it comes to OpenCL performance, but I’m starting to get off track. Let’s focus on OpenGL.
So, what cards are best for OpenGL? Well, just about any off the shelf consumer grade card will give you very good performance (AMD or NVIDIA). For programs like DAZ Studio, which does not make use of more recent OpenGL features (2.1 being all the way back from the mid noughties), even many older graphics cards that have been off the market for years, like the Radeon HD 2600, should still be enough to get running, but their legacy drivers might not be stable on newer versions of Windows. If nothing else, old cards like these meet DS’s minimum system specs. You can still find these old cards in unopened boxes on sites like eBay selling very cheaply, but they are often the same price or more expensive than contemporary entry level cards that have higher performance architecture and components.
Of course, systems with these older cards will not be able to preview larger scenes with textures turned on without considerable slowdown, if at all. As a general indication, my system experienced moderate viewport slowdown with a scene of 600,000 polygons. For further clarification, that’s the same as loading in 28 Genesis 2 figures. In general this is more than enough for me and how I use DAZ Studio. Checking Catalyst Control Center (a basic tool for monitoring AMD graphics card performance) showed that the scene was definitely putting my card to the test. Note that a static scene will add little or no load to your card. It is only when moving the camera, posing, and moving objects that the GPU comes into use.
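To put that in perspective, here’s some back-of-the-envelope arithmetic in Python. The numbers come from my own informal test above (600,000 polygons, 28 Genesis 2 figures), so treat them as ballpark figures rather than official specs:

```python
# Rough viewport polygon budget arithmetic. The 600,000-poly "moderate
# slowdown" point and the 28-figure count are from my own informal test
# on an HD 5770 -- not official numbers from DAZ or AMD.

def polys_per_figure(total_polys: int, figure_count: int) -> int:
    """Average polygon count per figure in the test scene."""
    return total_polys // figure_count

def figures_in_budget(budget_polys: int, polys_each: int) -> int:
    """How many such figures fit inside a given polygon budget."""
    return budget_polys // polys_each

each = polys_per_figure(600_000, 28)       # ~21,428 polys per figure
print(each)
print(figures_in_budget(1_200_000, each))  # double the budget -> 56 figures
```

So a card comfortable with roughly double my budget should, all else being equal, handle around twice as many figures before the viewport starts to crawl.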
While I say the slowdown was moderate, fine posing at that level would not be much of a problem for someone with reasonable patience. For these situations, turning subdivision to 0 and switching the view to smooth shaded or a wireframe will improve performance somewhat.
A bigger, better and newer card has very little to offer other than viewport speed, but comparing that performance between an entry level card like the Diamond Multimedia HD 6570, MSI HD 7750, or EVGA GeForce GTX 650, and a mid-range card like the PowerColour HD 7850, Sapphire Vapour-X HD 7950, or a GTX 660, the mid-range card will win out noticeably as more geometry is added to a scene. So, entry level for portraits is probably fine, but populated sprawling sci-fi cityscapes or high detail natural landscapes will require lots of compositing different images and/or merging pre-posed scenes (fingers crossed – no crashing).
Poser doesn’t give too much away about OpenGL version or requirements. What is clear is that Poser makes far more extensive use of OpenGL than DAZ Studio does. For a start, DS doesn’t have the nice real-time shadows that Poser’s preview has. What the P 10/2014 system requirements suggest is a “recent NVIDIA GeForce OR ATI Radeon required for advanced real-time preview features”. I take this to mean just about anything on the market will do to get started (like DAZ Studio), but performance will vary.
Even a 40 dollar GeForce GT 610 supports OpenGL 4.2 (current version is 4.4), which is probably a few releases above Poser’s requirements. Even cards that are released with support for older versions of OpenGL will usually continue to have their compatibility with newer versions updated as long as the manufacturer continues to support the card. I’ve heard of people using much older and/or less powerful cards, though performance can get slow with full preview options turned on.
So what it really comes down to with both DAZ Studio and Poser 9/10/2012/2014 is that almost any card you can go into a store and buy, or order online, will be enough. A bigger and better card, all the way up to high-mid (HD 7970, GTX 780) to high end cards (GTX 690, HD 7990), will obviously give you better results, such as allowing for smoother preview of larger, more complex scenes, but that extra performance comes at an increasing premium.
Professional Graphics Cards
I won’t spend too much time on professional or workstation GPUs. For most users of DAZ Studio and Poser, these are serious overkill. Those that spend a lot of time in programs like Maya, 3DS Max, LightWave, etc. will receive much more benefit from better graphics cards, and considering a professional card from AMD’s FirePro or NVIDIA’s Quadro line of GPUs should be on the agenda. These programs have many workarounds for dealing with large volumes of geometry, such as having geometry/objects past a certain distance display as bounding boxes, but these methods may not always be desirable.
Quadro and FirePro cards have been shown to deliver vastly improved frame rates over consumer cards when working with large volumes of geometry in production applications. For many users these cards are above and beyond budget. Mid-range workstation cards can easily approach the price of upper end consumer cards. Another drawback for many users is that they will not perform anywhere near as well as similarly specced consumer cards when it comes to gaming. So, if your work or hobby render/animation/modelling computer is also your gaming rig, then pro cards might not be a good idea.
The often touted difference in performance between consumer and workstation cards is in driver development, such as OpenGL optimisation and specialised integration with certain production applications like 3DS Max, Maya, AutoCAD, etc. There are other stated reasons, such as firmware design and the use of more precise error-correcting code (ECC) RAM.
In short, in production level OpenGL environments pro cards are king. In this regard there are benefits for DAZ Studio and Poser users to have a FirePro or Quadro, but there is such a thing as overkill.
So with this interesting bit of information in the bag I began making eyes at AMD’s FirePro W5000, which performed admirably in LightWave. Even this relatively humble pro card was pushing what I was willing to pay, especially as until very recently it was a component I was quite happy to ignore. Then there’s a new development that turns this completely on its head.
GPU Assisted Rendering
Outside occasional gaming binges, my only other heavy use of GPU technologies is when I dabble with LuxRender. As I looked at more benchmarks I discovered a disturbing (though fortuitous) quirk: pro cards don’t perform too well in GPU rendering applications, whether OpenCL or CUDA based. There are some CUDA based render engines out there specifically optimised for Quadro cards, and some others where pro cards sit neck-and-neck with consumer cards.
If you have any interest in GPU based or assisted rendering then getting the best possible consumer card you can afford is a very good investment. But it’s still not that easy.
CUDA or OpenCL – Not Both Ways
If you want to work with CUDA based renderers like Octane, then getting yourself a higher end NVIDIA GTX card or two is the way to go. Go Titan if you’ve got the money. The official NVIDIA site has a list of cards that are CUDA compatible. Conveniently they are all ranked on CUDA compute capability.
Anything above a GTX 650 is rated at 3+ compute capability, while Octane requires at least 2 for full compatibility (GTX 465+). More CUDA cores and GPU RAM are high on the Octane list of priorities. In this regard, getting the best out of Octane requires a big investment. An EVGA GTX 780 3GB will set you back around $660, while an EVGA GTX 650 Ti Boost 2GB is about $170.
While a smaller card like the GTX 580 will chug along reasonably well for most uses, it is seriously limited by its RAM. A simple enough fix, one might assume, would be to buy two GTX 580s. Seems reasonable, but with Octane the scene needs to be loaded into both GPUs, so while you will get quicker renders you are still faced with limitations on scene geometry and maximum possible textures. If multiple cards are under consideration, keep in mind that having cards of equal RAM is the most efficient choice, as the card with the larger amount of RAM will have to defer to the smaller card. The biggest possible scenes call for the Titan’s 6GB of GDDR5 RAM. In terms of pure speed, you can’t beat multiple GTX 770s or 780s.
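The multi-card trade-off above can be sketched as a toy calculation in Python. The “memory capped by the smallest card, speed scales with card count” model is my simplification of how Octane behaves with identical scene copies on each GPU, not an official OTOY formula:

```python
# Toy model of Octane-style multi-GPU behaviour: every card holds a full
# copy of the scene, so usable scene memory is capped by the SMALLEST
# card, while render speed scales roughly with the number of cards.
# (My simplification for illustration -- not an official OTOY formula.)

def effective_vram_gb(cards_gb):
    """Scene size limit: the smallest card's RAM, since each card
    must hold the whole scene."""
    return min(cards_gb)

def relative_speed(cards_gb):
    """Crude speedup estimate: one unit per card (assumes similar GPUs)."""
    return len(cards_gb)

two_580s = [1.5, 1.5]   # two GTX 580 1.5GB cards
mixed    = [1.5, 3.0]   # a 1.5GB card paired with a 3GB card

print(effective_vram_gb(two_580s))  # 1.5 -> same scene cap as one card
print(effective_vram_gb(mixed))     # 1.5 -> the 3GB card's extra RAM is wasted
print(relative_speed(two_580s))     # 2  -> but renders roughly twice as fast
```

In other words, doubling up on cards buys render speed, not bigger scenes, which is why matching RAM sizes is the sensible configuration.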
Right from the start I should point out that getting the biggest and most expensive GPUs will not give you a massive advantage in hybrid (CPU+GPU), at least not in most cases. GPU only versions of LuxRender such as SLG might be very fast and benefit greatly from high-end graphics cards, but they are somewhat experimental and are a long way from including all of CPU LuxRender’s features.
Lux’s developers warn users that GPU based Lux is not ready for production, though some do use it. I’ve said it many times, but I’ll say it again: LuxRender really is for hardcore render nerds. Plugins like Luxus and Reality do make Lux more accessible, but getting the most out of SLG (in particular) still requires a lot of time on the LuxRender Wiki and forums, and plenty of experimentation.
With this qualification out of the way let’s move on. If you’re using LuxRender or another OpenCL based render engine then give NVIDIA a wide berth. CUDA is NVIDIA’s proprietary baby and they are terribly biased in its favour to the extent of having underdeveloped their OpenCL technology.
There is one rather large anomaly I noticed while perusing the LuxMark results page. GTX cards have been owning (that’s what the kids say, right?) the ATI cards. This is despite all the benchmarks from sites like Tom’s Hardware and AnandTech constantly showing HD cards coming out on top, often followed by ATI’s FirePro cards. So, what gives? Well, I’ve heard rumours that some clever people out there are modifying their NVIDIA drivers to perform much better with OpenCL. For the most part this is probably well beyond the scope of the average hobbyist, and might run into warranty voiding issues.
Cards from the HD 6XXX series begin to lose ground to higher tier GeForce GTX cards like the 770, 680, 690, 580, and Titan (the beefcake of the GeForce range), so if you like both CUDA and OpenCL based engines then a newer and higher grade GeForce GTX card is definitely worth considering.
Benchmarking with LuxMark
Out of curiosity I decided to benchmark my own somewhat degraded card. Not surprisingly the results were anaemic, at least when putting up my mere 271 (GPU only) against the low to high 400s registered for the same card and test. My old i7 860 (an entry level i7 at the time) puts my card to shame in its own right with a score of 367, which sat smack in the middle of the other results for the same processor and test (Sala). When their powers were combined the duo scored a mighty (hur, hur) 587. So, that upgrade is still looking quite attractive when a single HD 7990 can beat my result more than seven times over.
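Turning those scores into rough speed-up ratios is simple enough; here’s the arithmetic in Python. The ~4300 figure for an HD 7990 is my ballpark reading of the public LuxMark results page at the time, not an official number:

```python
# Comparing my LuxMark (Sala scene) scores against a hypothetical HD 7990
# result of ~4300 -- my ballpark reading of the public results page,
# not an official figure.

def speedup(candidate_score: float, baseline_score: float) -> float:
    """How many times faster the candidate scores than the baseline."""
    return round(candidate_score / baseline_score, 1)

gpu_only = 271   # my degraded HD 5770
cpu_only = 367   # i7 860 on its own
combined = 587   # CPU + GPU together

print(speedup(combined, gpu_only))  # 2.2 -> CPU+GPU vs my GPU alone
print(speedup(4300, combined))      # 7.3 -> an HD 7990 vs my whole system
```

Even allowing for my card’s degraded state, a better-than-sevenfold jump from a single new card makes the upgrade case pretty clearly.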
If you want to test your own system out, and if you’re a Lux user, I strongly suggest you do, you can pick up LuxMark (free of course), from the LuxMark section of LuxRender’s wiki.
But, WHICH card should I get?
Well, the resounding conclusion is the old hardware cliché: it depends on what you plan to do with your system. A good graphics card will get you better performance with OpenGL viewports, but unless you’re working with expensive modelling and animation packages, midrange consumer cards are more than enough.
If you live in programs like Maya, 3DS Max, Mudbox, LightWave etc, chances are you either already have a pro card, or considered making the investment. For these programs the difference in OpenGL performance between consumer and pro cards can be night and day. BUT pro cards are definitely not the best choice if you wish to have a general purpose machine capable of playing the latest games at appreciable speeds.
The big push for GPUs in the 3D/CGI, especially for hobbyists, is with CUDA and OpenCL based renderers, and here lies one of the most important and costly hardware and software decisions you will probably make. A good CUDA based card (GTX 680 or GTX 770) and an Octane licence with a plugin to Poser, DAZ etc will set you back anything from US $800ish up to $1300+ (Titan + Octane), or much, much more for one or more top shelf cards.
Alternatively you can get going with GPU and CPU/GPU LuxRender for as little as an AMD HD 5XXX (almost nothing – $200ish). Something beefier and more recent is highly suggested. HD 7990s are the way to go, but depending on where you buy from and what brand you select they can cost anywhere from $800 up to $1600. The 7970, an admirable performer, can cost up to $500 or $600 for Sapphire’s delicious Vapor-X HD 7970. Remember that as you go down the hierarchy of HD cards, higher level GeForce cards begin to become more competitive for OpenCL, though they tend to cost more.
There are so many factors to take into consideration when upgrading, such as board compatibility and power supply (PSU) requirements. Luckily these things are easy to nut out by looking at card specs (check the manufacturer’s website to be certain). A follow up article is on the agenda. I hope I’ve managed to dish up some good info to help you select the graphics card that will work best for you.
I’m doing that thing where one tries to do a million things at once, like some kind of human blender taking the tiniest of chunks out of each task as I pass by. So, work is getting done, but is this the most efficient I can be? Probably not. Anyway, I thought I’d do a quick obligatory getting-excited-for-SIGGRAPH-2013 post.
Getting to SIGGRAPH has been a lifelong dream of mine ever since this time last year when I first heard about it. Yeah, call me green but being utterly obsessed with CGI (and a life-long pure blooded technophile) I have developed enough knowledge to get myself all excited about new software, tools and techniques, even if I don’t always understand the finer details. Well, replace “don’t always” with “a little bit frequently”. Also, I’m a window drooler. Yep, I can’t afford to even consider 99% of the software and hardware in these shows.
SIGGRAPH Computer Animation Festival
So, suffice to say I won’t even be getting to SIGGRAPH, unless by some awesome luck I get rich quick. In which case I’ll see you in Hong Kong. Ok, that’s enough rambling. The primary reason I started this post was to share the very spiffy SIGGRAPH 2013 Computer Animation Festival preview video, but alas it is presented in a custom player I can’t embed here. Of course, you can follow the link to feast your eyes. I did manage to find last year’s, which, in my humble opinion, doesn’t have the same oomph, but is still exciting all the same.
Also looking like they will be an absolute nerdy blast are the Real-Time Live presentations.
Real-time technologies are developing and emerging at a spectacular rate and giving us visual feedback that once would have been thought impossible. State of the art graphics, animation, dynamics, radiosity, huge poly counts – all these things are going real-time in a big way. Can you feel an interactive RT revolution coming? Perhaps in the near future these technologies will be the exact sort of thing that will be testing our new super-fast broadband infrastructures we’re building.
LightWave: Join The Rebellion
And finally, SIGGRAPH is one of those dates that all the 3D graphics software nuts get very excited about. It is that time when many of the modelling, animation, and rendering players unveil the next salvo of upgrades. Of course, a key focus, for me, will be on finding out what NewTek/LightWave are up to. It’s something! We want to know now, but we’ll have to wait for an announcement made later this month at the Anaheim, California convention. The below promo poster was uploaded to the LightWave FB page on the 29th of June. LightWave 12? 11.6? Something huge and exciting? If I can get some new modelling tools I’ll be happy enough.