Written Thursday 9 Feb
Well-known for his highly enjoyable abstract works, Indigo forum user Enslaver has knocked another one out of the park with his Daz3D-modelled, ivy-covered composition "Decay." Currently there's a whole page of compliments in the Simple Renders thread just for this one image!
Produced with the new IndigoMax exporter for 3ds Max, with 60 million polygons' worth of ivy, it's a great showcase for what can be done with Indigo's camera controls.
Click the thumbnail below for the full 1920x1080 image:
Written Tuesday 31 Jan
In case you missed the announcement on our 3ds Max forum, we have a brand new 3ds Max exporter in development: IndigoMax!
Through a partnership with Jakub Jeziorski, a fully native plugin for 3ds Max using the Indigo SDK is being developed to take integration with Indigo to the next level. Early releases are available from our forum here: http://www.indigorenderer.com/forum/viewtopic.php?f=9&t=11368
Welcome on board, Jakub!
A simple and fun render with the new exporter has been made by ENSLAVER (click to enlarge):
Written Friday 16 Dec
Less than a week after my previous post about how "only OpenCL can expose massively parallel compute capabilities in a vendor-neutral way", NVIDIA open-sourced their CUDA compiler! Obviously they are intently watching this blog, and had to react to my nay-saying.
Jokes aside, this is a big move for CUDA, as it allows anyone who can generate LLVM IR from their own language to use the PTX backend. Indigo already features a dynamically-compiled scripting language, Winter, and this technology could be extended to output optimised code for NVIDIA GPUs directly. Of course, this isn't necessarily an approach we'll use; however, it illustrates the potential of having a powerful optimising compiler for many platforms behind your domain-specific language.
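To make the idea of a dynamically-compiled expression language more concrete, here is a minimal sketch in Python (using only the standard library). It has nothing to do with Winter's actual internals: it just shows the general shape of the technique, where a small DSL expression is parsed, validated, and compiled at runtime into a callable that can then be evaluated many times - in a real pipeline the parsed tree would instead be lowered to LLVM IR and handed to a backend such as PTX.

```python
import ast

def jit_compile(expr_src, arg_names):
    """Compile a tiny arithmetic expression at runtime into a callable.

    A stand-in for a DSL front-end; real systems would lower the parsed
    tree to LLVM IR rather than reusing the host interpreter.
    """
    tree = ast.parse(expr_src, mode="eval")
    # Reject anything that isn't simple arithmetic on the named arguments.
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Name, ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow,
               ast.USub, ast.Load)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id not in arg_names:
            raise ValueError("unknown identifier: " + node.id)
        if not isinstance(node, allowed):
            raise ValueError("disallowed syntax: " + type(node).__name__)
    code = compile(tree, "<dsl>", "eval")
    def fn(*args):
        return eval(code, {"__builtins__": {}}, dict(zip(arg_names, args)))
    return fn

# A shading-style expression compiled once, then evaluated many times.
shade = jit_compile("x*x + 2*x + 1", ["x"])
```

The pay-off of this pattern is that the compilation cost is paid once per expression, while evaluation - the hot path in a renderer - runs on pre-compiled code.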
It will be exciting to see how this develops, as more back- and front-ends are added to the burgeoning LLVM framework.
Written Friday 9 Dec
Recently AMD announced an increased focus on its Accelerated Parallel Processing SDK, promising more frequent updates tied to display driver releases on all platforms; AMD Product Manager for Compute Solutions Mark Ireton wrote on their developer blog, "[...] we will also be upgrading our OpenCL solution on a more frequent basis through the regular monthly Catalyst driver updates."
NVIDIA, for their part, are continually improving their OpenCL implementation with each driver update, approaching the performance of their native CUDA language. Much as I love the lean and mean CUDA 4 API, only OpenCL can expose massively parallel compute capabilities in a vendor-neutral way (and it's nice to be able to JIT from source), so in the long term I expect OpenCL to become the de facto standard for parallel computation.
AMD and Intel already have OpenCL SDKs which support their CPUs, and this is important since those CPUs already include (or soon will) considerable GPU-like parallel compute resources. The tight integration of these heterogeneous compute resources is quite unlike the present norm, in which discrete cards with dedicated memory are connected to the CPU by a long PCI Express trip, and this will be an interesting scenario to optimise for. Intel's Many Integrated Core (MIC) architecture is another one to watch: with a widened x86 architecture (easy to program) and Intel's manufacturing process leadership, we could see a compelling compute platform.
When we started developing for GPUs, we'd have to manually write out debug info to be returned to the host; unsuccessful runs were often met with a hard reset or blue screen. This is in stark contrast to the fantastic development tools we have now, including the ability to print out debug info from kernels, and do live debugging in Visual Studio - and it almost never hangs the machine. Two thumbs up!
It's clear that a great OpenCL implementation is important for these companies, and we are very pleased with the progress that is being made.
Written Thursday 27 Oct
As you may already know, we improved the material and medium editing user interface for the upcoming Indigo 3.2 release.
Another important change is the way materials and media are linked to each other. In Indigo 3.0, the link UI was just a simple dropdown, which worked but was far from ideal, since you had to search through long lists of material names.
This is how it looked in 3.0:
In 3.2, it looks like this:
We made a nice custom control for that, which offers convenient ways to link/unlink nodes:
- It functions as a search box: just type the material name and select from the list of results matching your query
- It supports drag and drop from the scene graph
- Materials/media can easily be unlinked by clicking the 'x'
- It checks whether linking a material would create a circular link, and prevents it
And clicking the arrow button opens the linked material, medium or other node for editing.
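The circular-link check mentioned above boils down to a reachability test on the node graph: linking node A to node B would create a cycle exactly when A is already reachable from B. Here is a minimal sketch of that test in Python - the function name, the `links` structure and the material names are illustrative, not Indigo's actual data model.

```python
def would_create_cycle(links, src, dst):
    """Return True if adding a link src -> dst would create a cycle.

    `links` maps each node name to the set of nodes it already links to.
    The link is circular iff `src` is already reachable from `dst`.
    """
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True          # found a path back to src: cycle
        if node in seen:
            continue
        seen.add(node)
        stack.extend(links.get(node, ()))
    return False

# glass already links to a medium; linking the medium back would cycle.
links = {"glass": {"absorbing_medium"}, "absorbing_medium": set()}
would_create_cycle(links, "absorbing_medium", "glass")  # True: cycle
would_create_cycle(links, "glass", "absorbing_medium")  # False: no path back
```

An iterative depth-first search like this runs in time linear in the number of links, so it is cheap enough to run on every link attempt in the UI.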
Written Friday 21 Oct
The imaging pipeline collectively refers to the sequence of processing steps which result in the final displayed image. This consists of tone mapping, white point correction, light layer processing and the image resizing from the super-sampled source data.
Previously this would require a number of auxiliary buffers, which could use quite a lot of memory since they needed to be at the same resolution as the final and super-sampled images. However, since Indigo version 3.0.14, we've implemented a new imaging pipeline which avoids the need for these extra buffers, and is also a little faster.
When rendering high resolution images, especially ones with high super-sampling, this can save gigabytes of memory! Here are some numbers reported in our Indigo 3.0.14 announcement:
Scene by dcm (3508x2480 resolution, super-sampling factor 4):
Indigo 3.0.12: 12.1 GB used
Indigo 3.0.14: 8.7 GB used - 3.4 gigabytes saved!
Scene by Zom-B (3543x1993 resolution, super-sampling factor 3):
Indigo 3.0.12: 3.8 GB used
Indigo 3.0.14: 2.2 GB used - 1.6 gigabytes saved!
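To see why single buffers add up to gigabytes, here is a back-of-the-envelope calculation in Python. It assumes a 32-bit float RGB layout for illustration; Indigo's actual internal formats and channel counts may differ.

```python
def buffer_bytes(width, height, supersample, channels=3, bytes_per_channel=4):
    """Size of one super-sampled float image buffer, in bytes.

    Assumes 32-bit float RGB data - illustrative, not Indigo's
    actual internal layout.
    """
    return (width * supersample) * (height * supersample) * channels * bytes_per_channel

# dcm's scene: 3508x2480 at super-sampling factor 4.
gib = buffer_bytes(3508, 2480, 4) / 2**30   # roughly 1.56 GiB per buffer
```

So each auxiliary buffer at that resolution is on the order of 1.5 GiB, which is consistent with the multi-gigabyte savings reported above once a couple of such buffers are eliminated.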
These improvements are only enabled when not using Aperture Diffraction; however, work to improve the memory usage and performance of AD is ongoing!
Written Sunday 9 Oct
Indigo forum user Stinkie uses Indigo to produce extremely lifelike renders of what look like minimalist, modernist constructions.
You can see more of Stinkie's work in the Indigo forum thread here: http://www.indigorenderer.com/forum/viewtopic.php?f=4&t=6639
Written Sunday 9 Oct
Over the last weeks and months, we have put some more work into the material editing UI, to make it easier to use and to take up less space on the screen.
This is a little preview of what is to come in the next release, Indigo 3.2. It is still a work in progress, so if you have any wishes or suggestions, please feel free to tell us about them - there is a dedicated discussion thread on the forum. You can find it here.
First, let us have a look at the old interface:
There are a couple of things wrong with it:
- It does not even fit on the screen of a laptop
- Only one channel is visible at a time, making it impossible to get a quick overview of how a material is set up
- Every option, even if unused, is shown
Pretty bad, isn't it?
Here is what it looks like now:
Now all the channels are visible, and you can get a quick overview of the material's setup, e.g. which channels are used and how they are configured.
The icons used for the spectrum type are not final yet.
The shader and texture editing has been moved to separate windows, of which you can have as many open as you want:
Please let us know what you think and tell us how we could improve it even further in the discussion on the forum: http://www.indigorenderer.com/forum/viewtopic.php?f=7&t=11178
Written Tuesday 4 Oct
Architectural visualisation specialist Michal Timko, also known as dcm on the forum, is known for his stylish and realistic interior shots. Here are some of his latest works with Indigo Renderer, enjoy!