Additional Outputs: Luminance Output, Z-depth,mat.l/obj. id
Since IES support is planned, it would be great if a light-intensity map could be generated as an extra image. That would give architects, or anyone who needs to make qualitative statements about lighting, an analytical means of reasoning, and would put Indigo's physical accuracy to good use.
Z-depth would be nice, as would a map indicating which object is sampled and what material it has. This information could probably be generated during warmup and dumped into additional images.
For the z-depth and id passes, the argument against this will be something like "go into your 3D package and render these passes separately", but that's not the point: this way would come at no extra preparation cost for the user, and would let you decide whether a Photoshop DoF is necessary after the image is rendered.
Re: Additional Outputs: Luminance Output, Z-depth,mat.l/obj.
bkircher wrote:That would come with no extra preparation cost on the side of the user
Yes... but it costs Ono some extra work, and I'd rather see him spend his free time on crazy stuff that can't be done by other software.
(that's not meant as "pushing" Master Ono - his new features are always very innovative!)
polygonmanufaktur.de
1. you can get luminance from the .igi file i presume, that stores CIE XYZ, of which the Y channel is luminance.
2. regarding the other channels, that's going to require quite a lot of chopping up of the engine, besides which - because indigo makes an antialiased histogram - you're going to have useless data for your mat+object id's in particular.
3. the best way to get this data is to do a "normal" top-to-bottom ray trace of the scene in postprocess, spitting out that data. however, you'd still need something like an a-buffer because each pixel can have arbitrarily many obj/mat id's (e.g. consider a detailed image with dof, what obj/mtl id would you like for a blurred pixel? if it's a weighted combination of 4000, do you really want to know them all? how should wide pixel filters, often with negative lobes, be handled? moreover, these won't be the same as the ones used to compute the image because keeping track of them would slow rendering a hell of a lot).
4. "qualitative statements on lighting analytical means of reasoning"? in english?
5. photoshop dof? nonononono... *puts on best leonidas voice* THIS IS INDIGO!
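point 1 really is that simple; here's a minimal sketch, assuming the .igi file has already been decoded into (X, Y, Z) float triples (the actual .igi file layout is not shown here, that part is an assumption):

```python
# Sketch: pull the luminance channel out of CIE XYZ pixel data.
# In CIE XYZ, the Y component *is* luminance, so extraction is just
# taking the middle component of each triple.

def luminance_channel(xyz_pixels):
    """Return the luminance (Y) of each (X, Y, Z) pixel triple."""
    return [y for (_x, y, _z) in xyz_pixels]

pixels = [(0.2, 0.5, 0.1), (0.9, 1.0, 0.8)]
print(luminance_channel(pixels))  # -> [0.5, 1.0]
```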
ZomB,
depending on what one does with Indigo this would be a crazy feature. There are not many programs out there which can be used for physically accurate lighting analysis, and I know of only one that is open source, called Radiance (http://radsite.lbl.gov).
Due to their architecture, Indigo and other unbiased renderers would be ideally suited to this; however, most of them don't do it, as most users seem less interested in physical accuracy than in "snappy" pictures (and I don't think there is anything wrong with that).
The support of IES files already puts Indigo ahead of many other renderers - including commercially available ones. This would be the next logical step. Also, I don't think Master Ono would have to reinvent the wheel. Radiance is open source!!!
lyc wrote:1. you can get luminance from the .igi file i presume, that stores CIE XYZ, of which the Y channel is luminance.
Great; alas, I can't. Maybe I'll put my head to Violet sometime.
The background is that software like Dialux provides a valuable basis for ergonomics and certification requirements. Moreover, anybody who has worked with engineers will appreciate a means to say: "This solution generates X% more efficient lighting per Watt and fulfills Requirement XY."
An even better scenario: an architect wants to visualize a building. He contacts a specialist who knows a cool piece of software called Indigo, well integrated with his favorite 3D software. He prepares a "realistic" movie and has it rendered overnight at a render farm for 200€. And when he's done, he adds some nice images pointing out that the rendering shows the whole building to be in line with certification requirements, ideally using the same scene setup, but with a light-intensity map. Everybody is happy: the rendering guy, who can avoid setting up the scene for some super-duper radiosity renderer; the architect, who gets a terribly convincing rendering; and the architect's customer, who gets more service and arguments than expected.
lyc wrote:3. the best way to get this data is to do a "normal" top-to-bottom ray trace of the scene in postprocess, spitting out that data. however, you'd still need something like an a-buffer because each pixel can have arbitrarily many obj/mat id's [...]
Good point, but let's assume that during preprocessing, every sample is checked once, determining depth, material and object ID. With 2× supersampling, that should be roughly sufficient to facilitate all later post-processing.
Imagine our architect realizes later that the rendered film contained one material with the wrong colour - and sadly, it's our customer's logo.
Either he is terrific at post-processing, using all kinds of magic, or he uses the material-ID pass as a mask, changes the logo's colour slightly, and no one will notice the slightly off reflections.
Later, his customer wants a slightly softer look, and our savvy compositor adds a very subtle DoF using the depth pass this great rendering software provided.
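The warmup prepass idea above could look roughly like this - a hypothetical sketch only, where the trace(u, v) callable stands in for the renderer's real intersection code (it is not Indigo's API) and returns (depth, object_id, material_id) or None on a miss:

```python
# Hypothetical sketch of "check every sample once during warmup":
# cast one primary ray per subpixel and record depth plus object and
# material ids into supersampled buffers.

def id_depth_prepass(trace, width, height, supersample=2):
    w, h = width * supersample, height * supersample
    buffers = []
    for y in range(h):
        row = []
        for x in range(w):
            # Centre of each subpixel in normalised [0, 1) image coords.
            hit = trace((x + 0.5) / w, (y + 0.5) / h)
            row.append(hit if hit is not None else (float("inf"), -1, -1))
        buffers.append(row)
    return buffers

# Toy "scene": an object with id 7 / material 3 covering the left half.
toy = lambda u, v: (1.0, 7, 3) if u < 0.5 else None
print(id_depth_prepass(toy, 2, 1, supersample=1))
# -> [[(1.0, 7, 3), (inf, -1, -1)]]
```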
- joegiampaoli
- Posts: 837
- Joined: Thu Oct 05, 2006 7:12 am
- Location: San Miguel de Allende-MEXICO
You mean like the images below?
This would be great! I used it a lot in Lightscape and Radiance for Windows when analyzing interior lighting. It's very helpful when doing interior lighting design.
Dialux also does it, and it's free; it outputs results to nice PDF files and does preview renders with POV-Ray. I asked them to port Dialux to Linux; they told me their market is mostly Windows (how can you have a market when you give it away for free???)
(So be it....)
This would be a nice integration for sure, I just don't know how hard it would be for Ono to implement this kind of thing. I think it might even change a lot of his coding.
Still, nice request
- Attachments
-
- Lightscape
- lightscape.jpg (17.92 KiB) Viewed 5003 times
-
- Radiance
- radiance.jpg (14.68 KiB) Viewed 5003 times
bkircher wrote:Great, alas, I can't. Maybe I put my head to violet sometimes.
oh man, it's really trivial. seriously, if it's so important i can write a little tool to rip the luminance channel from an .igi file, map it to a pseudocolour spectrum, and spit out a bmp file.
it'll take 10 minutes at most. what's the problem? lazy? just complaining for the sake of it? :| you have this totally awesome physically based rendering system, completely free, which outputs exactly the data you want, yet you somehow don't want to use the data now that you know it's already there, and prefer to continue requesting it as a feature on the forums?
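the pseudocolour step really is trivial too; a sketch using a naive blue-to-red ramp (a real tool would use a proper perceptual colour map, this ramp is only an illustration):

```python
# Sketch: map luminance values onto a false-colour ramp, darkest =
# blue, brightest = red. Returns (R, G, B) byte triples per pixel.

def false_colour(luminances):
    lo, hi = min(luminances), max(luminances)
    span = (hi - lo) or 1.0            # avoid division by zero on flat input
    out = []
    for y in luminances:
        t = (y - lo) / span            # 0 = darkest, 1 = brightest
        out.append((int(255 * t), 0, int(255 * (1 - t))))
    return out

print(false_colour([0.0, 0.5, 1.0]))
# -> [(0, 0, 255), (127, 0, 127), (255, 0, 0)]
```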
bkircher wrote:Good point, but
that was about 3 or 4 good points, good in the sense that you can basically expect never to see the feature for any of those reasons. well, maybe you'd like to ask ono for the final word on that ;)
just ask yourself: if a pixel is comprised of a weighted sum of thousands of objects+materials, do i really expect anyone to slow their renderer down tremendously just so i can get my 2gb file with per-pixel ids?
remember that indigo would have to keep that WHOLE list for each pixel, together with weights, in memory at once.
bkircher wrote:let's assume that during preprocessing, every sample is checked once, determining depth, material and object ID.
hmm, you don't know how progressive rendering works...
@lyc
Are you serious about the issue with the IDs? I'm wondering, because Maxwell and fryrender implemented the ID channel later, and neither Maxwell nor fryrender was significantly slower with the ID channel enabled. So I wonder what your point is? But maybe you're a close friend of Nick and trying to defend him ... at least it sounds a bit like it. And it would be nice if you would avoid asking people if they are lazy - my POV. It's good that you have coding knowledge, but not everybody does. So dude, maybe take a cold shower to get rid of the heat!
Just my two cents! ;o)
take care
psor
"The sleeper must awaken"
lol, i think there's a misunderstanding here somewhere :P i'm not annoyed, i'm just confused: bkircher was requesting luminance output, and when i told him it's already there he didn't want it. just doesn't make sense to me.
the other thing is that underneath every pixel there can certainly be more than one material and object id, in fact as many as the total number of samples taken for the whole image at maximum (!), so in a region where there is strong dof or lots of geometric complexity (or both), one single id isn't going to be representative of much. i figured that much is at least intuitively clear (or?), and to check against a long list of mat+obj ids every time you hit the image buffer (either to add it to the list or increment the number of times it was used) would be insanely expensive compared to just adding that XYZ contribution - even if you ignore the overhead of tracking all the material and object ids that the ray hit along its path to the light source.
now that i've really explained it in detail, i hope it can be seen why this feature doesn't really make sense in a progressive renderer. well, i don't know what the other renderers are doing for their id channel, but just thinking about that dof/complexity problem (even ignoring anti-aliasing) it's just not possible to give a single mat+obj id per pixel and hope that it is representative. even if you chose the one(s) which "dominate(s)" (has highest weight), that implies that you have to store them all at some stage, and finally sort to find the most frequently occurring one. in short, it's complete hell to implement and in the end you still get some huge file (with all the ids) which isn't of much use.
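to make that cost concrete, here's a sketch (pure illustration, not Indigo code) of what per-pixel id tracking would force onto every splat, and the sort needed afterwards to pick a "representative" id:

```python
# Sketch: per-pixel id->weight bookkeeping in a progressive renderer.
# Every single sample splat must update a map, instead of just adding
# an XYZ contribution - and the dominant id is only known after
# scanning all entries.

from collections import defaultdict

def splat(pixel_ids, px, obj_id, weight):
    pixel_ids[px][obj_id] += weight    # extra work on *every* sample

def dominant_id(pixel_ids, px):
    ids = pixel_ids[px]
    return max(ids, key=ids.get) if ids else -1

pixel_ids = defaultdict(lambda: defaultdict(float))
# A dof-blurred pixel receiving samples from three different objects:
for obj, w in [(7, 0.2), (3, 0.5), (7, 0.4), (9, 0.1)]:
    splat(pixel_ids, (10, 10), obj, w)
print(dominant_id(pixel_ids, (10, 10)))  # -> 7  (weight 0.6 beats 0.5)
```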
HOWEVER, to show some light at the end of the tunnel, what could be done is to just service information requests. i.e., the user clicks somewhere on their rendered image, requesting all infos possible. then indigo spits to the console luminance info (and as i mentioned false-colour luminance display is trivial from the XYZ data), 3d position, distance from camera, a list of the top 100 contributing material and object ids, together with their weights, etc... all this can be done really fast (< 1s) because it's just a single pixel request.
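a sketch of that request-servicing idea (trace_pixel here is an assumed hook into the renderer that re-samples one pixel, not a real Indigo call):

```python
# Sketch: answer a single clicked-pixel query by re-sampling that one
# pixel, merging contributions by (object, material) id and ranking by
# weight - cheap because only one pixel is traced.

def pixel_info(trace_pixel, x, y, top_n=100):
    samples = trace_pixel(x, y)        # list of (obj_id, mat_id, weight)
    totals = {}
    for obj, mat, w in samples:
        totals[(obj, mat)] = totals.get((obj, mat), 0.0) + w
    # Top contributors, heaviest first.
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top_n]

fake = lambda x, y: [(7, 1, 0.5), (3, 2, 0.3), (7, 1, 0.25)]
print(pixel_info(fake, 10, 10, top_n=2))
# -> [((7, 1), 0.75), ((3, 2), 0.3)]
```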
really, i have no beef with anyone and i'm unfortunately not really "friends" with nick (yet? i've known him a while as a programmer and still hope to meet him one day), but re-requesting features that are already implemented after an explanation that it's already there and then motivating difficult features which don't make sense... it's pretty cold here in auckland atmo, but that doesn't stop me being surprised :P
i hope i've made my point really clear and as diplomatic as possible (the possible solution i mentioned would work quite well with indigo's existing architecture), last thing i want is to ruffle feathers here!
oh, and i'd be very interested to see such an id/mtl buffer output btw; if someone with one of these renderers can export the channels for a scene with lots of pixel-level complexity (strong dof, high geometric complexity, motion blur, lots of reflection+refraction, ...) that would be really awesome :D
Hehe, ... ok lyc!
Thanks for the explanation and for clearing things up. And yes, I do understand that requesting stuff that is already there - sounds funny - is stupid, BUT the problem is that people have no access to it, because Violet is, HAHA, open-sourced but nobody is maintaining it and so nobody is adding features to it. So of course people request this stuff, even if it's there - because they can't use it! ... rolling the ball over to ya, maybe you can find some time to get this into Violet, so the problem is solved. Hehe ... ;o))
And if you wanna see some ID channels from the other renderers, just download their demo versions and try for yourself. If you need a scene, let us know!
Hope we get somewhere here. Thanks!
edit: You can download Maxwell Render here - it's the Italian website, because there you don't have to fill out any form to get your hands on the program and all the updates + plugins. And fryrender you can get at fryrender.com, but you have to fill out the form to get the stuff. Have fun!
take care
Oleg aka psor