question for the pros
why can't a renderer be based on the human eye instead of a camera?
if possible it would revolutionize the way rendering is done. just imagine: everything would render exactly how you expect it to. you wouldn't need a photographer's eye, you could use your own. you wouldn't need f-stop, white balance, lens, or whatever other settings there are.
just a thought, maybe a ridiculous one, i don't know. what i do know is you'd never have to answer a question again with "think about when you take a photo, the window is bright or the interior is dark."
actually, you would need most of the stuff...
f-stop is the size of the pupil; Reinhard in combination with some other stuff is the tone mapping the eye "uses". also the white balance is done automatically by the eye, but cameras usually have an "auto" setting for that too.
The eye even has its own AA process: directly in the eye, long before the information reaches the brain, there are nerve bundles that share colour and shape information -> this helps to see details and shapes.
Then, and this is an important point: if you can't add "feeling" into a computer (or camera), you'll never get similar results. Before the final impression is created, the eye's information mixes with your other senses and, additionally, with your momentary feelings and thoughts. Getting "eye-like" cameras is a big science. Maybe some parts of that are easier on the computer, though... but surely not the whole eye.
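Since the post above maps f-stop onto pupil size and points at Reinhard as roughly the tone curve the eye "uses", here is a minimal sketch of that idea in Python. The function name and the test values are made up for illustration; this is not Indigo's actual tone mapping code.

```python
import numpy as np

def reinhard(hdr_lum, exposure=1.0):
    """Global Reinhard curve: compresses unbounded luminance into [0, 1).

    `exposure` plays the role the f-stop / pupil size plays in the post
    above: it scales how much light 'enters' before the curve compresses it.
    (A rough sketch only, not Indigo's actual tone mapping.)
    """
    l = hdr_lum * exposure   # 'pupil': linear scaling of scene light
    return l / (1.0 + l)     # Reinhard: bright values roll off instead of clipping

# Same scene, two 'pupil' sizes: wide open vs. strongly contracted
scene = np.array([0.01, 1.0, 100.0, 10000.0])
print(reinhard(scene, exposure=8.0))    # dark interior readable, window rolls off
print(reinhard(scene, exposure=0.05))   # window detail kept, shadows crush
```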

I think it would be a great idea to make rendering software figure it all out. That's what computers are for.
I'm sure if someone wanted to, they could write the code that would automatically adjust the virtual pupil and every other thing that might need setting.
Lots of things are possible. Perhaps it's more that people want to replicate the techniques used in photography in order to reproduce some of the artistic nuances that make a photo eye-catching.
I think there would be a place for a renderer that renders automatically, the way we see, or at least offers that as an option, and I don't believe it would be such a huge step to do.
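As a rough illustration of what "automatically adjust the virtual pupil" could mean in practice, here is a sketch that derives an exposure factor from the scene's own log-average luminance. The helper name, the 0.18 mid-grey target and the test values are assumptions for illustration only, not anything Indigo actually does.

```python
import numpy as np

def auto_exposure(hdr_lum, target_mid_grey=0.18):
    """Derive an exposure factor from the scene itself, so the
    log-average luminance lands on a chosen mid-grey value -
    a crude stand-in for a self-adjusting 'virtual pupil'."""
    log_avg = np.exp(np.mean(np.log(hdr_lum + 1e-6)))
    return target_mid_grey / log_avg

# A dim interior gets a large exposure factor, a sunlit scene a small one
dim_interior = np.full(1000, 0.02)
sunlit_yard  = np.full(1000, 500.0)
print(auto_exposure(dim_interior))   # ~9.0
print(auto_exposure(sunlit_yard))    # ~0.00036
```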
Re: question for the pros
xrok1 wrote: why can't a renderer be based on the human eye instead of a camera?
Because in the end you always view the final image through your own eyes anyway.
The way we see the real world is more a question of the brain than of the eye, so as long as we don't have a direct way to feed images into the brain...

Sorry about my poor english 

You're right: in theory, it's pretty easy to get parameters that emulate an eye. As long as the scene is physically based (lighting intensities, etc.), it might even be possible to automatically calculate the pupil "aperture," rod-cone ratios for saturation, etc.
I think the issue is that our eyes can accurately see many, many more stops than computer screens can display (until we get nice HDR displays). So even though Indigo might calculate an image with 100% physical correctness and 100% of the dynamic range our eyes can see, you still have to convert to LDR. That's where the "window is bright or the interior is dark" issue comes from...
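To make that dynamic-range point concrete, here is a tiny sketch of the naive HDR-to-LDR step. The radiance and exposure numbers are invented for illustration; the point is only that one exposure cannot keep both ends of the range on an LDR display.

```python
import numpy as np

def expose_and_clip(hdr_lum, exposure):
    """Naive 'camera' mapping: scale by exposure, then clip to the display's [0, 1]."""
    return np.clip(hdr_lum * exposure, 0.0, 1.0)

# Hypothetical radiance values: dark interior detail vs. a bright window
interior, window = 0.02, 500.0

# Expose for the interior: the window blows out to pure white
print(expose_and_clip(np.array([interior, window]), exposure=25.0))    # [0.5, 1.0]

# Expose for the window: the interior crushes to near black
print(expose_and_clip(np.array([interior, window]), exposure=0.0015))  # [0.00003, 0.75]
```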
Hi everybody
Caronte is absolutely right... we see with our brain and not with our eyes.
Even today the scientific community doesn't fully understand all the steps involved in the seeing and recognition process. It is more a process of reconstruction than a projection process like the one that happens in a camera.
So I doubt this could be accomplished in an easy way, or even get close to how we perceive reality/images... but tone mapping is probably the way to go, maybe with some additional adjustments in the render engine.
Greetings Patrick
i think if you took out an eyeball and hooked it to sensors and a computer, without emotions and perception, you would have a measurable set of parameters. some of the statements seem to be adding the human condition into the equation, and that's not what i'm talking about at all. i'm talking about the physical characteristics, nothing else.
...I see...
(haHAhahahahaha what a joke [ok, I stop now])
then, you still have some big differences - but it should be possible...
last problem - not sure about that:
which SHAPE does our field of view have?
Definitely not rectangular... I'm not sure, it's hard to check that objectively. then there is the yellow spot (the fovea), where you see best, and the blind spot, where you don't see at all...
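If someone wanted to fake that centre-weighted, non-rectangular "shape" of sight as a post-process, one very rough option is a radial weight map around the fixation point, used to blend a sharp render with a blurred copy. Everything below (the falloff curve, the parameter names, the numbers) is an assumption for illustration, not a model of real acuity data.

```python
import numpy as np

def foveal_weight(height, width, fix_y, fix_x, falloff=0.25):
    """Weight map: 1.0 at the fixation point, falling off radially,
    as a crude stand-in for the centre-weighted 'shape' of sight.
    (Assumed Gaussian falloff; real acuity data would differ.)"""
    ys, xs = np.mgrid[0:height, 0:width]
    # distance from the fixation point, normalised by the image diagonal
    d = np.hypot(ys - fix_y, xs - fix_x) / np.hypot(height, width)
    return np.exp(-(d / falloff) ** 2)

# Possible use: final = w[..., None] * sharp + (1 - w[..., None]) * blurred
w = foveal_weight(480, 640, fix_y=240, fix_x=320)
print(w[240, 320], w[0, 0])   # ~1.0 at the centre, near zero in the corner
```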
check ebay!
but seriously, how about a preset to simulate what you would expect to see? this would accomplish the same thing. forget the "this is what a camera sees" for one second and give me a point-and-render "eye view". that's what's so attractive about indigo in the first place: you get almost what you expect. i guess if you're a photographer you do get what you expect, but i'm not.
there are quotes on this site similar to "even professional photographers use tricks". if you ever watch a making-of movie, night-time filming is done during the day, and there are lights, reflectors and diffusers everywhere. why are we so gung-ho on simulating this? we know what we want to see, it's not intangible, so make it happen. the programmers simulated what they wanted to see through a camera. i just think it's not the perfect model to follow, it's inherently faulty, so why build off it? i can take a stroll through many different environments and i don't have to carry lights and lenses in a pack.
a computer will do what we ask of it, we just need the right question.
the only problem is that there is no 'default' for the eye
the pupil is constantly expanding and contracting depending on the amount of light reaching it and whatnot. I guess you could take the midpoint between the smallest contraction and the widest expansion possible for the pupil, but I think the amount of light let in as a result of this contraction/expansion is not linear but exponential (not sure, could be saying something really stupid there =P)
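For what it's worth, the light admitted scales with pupil *area*, i.e. with the diameter squared, so the pupil on its own only buys a few stops; most of the eye's adaptation happens in the retina and brain. A back-of-the-envelope check (the 2 mm and 8 mm figures are rough assumed values):

```python
import math

def relative_light_admitted(pupil_diameter_mm):
    """Light admitted scales with pupil area, i.e. with diameter squared."""
    return math.pi * (pupil_diameter_mm / 2.0) ** 2

d_min, d_max = 2.0, 8.0   # rough human pupil diameter range (assumed figures)
ratio = relative_light_admitted(d_max) / relative_light_admitted(d_min)
print(f"pupil alone covers only ~{ratio:.0f}x light, i.e. about 4 stops")
# The remaining many stops of adaptation are retinal/neural, which is why
# there is no single 'default' pupil setting you could just bake in.
```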
pupil simulation is a modified Reinhard, I'd say (merging f-stop and Reinhard tonemapping into ONE equation).
shouldn't be that hard, I *guess*, you only have to know which Reinhard setting matches which f-stop value....
then, another problem, which would also have to be handled in the Reinhard algo:
colour falloff in darkness - dark means bluish...
then, for animation, you need a NOT immediate pupil size change (see the sketch after this post).
as already said, the field of view is also a problem...
A problem you can't get rid of, no matter which algo: every human sees differently. the maximum pupil size and the minimum pupil size are not the same for everyone - you have to allow a quite wide range...
Also the Reinhard settings to use depend on the person.
And the shift towards blue when it's dark also varies.
If anyone finds a way to take all of these (and many more I'm not aware of, but which surely exist) into account, it would be a great addition to indigo, indeed.
Additional features you could extend that with:
resolution as high as the eye's + the eye's internal AA algo (that part is already done in the eye) (extremely crazy, but cool if it worked)
3D rendering - two eyes at once
auto-HDR (finding nice settings, roughly what the brain would use, instead of standard settings + manual tonemapping [as this can't work perfectly for everyone, still with the possibility of changing it, obviously])
real motion blur (with shaped motion blur, not only linear)
real glare and bloom (real camera model, I guess)
Advanced option: iris colour
Advanced option: near-sightedness and far-sightedness
...many more I just can't think of atm...
and after having a human eye, some special eyes wouldn't be bad either:
cat - how does a cat see when its eyes are glowing from reflected light
insect (day and night, as there are differences in sight)
any flying animal, for a different kind of two-eyed sight
water animals, if there is a big difference - fish, e.g.
and so on
giga request xD (lowest priority of all requests 'till now, though, I guess)
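The "NOT immediate pupil size change" point above is essentially temporal adaptation. A common way to sketch it is to let the adaptation level chase the current frame's average luminance with an exponential lag; the time constant and the luminance numbers below are assumptions for illustration, not values from any real renderer.

```python
import math

def adapt_luminance(prev_adapted, frame_avg_lum, dt, tau=0.5):
    """Move the adaptation level toward the current frame's average
    luminance with an exponential lag instead of jumping instantly.
    (tau is an assumed adaptation time constant in seconds.)"""
    alpha = 1.0 - math.exp(-dt / tau)
    return prev_adapted + alpha * (frame_avg_lum - prev_adapted)

# Walking from a dark room (avg 0.05) into daylight (avg 800) at 25 fps:
adapted = 0.05
for frame in range(10):
    adapted = adapt_luminance(adapted, 800.0, dt=1.0 / 25.0)
    # feed `adapted` into the tone mapper as the scene key for this frame
    print(f"frame {frame}: adaptation level {adapted:.1f}")
```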