GPU Rendering anyone?!
Hello, I'm very new to the forum and new to Indigo. I've started using it with Blender and I like it. I don't have many problems; I only have to learn to deal a bit with textures, and that's all.
What has lately become really disturbing is all the rumors about future GPGPU processing, i.e. using your video card to accelerate the job done by the CPU. nVidia is already out with CUDA and their cards now rock at F@H, and ATi will release Catalyst 8.12 in December, unlocking the stream computing power of their video cards. I also know there is software around that already supports GPU rendering, and the speedup is anywhere from 10 to 50x over a CPU.
Has anyone thought about developing such capabilities for Indigo?! One can spend a fortune building a supercomputer or a render farm, but when I think I could use my $80 Radeon HD 4670 to render 10 times faster than an Intel Extreme quad, it just makes it hard for me to fall asleep at night.
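To make the GPGPU idea above concrete, here is a toy sketch (not Indigo's actual code, and the `shade` function is purely made up) of why rendering maps so well to graphics cards: every pixel or sample is computed independently of all the others, so the same small function can in principle run on thousands of GPU threads at once, which is the core idea behind CUDA and ATi's stream computing.

```python
def shade(pixel_index, width):
    """Toy per-pixel job: completely independent of every other pixel.

    A real renderer would trace rays here; this stand-in just derives a
    value from the pixel coordinates.
    """
    x, y = pixel_index % width, pixel_index // width
    return (x * 13 + y * 7) % 256


def render(width, height):
    # On a CPU this loop runs (mostly) serially; on a GPU each iteration
    # would become one lightweight thread, launched in bulk.
    return [shade(i, width) for i in range(width * height)]


image = render(4, 2)  # 8 independent pixel jobs
```

Because no iteration depends on any other, the loop can be split across as many processors as the hardware offers, which is exactly what GPUs are built for.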
Search the Fu****g Forum ^^
Sorry for the harsh welcome, m25... don't take it personally!
GPU-related speedups in unbiased render engines will hopefully be a topic in 2009.
If you check http://www.gpgpu.org/ you'll see that there is not much to find about raytracing on GPUs. Double precision and a large amount of RAM are the two most important things to start with!
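A quick illustration of why double precision matters here (my own example, not anything from Indigo): an unbiased renderer averages huge numbers of samples, and in single precision the running sum loses accuracy as it grows. The sketch below emulates 32-bit floats with Python's `struct` module and compares the drift against 64-bit arithmetic.

```python
import struct


def to_f32(x):
    """Round a Python float (IEEE double) to single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]


n = 1_000_000          # pretend each addition is one sample's contribution
sample = 0.1

f64_sum = 0.0
f32_sum = to_f32(0.0)
sample32 = to_f32(sample)
for _ in range(n):
    f64_sum += sample
    # keep the accumulator rounded to 32 bits after every addition,
    # as single-precision GPU hardware of the time would
    f32_sum = to_f32(f32_sum + sample32)

f64_err = abs(f64_sum / n - sample)   # stays tiny
f32_err = abs(f32_sum / n - sample)   # drifts noticeably
```

The single-precision mean drifts visibly from 0.1 while the double-precision one barely moves, which is why double-precision support on cards like the HD 4000 series was seen as a prerequisite for serious GPU rendering.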
polygonmanufaktur.de
Sorry guys, I did search before, but it looks like the search isn't very smart: I looked for "GPU rendering" and "video card rendering" before posting this, and all it returned were unrelated topics matching only the "rendering" keyword. Sorry again; this is just everywhere:
http://www.custompc.co.uk/news/605181/a ... t-812.html
http://www.insidehw.com/News/Software/A ... ember.html
@ZomB
Double precision is something ATI's HD 4000 series already has built in, and as for large amounts of memory, 512MB-1GB of GDDR3 is the standard today; some midrange $200 cards even have 2GB, and no manufacturer would fit 2GB or more without a reason. ATi is already working with M$, Adobe and others to GPU-enable their apps very soon, and I think V-Ray, mental ray etc. will be part of "the others".
m25 wrote: ... M$ ...
Yay, someone else who uses this abbreviation!
But on a real note, I think a lot of people are going to hold out on GPU processing until Larrabee lands (or is at least a little closer), which should really shake up that aspect, as it is supposedly much closer to a regular CPU than ATI's and nVidia's GPUs are, and the big thing Intel pushes about it is that it is highly programmable (and uses x86 instruction sets).
Yes, I know, my spelling sucks
eman7613 wrote: But on a real note, I think a lot of people are going to hold out on GPU processing until Larrabee lands (or is at least a little closer)...
Yep, another reason for looking towards GPUs for now. Even if it looks a world apart, today's GPUs are already very similar to CPUs, and if ATi and nVidia take it seriously, Larrabee will still have a lot to learn when it lands. There's a lot to get out of all the technology emerging nowadays, especially for our rendering needs; if a midrange card can render tens of Crysis frames each second...