
Follow my pitiful progress!


iwillburnbush

Well I've set out upon my journey into GPU processing power, and after just 2 hours in front of my computer with Visual Studio C++ 2005 Beta 1 and some other secret ingredients I have managed to display a completely red screen while the GPU calculates 1+1 over and over again in an endless loop. Unfortunately I didn't really think about having the GPU able to process any inputs (such as the keyboard or mouse), so all it does is display a red screen and heat up the GPU for an infinite amount of time (unless you press the reset button on your case, of course)...
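In case anyone's curious what the shader side of that looks like, here's a made-up sketch (not the actual code) of a ps_2_0 pixel shader that does the 1+1 and paints everything red, compiled at runtime with D3DXCompileShader:

Code:
#include <d3dx9.h>
#include <cstring>

// Hypothetical sketch: the "1+1" comes in through a constant register so the
// compiler can't fold it away, and every pixel comes out red. The sum is
// never read back, which is why all you get is a red screen and a warm GPU.
static const char* g_red_ps =
    "float2 ab : register(c0);        // set to (1, 1) from the CPU side      \n"
    "float4 main() : COLOR                                                    \n"
    "{                                                                        \n"
    "    float sum = ab.x + ab.y;     // the 'calculation', once per pixel    \n"
    "    // solid red; the sum lands in alpha so it isn't dead code           \n"
    "    return float4(1.0f, 0.0f, 0.0f, saturate(sum - 1.0f));               \n"
    "}                                                                        \n";

// Compile the string above and hand the shader back (link with d3dx9.lib).
HRESULT CreateRedShader(IDirect3DDevice9* dev, IDirect3DPixelShader9** ps)
{
    ID3DXBuffer* code = NULL;
    HRESULT hr = D3DXCompileShader(g_red_ps, (UINT)strlen(g_red_ps), NULL, NULL,
                                   "main", "ps_2_0", 0, &code, NULL, NULL);
    if (FAILED(hr)) return hr;
    hr = dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), ps);
    code->Release();
    return hr;
}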

After seeing how that worked, I'm fairly confident that I could have a program do something by the end of October. However, I'm kind of nervous about it actually being functional, as opposed to solving equations for no point. I've also learned how hard it is to port even the most basic program from DirectX to OpenGL, so I may have to postpone that endeavor until after I get this to work on one platform...

I'll be sure to update this thread as often as I make any important breakthroughs, and hopefully I will have some files up for you all to download and play with in the not too far off future! :thup: :burn:
 
I'm a bit worried about how much our cards would heat up. I know CPUs are made to run at 100%, but what about GPUs? I remember quite a few times where cards would crash running Doom 3 due to overheating.
 
That is a good point, but when you play any newer game on a fast system at a high resolution, the GPU and video RAM are close to 100% usage.

I don't know of any GPU usage monitors out there and I'm afraid I have no clue about how to go about making one (I'm using strictly high-level code) so I guess you'll just have to watch those temps...
 
Well I added an escape key function, but when I debugged the program I realized that it is storing the variables in the system RAM as opposed to the video RAM. That seems a bit weird to me, since I'm using "dummy textures" as variables and the GPU should store textures in the video RAM, so I guess I should look into it.

I also cheated a bit on the first version by using the ATI SDK, so it will only run on an R3x0-class GPU, but the new version uses only DirectX functions that are not specific to ATI or Nvidia (or any other video card maker for that matter, as long as it's DirectX 9.0(b/c) compatible).

My next version will hopefully run all of the C++ code (and anything that the GPU cannot run efficiently enough) off of the CPU, perform all the calculations on the GPU, and use the video RAM for variables that are accessed by the GPU and the system RAM for variables that are accessed by the CPU.
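For anyone wondering how that split actually gets expressed in Direct3D 9, it comes down to which memory pool each texture is created in. This is just an illustrative sketch with made-up names and sizes, not code from the project:

Code:
#include <d3d9.h>

// The pool argument decides where each "variable" lives. A shader can only
// write into a render target, and render targets have to be D3DPOOL_DEFAULT
// (video memory); a plain lockable copy for the CPU goes in D3DPOOL_SYSTEMMEM.
void CreateVariableTextures(IDirect3DDevice9* dev,
                            IDirect3DTexture9** gpuVar,   // GPU-side variable
                            IDirect3DTexture9** cpuVar)   // CPU-side copy
{
    // 256x256 "variables", one 32-bit float per texel; sizes are made up.
    dev->CreateTexture(256, 256, 1, D3DUSAGE_RENDERTARGET, D3DFMT_R32F,
                       D3DPOOL_DEFAULT, gpuVar, NULL);    // video RAM

    dev->CreateTexture(256, 256, 1, 0, D3DFMT_R32F,
                       D3DPOOL_SYSTEMMEM, cpuVar, NULL);  // system RAM
}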

As for folding, I would have no clue about how to go about folding a protein, so for now I can see no application for this software in the Folding world, but if I can get it streamlined and flexible enough then I'm sure somebody a lot smarter than me will do it. Also, since GPUs often display artifacts, or anomalies in the textures (which I'm using as variables), I doubt that it would be accurate enough for folding...
 
GPUs cannot execute programs the way CPUs can. They have specially designed pipelines that are meant for processing graphics. However, I have heard of certain types of matrix and vector operations being performed on GPUs by loading the matrices into a texture and then using a pixel shader program to do basic arithmetic operations on the matrices/vectors.
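To make that concrete, the trick looks roughly like this (a sketch with made-up names, not anybody's real code): each texel of two float textures holds one matrix element, and a ps_2_0 shader combines them while rendering into a third texture.

Code:
// matrixA and matrixB are float textures holding the two operands, one
// element per texel. Drawing one full-screen quad into a render-target
// texture runs this once per element, so the "output image" is really the
// result matrix.
static const char* g_matrix_add_ps =
    "sampler matrixA : register(s0);                                   \n"
    "sampler matrixB : register(s1);                                   \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR                        \n"
    "{                                                                  \n"
    "    float a = tex2D(matrixA, uv).r;   // element of A              \n"
    "    float b = tex2D(matrixB, uv).r;   // element of B              \n"
    "    return float4(a + b, 0, 0, 0);    // element of the result     \n"
    "}                                                                  \n";

A real matrix multiply works the same way, just with more tex2D fetches per output element, which is exactly where the instruction limits mentioned below start to bite.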

However, there are significant limitations to this approach. First, there are limits on the length of pixel shader programs and on texture sizes. Plus, since you have to move the data to the GPU to work on it and then move it back, you are limited by the bandwidth to and from the GPU, and you also incur further performance penalties on your CPU because of all that moving (unless you can load everything onto the GPU at once). Additionally, the Pixel Shader 2.0 instruction set is simplistic at best; in fact, I don't think it even has a divide operation. Not only that, but you kind of hit on the fact that GPU floating-point accuracy is pretty damn terrible (they can afford to be a little sloppy with textures, as your eyes aren't sensitive enough to notice the difference).
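Just to illustrate the "moving it back" part: in Direct3D 9 the only way to get the shader's results into CPU-visible memory is a copy along these lines (made-up names, error handling skipped), and GetRenderTargetData stalls the whole pipeline before the bus transfer even starts.

Code:
#include <d3d9.h>
#include <cstring>

// Hypothetical readback path: copy the render-target texture (video memory)
// into a system-memory surface, then lock it so the CPU can read the floats.
// GetRenderTargetData forces the GPU to finish everything queued first.
HRESULT ReadBackResults(IDirect3DDevice9* dev, IDirect3DTexture9* resultTex,
                        UINT w, UINT h, float* out /* w*h floats */)
{
    IDirect3DSurface9* gpuSurf = NULL;
    IDirect3DSurface9* cpuSurf = NULL;
    resultTex->GetSurfaceLevel(0, &gpuSurf);
    dev->CreateOffscreenPlainSurface(w, h, D3DFMT_R32F, D3DPOOL_SYSTEMMEM,
                                     &cpuSurf, NULL);

    // The expensive part: a pipeline stall, then a copy over AGP/PCI Express.
    HRESULT hr = dev->GetRenderTargetData(gpuSurf, cpuSurf);

    if (SUCCEEDED(hr)) {
        D3DLOCKED_RECT lr;
        cpuSurf->LockRect(&lr, NULL, D3DLOCK_READONLY);
        for (UINT y = 0; y < h; ++y)      // rows can be padded, hence Pitch
            memcpy(out + y * w, (BYTE*)lr.pBits + y * lr.Pitch,
                   w * sizeof(float));
        cpuSurf->UnlockRect();
    }
    cpuSurf->Release();
    gpuSurf->Release();
    return hr;
}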

Essentially, running complex programs like folding on a GPU will be nearly impossible due to the restrictions on program size and the extremely limited instruction set. Although it is true that some algorithms are well suited to running on a GPU, there aren't many, and they need to be specially tailored for it. If you are thinking of writing a full program with a user interface, or anything more functional than a simple algorithm, I hate to say it, but you are probably out of luck.

By the way, not to insult you, but going from BASIC to hardware programming is quite a big step... maybe you should try a more traditional path in terms of progression from language to language. Going from BASIC to hardware programming is like going from a tricycle to a 1000 HP Viper.
 
Well pretty much everything you said is true, except I do have a lot of programming experience with C++ and a fair amount with Java, I just haven't tried programming a program to use the graphics card since BASIC and the first graphics guides for it came out, like a really long time ago.

Anyways, aside from doing basic mathematical computations, I haven't really accomplished anything yet. I'm really determined to do something though, so until I hit a brick wall with another concrete wall with steel reinforcements behind it, I'll keep trying.

Which reminds me, the "textures" weren't being stored in the video memory because of some stupid mistake on my part. Basically what it was (and until I find the time to fix it, still is) doing was/is trying to run the calculations using a software renderer and then a hardware renderer, so the CPU and GPU had to do the same work. Obviously this is no good and defeats the purpose of the project, so I'd definitely say it would still be called a failure if you only look at it up to this point.
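If the duplicate work really is coming from a software device getting created next to the hardware one, the fix should just be to create the HAL device only. A quick sketch of the idea (again, a guess at the bug, not the project's actual code):

Code:
#include <d3d9.h>

// Create only the hardware (HAL) device; D3DDEVTYPE_REF is the software
// reference rasterizer that makes the CPU redo all the work.
IDirect3DDevice9* CreateHardwareOnlyDevice(IDirect3D9* d3d, HWND hwnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;   // just use the desktop format

    IDirect3DDevice9* dev = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);
    return dev;
}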
 
iwillburnbush said:
Well pretty much everything you said is true, except I do have a lot of programming experience with C++ and a fair amount with Java... until I hit a brick wall with another concrete wall with steel reinforcements behind it, I'll keep trying.

Just sayin though... assuming that you use Pixel Shader programs to do whatever you want to do on the GPU, I think the limit is like 1000 instructions or something like that. That's an abysmally low number from such a low-level standpoint.

It's actually too bad really, since the GPU is such a powerful processor. I just read something about a new project using the GPU to do high-quality, high-performance sound mixing. They got the GPU operating at something like 40 GFLOPS... ridiculously high performance.
 
Well for certain calculations a GPU can be between 4 and 10 times faster than a CPU, but for running the actual software they are extremely slow, so what I'm doing is letting the GPU do all of the calculations and letting the CPU run the software. I'm not trying to run a million instructions off of it, only a few hundred at a time, so even early shader versions could handle it.

School for me starts on Monday, and this weekend is really packed for me, so I don't know when I'm going to get to work on this again. This morning I figured out how to do some of the stuff in OpenGL, but it seems to be a lot trickier...
 
you should make it run from the GPU, then let the CPU do the visual processing :p

I read the stuff about using the GPU as an audio processor...the results sound awful! how about running 3 instances of folding - 2 on the HT CPU and one on the VGA, lol
 
Sjaak said:
you should make it run from the GPU, then let the CPU do the visual processing :p

I read the stuff about using the GPU as an audio processor...the results sound awful! how about running 3 instances of folding - 2 on the HT CPU and one on the VGA, lol

Hell, why not 4 instances: 2 on the HT CPU, 1 on the GPU (or 2 in SLI hehe), and one on the CPU on the sound card (the SB Live! proc is like 100 MHz).

Anyway, I hear that when we game we use less than 80% of the GPU on any given video card, since it has sections to do OpenGL- and DirectX-type stuff and it's not all being used at the same time. I remember once reading that if a GPU were ever at 100% it would put out heat equal to or greater than a CPU.

Another question, isn't the 2D rendering of the cursor done by the video card? I always thought it was...
 
Tebore said:
Another question, isn't the 2D rendering of the cursor done by the video card? I always thought it was...

Beats me lol... I don't know anything about that kind of stuff but it's cool reading about it. I would really like to see that GPU-based (OS??) thing working
 
Sjaak said:
Beats me lol... I don't know anything about that kind of stuff but it's cool reading about it. I would really like to see that GPU-based (OS??) thing working

Seriously, back in the days when 2D was just starting out and was all the rage, you had cards advertising "hardware mouse cursor", and I had an update for my laptop for "Hardware cursor rendering support". So I assumed that today most cards actually render the cursor, not the CPU. I mean hey, if the wallpaper is loaded in video memory.....
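For what it's worth, Direct3D 9 does have a cursor API that uses the hardware cursor when the card supports it, so under D3D the card really can draw it. A tiny sketch (building the cursor surface itself is omitted):

Code:
#include <d3d9.h>

// Hand the driver a 32x32 A8R8G8B8 surface and let it composite the cursor
// from then on (the runtime falls back to emulation if the card can't).
void UseHardwareCursor(IDirect3DDevice9* dev, IDirect3DSurface9* cursorBitmap)
{
    dev->SetCursorProperties(0, 0, cursorBitmap);    // hotspot at (0, 0)
    dev->ShowCursor(TRUE);                           // D3D draws it now
    dev->SetCursorPosition(100, 100, D3DCURSOR_IMMEDIATE_UPDATE);
}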
 
Umm, the Windows shell won't be graphically accelerated (at least hardware-wise) until Longhorn comes out...

And Folding on a GPU would be counterproductive!!!! You know how video cards sometimes show artifacts, even at stock speeds?? Well imagine that one pixel being a quarter of the protein that the program is working on. Precision on GPUs is very bad, mostly because most people never notice a mis-colored pixel or other graphical anomaly. Besides, I have no clue about proteins, so how would I create a program to fold them? lol

Anyways, I was kinda preoccupied today with some other stuff (hint: it included boobs, but not my computer or TV), so I didn't get any farther on the project. Tomorrow doesn't look good for the project either, and Monday I start school, so I don't know when I'm going to get to work on this again, but I hope it's not a week or two from now... I'll still get it done, but I'm starting to worry a bit.
 
iwillburnbush said:
Anyways, I was kinda preoccupied today with some other stuff (hint: it included boobs, but not my computer or TV)...

lol anytime boobs get entered into the equation nothing gets done :)
 
copernicus said:
lol anytime boobs get entered into the equation nothing gets done :)

yeah man, you know what's up!

Well, during a late-night programming session I came across some documentation that talked about performing matrix calculations on the GPU directly, without storing them in textures first, using OpenGL. I am still looking into it, and if it's legit then that would mean this could potentially work on any platform. However, I just spent a week learning about DirectX and now I have to learn about OpenGL too. I gotta learn how to manage my time better I guess...

I'll try to get a sample program up somewhere so you can download it (I hate to talk about all this stuff and not let you see what's going on for yourselves), but I'm not the best programmer, and until I get something that doesn't freeze my computer every couple of times I try to run it, I'd rather not share. I don't want to destroy everybody else's computers along with mine (is it even possible for a program to destroy the hardware like that? well, you get the point).
 
iwillburnbush said:
And Folding on a GPU would be counterproductive!!!! You know how video cards sometimes show artifacts, even at stock speeds?? Well imagine that one pixel being a quarter of the protein that the program is working on. Precision on GPUs is very bad, mostly because most people never notice a mis-colored pixel or other graphical anomaly. Besides, I have no clue about proteins, so how would I create a program to fold them? lol
True dat. :) Single precision (IEEE 32-bit, which nVidia cards support) isn't exactly precise... Double precision (64-bit, supported by no GPU as far as I know) is much more likely to be used by scientific programs.
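A quick toy example of why that matters (plain CPU-side C++, nothing GPU-specific): add 0.1 a million times in single and in double precision and compare.

Code:
#include <cstdio>

int main()
{
    float  f = 0.0f;
    double d = 0.0;
    for (int i = 0; i < 1000000; ++i) {
        f += 0.1f;   // single precision, the best ps_2_0-class hardware offers
        d += 0.1;    // double precision, CPU-only territory in 2004
    }
    // The exact answer is 100000; the float drifts by a lot, the double barely.
    std::printf("float : %f\ndouble: %f\n", f, d);
    return 0;
}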


/me just wishes a GPU SETI client could be released. They even have public source code if you're crazy ;) :p

JigPu
 