
Does CUDA crunch the work that would or can be crunched by CPU?


ppe1700

Member
Joined
Jan 9, 2007
Looking at my stats, it is taking my GPU around 50 minutes to crunch a single file or piece of work. There are similarly-named work units for my CPU, and sometimes the run time on those is around 12 hours.
Does the GPU get specialty work that is smaller? Or is the GPU really that much better at crunching than a CPU? My GPU only hits mid 40-50% utilisation.

I'm thinking maybe the CPUs get much larger amounts of work.
 

The GPU just crunches much faster. On Einstein@Home my GTX 580 will do a work unit in 38 minutes that takes my i7 870 @ 3.8 GHz over 4 hours. There's no way to increase how much of your GPU a task uses; it's automatic as far as I know. My 580 only uses 40%. Also, it doesn't crunch the same stuff: it only works on tasks made for the GPU.
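The speed gap comes from parallelism: where a CPU core walks the samples one at a time, a GPU maps one thread to each sample and runs thousands of them at once. A minimal sketch of the two styles, with made-up names (`process_sample`, `N`) purely for illustration:

```
// Hypothetical sketch, not any project's actual code.
#include <cuda_runtime.h>

#define N (1 << 20)

__host__ __device__ float process_sample(float x) {
    return x * x + 1.0f;   // stand-in for the real per-sample math
}

// CPU version: one core processes the samples serially.
void crunch_cpu(const float *in, float *out) {
    for (int i = 0; i < N; ++i)
        out[i] = process_sample(in[i]);
}

// GPU version: each thread handles one sample, all in parallel.
__global__ void crunch_gpu(const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        out[i] = process_sample(in[i]);
}
```

The work per sample is identical; the GPU just does tens of thousands of samples at the same time, which is why a 38-minute GPU unit can correspond to a 4-hour CPU unit.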
 
Yeah, I tried to increase GPU usage with an app_info.xml file, but I've removed it now due to problems. Do you know how the GPU tasks are modified? Is it just a purposely written .exe that processes the same radio sample that would/could be computed by the .exe that runs on the CPU?
 

I think it's written using a different type of code. Not sure; I'm not a software guy, but I think I read somewhere about C/C++. Either way, I'd say be happy with the boost.
 

GPU WUs are set up to use the very limited amount of memory the GPU has available, and also to use its tremendous number of "pipelines" (shaders), which can all run in parallel. They're a royal pain to program for, but like the PS3's Cell CPU, they fold like a bat outta hell.

That's why the GPU has to keep communicating with the client so often: it needs another batch of data to keep busy, and it has to return the work it just completed.
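That batch cycle looks roughly like the sketch below: copy a chunk of input to the card's limited memory, launch the kernel, copy the results back, repeat. All names here are illustrative, not from any real client.

```
// Hypothetical sketch of the host-side batch loop described above.
#include <cuda_runtime.h>

__global__ void crunch(const float *in, float *out, int n);  // defined elsewhere

void crunch_in_batches(const float *host_in, float *host_out,
                       int total, int batch) {
    float *dev_in, *dev_out;
    cudaMalloc(&dev_in,  batch * sizeof(float));
    cudaMalloc(&dev_out, batch * sizeof(float));

    for (int off = 0; off < total; off += batch) {
        int n = (total - off < batch) ? (total - off) : batch;
        // Ship the next batch of data to keep the GPU busy...
        cudaMemcpy(dev_in, host_in + off, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        crunch<<<(n + 255) / 256, 256>>>(dev_in, dev_out, n);
        // ...and return the work it just completed.
        cudaMemcpy(host_out + off, dev_out, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
    }
    cudaFree(dev_in);
    cudaFree(dev_out);
}
```

The batch size is bounded by the card's memory, which is why the client and GPU have to talk so often.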
 
Firstly, I apologise to the OP for not getting back to him about the app_info.xml file for so long; midterms for med school are keeping me bogged down :(. I'm not sure how to write an app_info.xml specifically for Einstein, as I don't run it personally, but I'm sure there's a guide somewhere on the E@H forums (if not a stand-alone installer, like the SETI "unified installer" by Lunatics/KWSN).


Apps written for CUDA cards (GTX 2xx/3xx series) use a modified version of C that lacks function pointers and recursion in device code (amongst other things), but has a few abilities that normal C does not. Cards with CUDA compute capability 2.0 (GTX 4xx/5xx) have almost full native support for C++, but are a complete and utter :mad: to code for. Give it time :)
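For a taste of that "modified C": a function meant for the card is marked `__global__`, reads built-in thread indices, and is launched with the `<<<blocks, threads>>>` syntax that plain C doesn't have. A minimal sketch (the `scale` kernel is made up for illustration):

```
// Hypothetical sketch of the CUDA C dialect described above.
__global__ void scale(float *data, float factor, int n) {
    // blockIdx/blockDim/threadIdx are CUDA built-ins, not standard C.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Host-side launch: 256 threads per block, enough blocks to cover n.
// scale<<<(n + 255) / 256, 256>>>(dev_data, 2.0f, n);
```

On the pre-Fermi cards mentioned above, device code like this also couldn't recurse or call through function pointers, which is part of what made porting CPU apps such a pain.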



Indeed. Though hopefully with newer iterations of CUDA (GTX 6xx series?) we will see full native C++ support, where you could possibly have the entire app run on the GPU, with either direct access to the HDD for data or (more likely) the whole thing executing on the GPU, essentially making it as diverse in function as a CPU :D
 