I just wanted to pre-thank anyone who contributes to this thread (lol).
OK, here goes. I have an application (which I wrote/co-wrote) with a long run time that depends on some variables passed to it (mainly accuracy variables: the more accurate, the longer the run time, which makes sense). In the hope of speeding it up, I wrote a threaded version of the program. However, what I'm noticing is that the threaded version takes as long, possibly longer, to run. The thing is, the threaded version runs on an 8-processor IA-64 system, and it seems to be using only 2 or 3 processors at about 30% each (it fluctuates). My guess is that the 6 running threads are each getting roughly 30% of the same 2 CPUs.
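Here's the generic shape of the threaded part, assuming pthreads (all the names are made up, since the real code is confidential):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 6

/* Made-up stand-in for the expensive, repeated part of the algorithm. */
static void *crunch_chunk(void *arg)
{
    int chunk = *(int *)arg;
    /* ... heavy numeric work on this chunk ... */
    printf("chunk %d done\n", chunk);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int chunk_ids[NUM_THREADS];
    int i;

    /* Split the slow section into NUM_THREADS chunks, one thread each. */
    for (i = 0; i < NUM_THREADS; i++) {
        chunk_ids[i] = i;
        pthread_create(&threads[i], NULL, crunch_chunk, &chunk_ids[i]);
    }
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    return 0;
}

(Compiled with -lpthread.)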
What I would like is for one thread to use as much of a given CPU as possible, and for a newly started thread to use a free CPU, if one is available, instead of sharing an already-busy one. That way the application should actually speed up. The standalone (non-threaded) app uses 90+% of a single CPU in this part of the algorithm, which is why I split and threaded it: this part is repeated several times. The sketch above shows the generic shape of what I'm doing; I can't post the actual code because of its confidentiality. Thanks again in advance for any and all comments (even spiteful ones).
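From what I've read, one option might be to pin each thread to its own CPU. On Linux/glibc there's pthread_setaffinity_np (a GNU extension, so it needs _GNU_SOURCE). I'm not sure whether that's the right fix for what I'm seeing, but a minimal sketch would look something like this:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the given thread to a single CPU so the threads don't pile
 * onto the same two or three processors. */
static int pin_thread_to_cpu(pthread_t thread, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(thread, sizeof(set), &set);
}

static void *worker(void *arg)
{
    /* ... heavy numeric work ... */
    return NULL;
}

int main(void)
{
    pthread_t threads[6];
    int i;

    for (i = 0; i < 6; i++) {
        pthread_create(&threads[i], NULL, worker, NULL);
        /* One CPU per thread: thread i goes on CPU i. */
        if (pin_thread_to_cpu(threads[i], i) != 0)
            fprintf(stderr, "could not pin thread %d\n", i);
    }
    for (i = 0; i < 6; i++)
        pthread_join(threads[i], NULL);
    return 0;
}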