Speeding up a threaded application

Mastik Posts: 19 Member
Hello all,

I just wanted to pre-thank anyone who contributes to this thread (lol).

OK, here goes. The problem is that I have an application (which I wrote/co-wrote) with a long run time that depends on some variables passed to it (mainly accuracy variables: the more accurate, the longer the run time, which makes sense). In the hope of speeding it up, I wrote a threaded version of the program. However, what I am noticing is that the threaded version takes as long, possibly longer, to run. The threaded version is running on an 8-processor IA-64 system, and it seems to use only 2 or 3 processors at about 30% each (it fluctuates). My guess is that the 6 running threads are each using roughly 30% of the same two CPUs.

What I would like is for one thread to use as much of a given CPU as possible and, when a new thread is started, to have it run on an available CPU instead of sharing the same one. That way the application should speed up. The standalone (non-threaded) app uses 90+% of a single CPU in this part of the algorithm, which is why I split off and threaded that part: it is repeated several times. I can post generic code of what I am doing, but I can't post the actual code because of its confidentiality. Thanks again in advance for any and all comments (even spiteful ones).

Jeff
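
For illustration only, here is a hedged, generic sketch of the kind of structure being described. Every name is hypothetical and the inner loop is a stand-in for the confidential, accuracy-dependent calculation; it is not the actual algorithm:

    import threading

    def expensive_step(chunk, accuracy):
        # Stand-in for the CPU-bound inner loop: higher accuracy means more work.
        total = 0.0
        for x in chunk:
            for _ in range(accuracy):
                total += (x * x) % 7
        return total

    def run_threaded(chunks, accuracy):
        results = [None] * len(chunks)

        def worker(i, chunk):
            results[i] = expensive_step(chunk, accuracy)

        threads = [threading.Thread(target=worker, args=(i, c))
                   for i, c in enumerate(chunks)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return sum(results)

    if __name__ == "__main__":
        data = [0.1 * i for i in range(10000)]
        chunks = [data[i::6] for i in range(6)]   # six threads, as described above
        print(run_threaded(chunks, accuracy=50))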

Comments

  • Mastik Posts: 19 Member
    Well, after doing some reading and asking questions of others, I found out that Python can't do what I want, at least not on its own, so there is no need to reply to this. (A process-based sketch of the workaround is included after this thread.)
  • IDK Posts: 1,784 Member
    The ultimate way to speed up algorithms: assembler.

    I don't know Python, but if it is able to link, I could do the algorithm
    in assembler (if it isn't too hard a one). It would be faster than
    anything else. (A sketch of linking compiled code from Python follows
    below.)

    Niklas Ulvinge, aka IDK
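
Following up on the conclusion above that threads alone won't spread this work across CPUs: a minimal, hedged sketch of the usual process-based workaround, assuming a Python new enough to ship the standard-library multiprocessing module and assuming the expensive step can be written as a picklable function over independent work items. expensive_step and the chunking are hypothetical stand-ins, not the actual algorithm:

    import multiprocessing

    def expensive_step(args):
        # Hypothetical stand-in for the CPU-bound, accuracy-dependent work.
        chunk, accuracy = args
        total = 0.0
        for x in chunk:
            for _ in range(accuracy):
                total += (x * x) % 7
        return total

    if __name__ == "__main__":
        data = [0.1 * i for i in range(10000)]
        n_workers = multiprocessing.cpu_count()   # e.g. 8 on the box described
        chunks = [data[i::n_workers] for i in range(n_workers)]
        pool = multiprocessing.Pool(processes=n_workers)
        partials = pool.map(expensive_step, [(c, 50) for c in chunks])
        pool.close()
        pool.join()
        print(sum(partials))

Because each worker is a separate process with its own interpreter, each one can occupy its own CPU, which is what the threaded version could not achieve for pure-Python code.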
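And on the suggestion to drop into compiled code: a hedged sketch of calling a compiled routine from Python with the standard-library ctypes module. The shared library name, the exported function, and its signature are all hypothetical; the point is only to show the linking mechanism. ctypes releases the interpreter lock while a foreign call runs, so threads that spend their time inside such a routine can genuinely use separate CPUs:

    import ctypes

    # Hypothetical routine written in C or assembler and built as a shared
    # library, with the prototype:
    #     double fast_step(const double *data, size_t n, int accuracy);
    lib = ctypes.CDLL("./libfastpath.so")          # hypothetical library name
    lib.fast_step.argtypes = [ctypes.POINTER(ctypes.c_double),
                              ctypes.c_size_t, ctypes.c_int]
    lib.fast_step.restype = ctypes.c_double

    def expensive_step(chunk, accuracy):
        # Copy the Python list into a C array and hand it to the compiled code.
        arr = (ctypes.c_double * len(chunk))(*chunk)
        return lib.fast_step(arr, len(chunk), accuracy)

    if __name__ == "__main__":
        print(expensive_step([0.1 * i for i in range(1000)], 50))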
