
Memory vs. Efficiency??

I have a program with three main data structures (linked lists):
1) Molecule - the main linked list; a molecule file has several residues
2) Res - a linked list with its own attributes
3) CloseRes - a linked list; each Res has a pointer to a structure that keeps track of the residues close to that Res

There are two issues:
a) A molecule file (the main input file) can have ~500 res (so a 500-node Res linked list). If I have to process 10 such molecule files at the same time, what is the best way to do so?
- Extending the molecule linked list is one way, but could memory be an issue here? (A rough sketch of what I mean is below, after the struct snippet.)
- Writing the output for each molecule to a file and then writing another program to process those files is another way, but efficiency seems to be an issue here.

b) For each set of "CloseRes" (item 3 above) I have to do a computation, for which I need to run a program over that CloseRes set and all the other molecule files.
- Should I just write each CloseRes set to a file, run the program, and repeat? I'm not sure this is the best approach. (See the second sketch at the bottom of the post.)

Given below is a code snippet with the structs.
I would appreciate your suggestions. Thanks!

Code:

struct _res_type;
struct _closeres_type;

//res for a molecule
typedef struct _res_type
{
    char* resName;
    int resNo;
    float cx, cy, cz;
    float vScore;
    float sEnvScore;

    struct _closeres_type* closeres;
    struct _res_type* prev;
    struct _res_type* next;
} res_type;

//close_res for a res
typedef struct _closeres_type
{
    res_type* res;
    struct _closeres_type* next;
} closeres_type;

//molecule with the res
typedef struct _molecule_type
{
    int noRes;
    res_type* res;
    struct _molecule_type* prev;
    struct _molecule_type* next;
} molecule_type;
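
Just to illustrate what I mean by "extending the molecule linked list" in (a): here is a rough sketch that assumes the structs above and a read_molecule() helper I have not written yet (it would parse one file and build its Res list). Each input file becomes one molecule_type node appended to a single doubly linked list, so 10 files of ~500 res each would be ~5000 res nodes in memory.

Code:

//hypothetical parser: reads one molecule file and builds its Res list
molecule_type* read_molecule(const char* path);

//append one molecule per input file to a single doubly linked list
molecule_type* load_all_molecules(char** paths, int nfiles)
{
    molecule_type* head = NULL;
    molecule_type* tail = NULL;

    for (int i = 0; i < nfiles; i++)
    {
        molecule_type* m = read_molecule(paths[i]);
        if (m == NULL)
            continue;              //skip files that fail to parse

        m->prev = tail;
        m->next = NULL;
        if (tail != NULL)
            tail->next = m;
        else
            head = m;
        tail = m;
    }
    return head;
}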


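And for (b), this is roughly what I mean by writing each CloseRes set to a file before running the external program on it. The output format here is just an illustration, not what that program actually expects.

Code:

#include <stdio.h>

//dump the CloseRes list of one res to a text file;
//returns 0 on success, -1 if the file cannot be opened
int write_closeres(const res_type* r, const char* path)
{
    FILE* fp = fopen(path, "w");
    if (fp == NULL)
        return -1;

    for (const closeres_type* c = r->closeres; c != NULL; c = c->next)
    {
        fprintf(fp, "%s %d %.3f %.3f %.3f\n",
                c->res->resName, c->res->resNo,
                c->res->cx, c->res->cy, c->res->cz);
    }

    fclose(fp);
    return 0;
}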