# Yet Another Math Anomaly with my Program

I'm really questioning my machine at this point! Any of you experts got any idea what's going on with this one?

C++ Dev-C++ 4.9.9.2 on a WinXP Celeron

For the below code, cases 5, 10, 15 and 16 all work correctly, but case 1 returns a value of 2147483646 for sl:

[code]
char tempString[12], tempString2[60];
int sl = 0;

strcpy(tempString2, "This is a string 36 characters long.");

if (strlen(tempString2)>0){
switch (Match){
case 5:
case 10:
case 15:
sl = (78-strlen(tempString2))/2;
break;
case 16:
sl = (208-strlen(tempString2))/2;
break;
default:
sl = (32-strlen(tempString2))/2;
break;
}

sprintf(tempString, "%d", sl);
MessageBox(NULL, tempString, "sl equals", MB_OK);
}
[/code]

If I substitute the value 36 for the "strlen(tempString2)" in the default case:

[code]
default:
sl = (32-36)/2;
break;
[/code]
I get the expected -2 in my MessageBox. Any thoughts?

Take Care,
Ed

• Your machine is working correctly.

If we dissect this line

sl = (32-strlen(tempString2))/2;

we first get the result of strlen. This is 36 and is returned in the type "size_t". The C standard requires size_t to be an unsigned integer type; here it is unsigned int.

Plain "int" in C is always a signed type. (The type whose signedness the standard actually leaves up to the compiler is plain "char".)

The value 32 in the code is of type int, so it is a signed value.

So in reality we have a code equal to this when strlen is finished:

[code]signed int sl;
signed int x = 32;
unsigned int y = 36;
signed int z = 2;

sl = (x - y) / z;[/code]

Before the operation x - y starts, C will launch all its deadly weapons of implicit type conversion. In this case, something called "the usual arithmetic conversions" takes place. These state that if an operation mixes a signed and an unsigned operand of the same rank, the signed one shall be converted to unsigned.

So the operation is done on unsigned variables and since those can't store negative values we get a "wrap-around". The code now equals:

[code]signed int sl;
unsigned int x = 4294967292;
signed int z = 2;

sl = x / z;[/code]

Again, the usual arithmetic conversions kick in and transform z to an unsigned integer. The result will be 2147483646. This is then converted to a signed int, since "sl" is a signed int. A signed 32-bit integer can store values up to 2147483647, so the result fits in "sl".

So the solution to the problem is:

sl = (32-(signed int)strlen(tempString2))/2;

The lessons learnt here are:

- Keep your code clean from all oddities the C language allows, such as plain "char", which can be either signed or unsigned, and unsigned library types like size_t mixed into signed arithmetic.

- Be aware of how C handles implicit type conversions. Read up on "the usual arithmetic conversions" and "the integer promotions". Professional programmers need to be aware of them.
• Thanks Lundin!

I'm going to have to study this some more, and the references you gave, but I got most of it and the (signed int) cast worked great.

But, I'm doing a lot of math in my program and I had shifted all my floats and doubles to ints because of all the previous troubles that only involved the first three decimal places. Now I'm finding out that I can't depend on ints either. It makes me wonder how we can ever get accurate calculations. Yet, I have another program that uses a sin formula with pi that finds prime numbers and that works as far as I've tested it, which is in the 10 million area.

I better stop rambling here.:-)

Thanks again.

Take Care,
Ed
Well... floating point issues are the same for any language. But two of the largest flaws with the C language are that the integer types can be -anything- (16/32 bits, signed/unsigned etc), and then the lax type control allowing all kinds of implicit typecasts. Together they can cause very subtle bugs, especially when you want to write portable programs.

A good start is to get yourself a set of predefined types. Something like this:

[code]typedef unsigned char uint8;
typedef unsigned short uint16;
typedef unsigned long uint32;
typedef signed char sint8;
typedef signed short sint16;
typedef signed long sint32;

#define FALSE 0
#define TRUE 1

typedef unsigned char BOOL;[/code]

(Depending on your system, you might want to make BOOL 32 bit instead to increase execution speed, at the cost of program memory. The optimizer might do this anyway.)
• :
: I better stop rambling here.:-)
:
[color=Blue]
You can do even better by dropping that compiler. I am sure that VS Express will do just fine for your project.
[/color]
:
: Thanks again.
:
: Take Care,
: Ed
:
Making my own definitions doesn't seem like it would help for my situation. The size_t return type of strlen() would still drive it to unsigned, wouldn't it? I did look over the suggested references and caught most of what's going on. I'll just need to be more aware of the definitions for the different functions, I guess.

Anyway, thanks again for the help.

Take Care,
Ed
• [blue]: You can do even better dropping that compiler. I am sure that VS
: Express will do just fine for your project.
: [/blue]
: :
There are a few minor annoyances with Dev-C++, but it generally seems to do alright until I run into these little bumps. I'm not experienced enough to know whether it's me or the compiler just yet, anyway. I do have an older Borland IDE that also serves me well, but again, my limited knowledge sometimes causes a few glitches.

I had looked at the free suite offered from MS, but I'm just tired of all the licenses that include you participating behind the scenes in all the Internet data gathering. But, that's the price for free...

Sorry, rambling again.

Thanks for all the help.

Take Care,
Ed
Typedefs like those usually force you to think more about implicit typecasts, even if they don't change what strlen() returns.

As for the compiler, Dev did nothing wrong. You can never expect "int" or "size_t" to have a certain signedness, or your program may get bugs like this. And it will certainly not be portable.