short

This is for the C type.

Is it assured that short will always be larger in byte size than char on any platform?

{2}rIng

Comments

  • No

    But you can assume it anyway...
    Which brings me to another Q...

    Why won't this work?
    [code]
    int main(){
    #if sizeof(int)!=4
    wrong(platform).this(wont, run);
    #endif
    }
    [/code]

    sizeof(int) is constant, so the preprocessor should be able to find out about it, and then it would work.

    In a game I'm doing, I need a bitfield of size 32 bits, so I used an int32 (very specific to Windows compilers) and everything worked fine. Now the problem is that I want it to be portable (since it's written with OpenGL and can thus run on all platforms that implement it, which is a lot). The only problem is the little int32.
    It should be possible to replace it with something like this:
    [code]
    #if sizeof(short)==4
    typedef short box;
    #elif sizeof(int)==4
    typedef int box;
    #elif sizeof(long)==4
    typedef long box;
    #endif
    [/code]

    If all the above worked, then gregry, you could do something like this:
    [code]
    #if !(sizeof(short) > sizeof(char))
    you need a platform where short is bigger than char
    #endif
    [/code]
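
    (None of the #if tests above can actually compile: sizeof is not a preprocessor operator, and the preprocessor runs before the compiler knows anything about types. A hedged sketch of the same two ideas using the <limits.h> range macros, which #if *can* evaluate -- the name box is from the snippet above, and unsigned types are picked here because the goal is a 32-bit bitfield:)
    [code]
    #include <limits.h>

    /* the short-vs-char check, by range rather than byte size */
    #if !(USHRT_MAX > UCHAR_MAX)
    #error this code needs short to have a wider range than char
    #endif

    /* and the 32-bit typedef */
    #if UINT_MAX == 0xffffffff
    typedef unsigned int box;
    #elif ULONG_MAX == 0xffffffff
    typedef unsigned long box;
    #else
    #error no 32-bit unsigned type found
    #endif
    [/code]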
  • [b][red]This message was edited by MT2002 at 2007-3-3 11:4:55[/red][/b][hr]
    [blue]
    One method I use (and have seen implemented a lot of times) is defining
    the sizes of data types, based on the system, in one location. That way,
    for example, an [b]int32[/b] is [b]guaranteed[/b] to be a 32-bit integer
    on *all* platforms. Same for int64, float32, etc.

    This is the method I'm using for my engine.
    [/blue]
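
    (A minimal sketch of what that central header can look like -- the compiler macros and type mappings below are assumptions for MSVC and gcc targets, not taken from this thread:)
    [code]
    /* types.h -- the one place that knows each platform's sizes */
    #if defined (_MSC_VER)
    typedef __int32 int32;      /* MSVC's built-in sized type */
    typedef __int64 int64;
    #elif defined (__GNUC__)
    typedef int int32;          /* 32-bit on common gcc targets */
    typedef long long int64;
    #endif
    typedef float float32;      /* IEEE single precision on both */
    [/code]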


  • Hmm, like typedef int int32; ?
    But how do you check if it's right?

    I got this kind of code:
    [code]
    typedef int int32;
    int32 a;
    ...
    a|=0xff000000;
    [/code]

    Here it would do a very wrong thing if int32 wasn't 32bit...
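
    (One way to make the compiler itself check such a typedef -- a sketch of the old negative-array-size trick; the array name is made up:)
    [code]
    typedef int int32;
    /* a negative array size is a compile error, so this line only
       compiles if int32 really is 4 bytes */
    typedef char int32_must_be_4_bytes[(sizeof(int32) == 4) ? 1 : -1];
    [/code]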
  • [b][red]This message was edited by MT2002 at 2007-3-3 14:38:58[/red][/b][hr]

    [blue]
    Kind of. In your example, if [b]a[/b] were a 16-bit integer, the bitwise OR would silently do nothing, because only the low-order 16 bits of 0xff000000 survive the truncation, and those are all zero.
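    For instance (a sketch, assuming a 16-bit type stands in for the 16-bit int; the values are made up):[/blue][code]
    int main () {
        short a = 0x1234;
        a |= 0xff000000;  /* the low-order 16 bits of 0xff000000 are zero */
        /* a is still 0x1234 -- the OR silently did nothing */
        return 0;
    }
    [/code][blue]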

    This is what I do:[/blue][code]
    #if defined ( _WIN32 )
    # if defined ( _MSC_VER )

    // MSVC++ specific..
    typedef int int32_t;   // 32-bit integer
    typedef short int16_t; // 16-bit integer

    // other data types..

    # endif
    #else

    #if defined ( _WIN16 )

    typedef int int16_t; // 16-bit integer (standard int on 16-bit
                         // versions of Windows)

    // Because 16-bit versions do not support 32-bit ints, we need to
    // emulate them.. (Ensure you pack the following struct on 1-byte
    // boundaries!!)

    struct int32_t {
    private:
        int low, high; // 2 16-bit ints = 1 32-bit dword
    public:
        int32_t () {} // low and high are not initialized

        inline int32_t& operator = (const int32_t& int32) {
            low = int32.low;
            high = int32.high;
            return *this; // was missing: operator= must return *this
        }
    };

    #endif
    #endif
    [/code]
    [blue]
    Here's a little program (written in MSVC++ 2005) that tests this
    design:[/blue][code]
    #include <iostream>
    using namespace std;

    // pack struct to 1-byte boundaries
    #pragma pack (push, 1)

    struct int32_t {
    private:
        short low, high; // 2 16-bit ints = 1 32-bit dword
    public:
        int32_t () {} // low and high are not initialized

        // 32-bit to 32-bit
        inline int32_t& operator = (const int32_t& int32) {
            low = int32.low;
            high = int32.high;
            return *this; // was missing
        }

        // storing a 16-bit int in the low-order word
        inline int operator = (const short int16) {
            int old = low;
            low = int16;
            return old;
        }

        // 16-bit ints can only access the low-order word
        inline operator short () {return low;}

        // etc..
    };

    // typedef int32_t to other types..dword, perhaps?

    // restore standard alignment (usually 32-bit)
    #pragma pack (pop)

    int main () {

        int32_t a; // our 32-bit data type
        a = 15;    //..

        // print size (4) and value (15)
        cout << sizeof (int32_t) << " bytes" << " value: " << a << endl;

        return 0;
    }
    [/code][blue]
    This is one way I've seen it implemented. You can hide the struct if it's
    a 32-bit platform (and just [b]typedef int int32_t[/b]), and use the
    structure for other platforms.

    Unfortunately, you will need to create another struct ([b]uint32_t[/b])
    to support unsigned types.

    These examples use the [b]short[/b] data type, which is itself system
    dependent. If you want portability, ensure [b]int32_t[/b] is built on an
    [b]int16_t[/b], which is built on a [b]byte_t[/b], [b]int8_t[/b], whatever.
    The basic idea is that you should ensure *every* data type is declared.

    This might give you some ideas though..

    *edit:
    Actually, in MSVC++ 2005, you don't need to pack the struct like I do.
    (It works either way)
    [/blue]
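
    (A side note, not from the original post: on C99 compilers these exact-width names already exist in <stdint.h>, so the hand-rolled typedefs are only needed where that header is missing:)
    [code]
    #include <stdint.h>   /* C99 */

    int main () {
        int32_t a = 15;    /* exactly 32 bits, provided by the header */
        uint32_t b = 15u;  /* the unsigned flavor comes for free */
        (void)a; (void)b;  /* silence unused-variable warnings */
        return 0;
    }
    [/code]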
  • [b][red]This message was edited by Gregry2 at 2007-3-3 21:21:34[/red][/b][hr]

    *Every*? Is it really necessary? How about only types whose byte length must be something certain (like variables that are bitfields), but not necessarily things in which it isn't important (like iterators whose value probably won't go very high), or char, which is always supposed to be a byte...

    {2}rIng


  • : *Every*? Is it really necessary? How about only types whose byte length must be something certain (like variables that are bitfields),
    [blue]
    That's the basic idea. In this case, [b]int32_t[/b] is [b]guaranteed[/b]
    to be a 32-bit integer on both Win32 and Win16 platforms.
    [/blue]
    : but not necessarily things in which it isn't important (like iterators whose value probably won't go very high), or char, which is always supposed to be a byte...
    [blue]
    If it doesn't matter, then pick one. I would recommend either using
    int16_t or int32_t, though.

    Also, in Unicode, chars are 2 bytes. Multibyte character sets (such as
    Unicode) exist for languages with more than 256 characters.

    ASCII, on the other hand, only sets a standard for 128 characters.
    ASCII characters are guaranteed to be a byte in size.

    If you are not working with wide chars or multibyte character sets,
    you can simply assume an [b]unsigned char[/b] is a byte, as
    you said.
    [/blue]
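    (A tiny illustration of that last point -- sizeof(char) is 1 by definition, while the wide-character type varies by platform; wchar_t comes from <stddef.h>:)
    [code]
    #include <stdio.h>
    #include <stddef.h>   /* wchar_t */

    int main () {
        /* sizeof(char) is 1 by definition in C */
        printf("sizeof(char)    = %lu\n", (unsigned long)sizeof(char));
        /* implementation-defined: 2 on Windows, often 4 elsewhere */
        printf("sizeof(wchar_t) = %lu\n", (unsigned long)sizeof(wchar_t));
        return 0;
    }
    [/code]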

  • I'm pretty sure that after reading the standard you can assume that char is always 8 bits and int is always 16 bits or larger, and short > char. Or maybe that is just C99; I can't be bothered to dig that up in the standard right now.
  • Eagg!! Who should I listen to!?!?!?

    Lol, jk. Sorry, but I'm using ANSI C. If anyone has good documentation (mine goes along with what MT2002 and niklas say, that it could be different, but mine isn't official, just a freebie doc), or if they know, could they help me?

    {2}rIng

    P.S. I have a reference doc; it's from
    http://www.acm.uiuc.edu/webmonkeys/book/c_guide/
    Does anyone have one they believe is better?

  • I can't find all the special cases hidden in the standard. Still, I think there is some statement somewhere claiming that char must be 8 bits, but I can't find it. In C99, char will obviously only be 8 bits, since they introduce wchar_t for Unicode. It is, however, certain that both short and int will always be 16 bits or larger:


    ISO/IEC 9899:1999 ("C99")

    "5.2.4.2.1 Sizes of integer types"

    /--/

    "Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign."

    /--/

    "USHRT_MAX 65535"

    /--/

    "UINT_MAX 65535"


  • [blue]
    Just wanted to add that ANSI only defines standards for ASCII
    characters, not Unicode or other multibyte character sets.

    If you want your program to be ANSI C compatible, it is safe to say
    (as all characters are ASCII) that the characters will always be one byte.
    Unicode and multibyte character sets are not ANSI compatible
    (please correct me if I'm wrong).

    short and int are guaranteed to be 16 bits or larger, as Lundin said.

    Just wanted to clarify.[/blue]
  • C99 is just a draft standard so far. But yeah, ISO 9899:1989 doesn't define any characters outside the 7-bit ASCII table.