Binary numbers (C++) - Programmers Heaven


Binary numbers (C++)

dwgebler Posts: 190 Member
In Borland C++ if you want to directly type a hexadecimal value in to the source code, you enter 0x then the value, e.g. 0x3F. How does one indicate to the compiler that a number should be read as binary?



Comments

  • stober Posts: 9,765 Member ✭✭✭
    : In Borland C++ if you want to directly type a hexadecimal value in to the source code, you enter 0x then the value, e.g. 0x3F. How does one indicate to the compiler that a number should be read as binary?
    There is no support for binary notation in either C or C++. It seems odd, but it was just never part of the specs.

  • dwgebler Posts: 190 Member
    Fair enough, guess I'll just convert my binary numbers to hex first when I need them.

  • stober Posts: 9,765 Member ✭✭✭
    : guess I'll just convert my binary numbers to hex first when I need them.

    [blue]Yup! That's what everyone else does too. [/blue]
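
    Grouping the bits four at a time makes the conversion mechanical; a quick sketch (the variable name here is just for illustration):
    [code]
    // Each group of four bits maps to one hex digit (0000 = 0 ... 1111 = F):
    //   1011 0011  ->  B 3
    unsigned char mask = 0xB3;   // intended bit pattern: 1011 0011
    [/code]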

  • blitz Posts: 620 Member
    : In Borland C++ if you want to directly type a hexadecimal value in to the source code, you enter 0x then the value, e.g. 0x3F. How does one indicate to the compiler that a number should be read as binary?

    Well, as stober said, there's no support in C/C++ for integer literals
    using a binary notation, and everyone (including me) converts bit
    masks and other binary constants into another base supported by the
    language (mostly base 16, since the constants are shorter when
    expressed in that base :-)).

    However, I looked at the problem more closely after reading your
    message and came up with an acceptable solution, provided you're okay
    with using C++ templates (which means the proposed solution won't work
    in plain old C, just C++, and moreover needs a fairly recent compiler).

    OK, so all you have to do is #include "binary_val.h", with the
    following content, in your source files:
    [code]
    #ifndef __BINARY_VAL_H__
    #define __BINARY_VAL_H__

    template <int i>
    struct _b8tpl_ {
        enum {
            // incorrectly defined numbers will try to instantiate _b8tpl_<-1>
            ok  = (i >= 0) && (i <= 011111111) && ((i & 7) <= 1),
            val = 2 * _b8tpl_<ok ? (i >> 3) : -1>::val + (i & 7)
        };
    };

    template <>
    struct _b8tpl_<0> {
        enum { val = 0 };
    };

    template <>
    struct _b8tpl_<-1> {
        // _b8tpl_<-1> doesn't define the enumerator "val",
        // so an erroneous number will produce a compile error
    };

    // several macros to support definition of integral values
    // by using the binary notation
    // (we make sure the pasted literal is read as octal by prefixing it with a zero)
    #define BIN8(b)                        ((unsigned char)_b8tpl_<0##b>::val)
    #define BIN16(b1,b0)                   ((((unsigned short)BIN8(b1))<<8)|BIN8(b0))
    #define BIN32(b3,b2,b1,b0)             ((((unsigned int)BIN16(b3,b2))<<16)|BIN16(b1,b0))
    #define BIN64(b7,b6,b5,b4,b3,b2,b1,b0) ((((unsigned __int64)BIN32(b7,b6,b5,b4))<<32)|BIN32(b3,b2,b1,b0))

    #endif /*__BINARY_VAL_H__*/
    [/code]
    then you can declare binary constants by using the macros BIN8, BIN16...
    as in the following example:
    [code]
    int main()
    {
        // a = 0xB3
        unsigned char a = BIN8(10110011);
        // b = 0xE7F9
        unsigned short b = BIN16(11100111, 11111001);
        // c = 0xA6C70B25
        unsigned int c = BIN32(10100110, 11000111, 00001011, 00100101);

        return 0;
    }
    [/code]
    I tested the code with the VC++ 6 compiler and BC++ v5.5 and all
    seemed okay; I hope it will work with your compiler too... :-)

    Regards,
    Blitz
  • dwgebler Posts: 190 Member
    [blue]This is fantastic. Thank you.[/blue]

  • WerewolfWare Posts: 304 Member
    Can you explain to me what you did in that weird code?
  • blitz Posts: 620 Member
    : Can you explain to me what you did in that weird code?

    Check out this link for explanation:
    http://osl.iu.edu/~tveldhui/papers/Template-Metaprograms/meta-art.html
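
    Roughly, the trick is this: the 0## prefix in the macro turns the digits
    into an octal literal whose digits are all 0 or 1. The template then peels
    off the lowest octal digit at compile time (i & 7), checks that it really
    is 0 or 1, and rebuilds the value in base 2 (val = 2 * rest + digit);
    anything else routes to _b8tpl_<-1>, which has no "val" and therefore
    fails to compile. A worked expansion:
    [code]
    // BIN8(10110011) expands to _b8tpl_<010110011>::val  (note the leading 0 -> octal)
    //   octal digits: 1 0 1 1 0 0 1 1 (each one is 0 or 1)
    //   val = ((((((1*2+0)*2+1)*2+1)*2+0)*2+0)*2+1)*2+1 = 179 = 0xB3
    [/code]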

    Regards,
    Blitz
  • rchandelier Posts: 10 Member
    A few compilers (usually microcontroller ones) have a special feature that recognizes literal binary numbers written with a "0b..." prefix, but most compilers (and the C/C++ standards) don't offer it. If that is your case, here is my alternative solution:

    [code]
    #define B_0000 0
    #define B_0001 1
    #define B_0010 2
    #define B_0011 3
    #define B_0100 4
    #define B_0101 5
    #define B_0110 6
    #define B_0111 7
    #define B_1000 8
    #define B_1001 9
    #define B_1010 a
    #define B_1011 b
    #define B_1100 c
    #define B_1101 d
    #define B_1110 e
    #define B_1111 f

    #define _B2H(bits)  B_##bits
    #define B2H(bits)   _B2H(bits)
    #define _HEX(n)     0x##n
    #define HEX(n)      _HEX(n)
    #define _CCAT(a,b)  a##b
    #define CCAT(a,b)   _CCAT(a,b)

    #define BYTE(a,b)               HEX(CCAT(B2H(a), B2H(b)))
    #define WORD(a,b,c,d)           HEX(CCAT(CCAT(B2H(a), B2H(b)), CCAT(B2H(c), B2H(d))))
    #define DWORD(a,b,c,d,e,f,g,h)  HEX(CCAT(CCAT(CCAT(B2H(a), B2H(b)), CCAT(B2H(c), B2H(d))), CCAT(CCAT(B2H(e), B2H(f)), CCAT(B2H(g), B2H(h)))))

    // usage example
    char b = BYTE(0100, 0001);                                                    // equivalent to b = 65; or b = 'A'; or b = 0x41;
    unsigned int w = WORD(1101, 1111, 0100, 0011);                                // equivalent to w = 57155; or w = 0xdf43;
    unsigned long int dw = DWORD(1101, 1111, 0100, 0011, 1111, 1101, 0010, 1000); // equivalent to dw = 3745774888; or dw = 0xdf43fd28;
    [/code]
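
    To see how the pasting works, the expansion of the first example goes roughly like this (the extra _B2H/_CCAT/_HEX layer is there so the arguments get expanded before they are pasted):
    [code]
    // BYTE(0100,0001)
    //   -> HEX( CCAT( B2H(0100), B2H(0001) ) )
    //   -> HEX( CCAT( B_0100, B_0001 ) )   // B2H pastes B_ onto the bit group
    //   -> HEX( CCAT( 4, 1 ) )             // table lookup: B_0100 = 4, B_0001 = 1
    //   -> HEX( 41 )                       // CCAT pastes the two hex digits
    //   -> 0x41                            // HEX pastes the 0x prefix
    [/code]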

    (*) Disadvantages (not such big ones):
    - The binary digits have to be grouped four by four;
    - The binary literals can only be unsigned integer numbers;
    (*) Advantages:
    - Entirely preprocessor-driven, so no processor time is spent on operations (like "?:", "<<", "+") in the executable program (which might otherwise be performed hundreds of times in the final application);
    - It works in plain C compilers as well as in C++ (the template+enum solution works only in C++ compilers);
    - Its only limitation is the maximum length of a "literal constant" the compiler accepts; a solution that resolves the value through an enum can hit its range limit much earlier (often 8 bits: 0-255), whereas literal constants allow greater numbers;
    - Some other solutions demand an exaggerated number of constant definitions (#defines, in my opinion), including long or multiple header files (in most cases neither easily readable nor understandable, making the project unnecessarily confusing and bloated, like the one using "BOOST_BINARY()");
    - Simplicity of the solution: easily readable, understandable and adjustable to other cases (it could be extended to group the digits 8 by 8 too);

    I hope it helps, thanks. Renato Chandelier.
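
    For reference, on a compiler that does accept the "0b" prefix mentioned above (a vendor extension here; standard C++ only gained binary literals with C++14), the same values can be written directly:
    [code]
    // binary literals with the 0b prefix (compiler extension / C++14)
    char b = 0b01000001;                                          // 0x41, 'A'
    unsigned int w = 0b1101111101000011;                          // 0xdf43
    unsigned long int dw = 0b11011111010000111111110100101000;   // 0xdf43fd28
    [/code]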