The difference between Decimal, Float and Double in C#? - Programmers Heaven

The difference between Decimal, Float and Double in C#?

NickF (USA · Member · 132 posts)

What is the difference between Decimal, Float and Double in C#? In what situations should each of these be used?

Comments

  • DavidM (USA · Member · 342 posts)

    decimal is a floating decimal point type.

    float and double are floating binary point types.

    The main difference is the precision.

    Float - 7 significant digits (32 bit)
    Double - 15-16 significant digits (64 bit)
    Decimal - 28-29 significant digits (128 bit)
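
    Here's a minimal sketch of that precision gap (the class name and format strings are just for illustration):

        using System;

        class PrecisionDemo
        {
            static void Main()
            {
                // 1/3 is not exactly representable in any of these types;
                // the printed digits show roughly how much precision each keeps.
                float f = 1f / 3f;      // ~7 significant digits
                double d = 1d / 3d;     // ~15-16 significant digits
                decimal m = 1m / 3m;    // 28-29 significant digits

                Console.WriteLine(f.ToString("G9"));   // 0.333333343
                Console.WriteLine(d.ToString("G17"));  // 0.33333333333333331
                Console.WriteLine(m);                  // 0.3333333333333333333333333333
            }
        }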

    Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimal arithmetic is much slower (up to 20x in some tests) than double/float.
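
    To see why financial code cares, here's a small sketch (the loop count is chosen arbitrarily): summing ten cents repeatedly drifts in double but stays exact in decimal.

        using System;

        class MoneyDemo
        {
            static void Main()
            {
                // Add 0.1 one thousand times with each type.
                double dSum = 0.0;
                decimal mSum = 0.0m;
                for (int i = 0; i < 1000; i++)
                {
                    dSum += 0.1;    // 0.1 has no exact binary representation
                    mSum += 0.1m;   // 0.1m is exact in decimal
                }
                Console.WriteLine(dSum); // ~99.9999999999986 — not exactly 100
                Console.WriteLine(mSum); // 100.0
            }
        }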

    Decimals and floats/doubles cannot be mixed or compared without an explicit cast, whereas floats and doubles can. Decimals also allow the encoding of trailing zeros, so 1.0m and 1.00m print differently.
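
    A quick sketch of both points; the commented-out line is the one the compiler rejects:

        using System;

        class MixDemo
        {
            static void Main()
            {
                decimal m = 1.5m;
                double d = 2.5;
                float f = 2.5f;

                // double bad = m + d;      // compile error: no implicit conversion
                double ok1 = (double)m + d; // decimal must be cast explicitly
                double ok2 = f + d;         // float widens to double implicitly

                // decimal keeps the trailing zeros it was written with.
                Console.WriteLine(1.0m);    // 1.0
                Console.WriteLine(1.00m);   // 1.00
                Console.WriteLine(ok1);     // 4
                Console.WriteLine(ok2);     // 5
            }
        }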

    (Extracted answer. Please visit the original post for more details.)
