The difference between Decimal, Float and Double in C#?

NickF (USA, Posts: 132, Member)

What is the difference between Decimal, Float and Double in C#? In which situations should each of them be used?

Comments

  • DavidM (USA, Posts: 342, Member)

    decimal is a floating decimal point type.

    float and double are floating binary point types.

    The main difference is the precision.

    Float: 7 significant digits (32-bit)
    Double: 15-16 significant digits (64-bit)
    Decimal: 28-29 significant digits (128-bit)
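
    To see those limits in practice, here is a minimal sketch (class and variable names are illustrative; output shown is for modern .NET):

        using System;

        class DigitsDemo
        {
            static void Main()
            {
                float f = 1.23456789f;                        // only ~7 digits survive
                double d = 1.2345678901234567;                // ~15-16 digits survive
                decimal m = 1.2345678901234567890123456789m;  // 28-29 digits survive

                Console.WriteLine(f); // 1.2345679 (rounded)
                Console.WriteLine(d); // 1.2345678901234567
                Console.WriteLine(m); // 1.2345678901234567890123456789
            }
        }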

    Decimals have much higher precision and are usually used in financial applications that require a high degree of accuracy. Decimal arithmetic is much slower (up to 20 times in some tests) than double/float.
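    The accuracy point is easy to demonstrate with the classic example of summing 0.1 ten times: 0.1 has no exact binary representation, so double drifts while decimal stays exact (again, an illustrative sketch):

        using System;

        class RoundingDemo
        {
            static void Main()
            {
                double dSum = 0.0;
                decimal mSum = 0.0m;
                for (int i = 0; i < 10; i++)
                {
                    dSum += 0.1;   // accumulates binary rounding error
                    mSum += 0.1m;  // exact decimal arithmetic
                }

                Console.WriteLine(dSum == 1.0);        // False
                Console.WriteLine(mSum == 1.0m);       // True
                Console.WriteLine(dSum.ToString("R")); // 0.9999999999999999
            }
        }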

    A Decimal cannot be compared with a Float or Double without a cast, whereas Floats and Doubles can be compared with each other directly. Decimals also allow the encoding of trailing zeros.
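    A short sketch of both behaviors (the commented-out lines would not compile):

        using System;

        class ConversionDemo
        {
            static void Main()
            {
                float f = 1.5f;
                double d = f;              // implicit: float -> double compiles
                // decimal m = d;          // error: no implicit double -> decimal
                decimal m = (decimal)d;    // explicit cast is required

                // Console.WriteLine(m == d);        // error: cannot compare directly
                Console.WriteLine(m == (decimal)d);  // True after the cast

                // decimal preserves the scale (trailing zeros) of its literal:
                Console.WriteLine(1.0m);   // prints 1.0
                Console.WriteLine(1.00m);  // prints 1.00
                Console.WriteLine(1.0);    // prints 1 (double has no scale)
            }
        }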

    (Extracted answer. Please visit Here for more details.)
