
When should I use double instead of decimal?

I can name three advantages to using double (or float) instead of decimal:

Uses less memory.

Faster, because floating point math operations are natively supported by processors.

Can represent a larger range of numbers.

But these advantages seem to apply only to calculation intensive operations, such as those found in modeling software. Of course, doubles should not be used when precision is required, such as financial calculations. So are there any practical reasons to ever choose double (or float) instead of decimal in "normal" applications?

Edited to add: Thanks for all the great responses, I learned from them.

One further question: A few people made the point that doubles can more precisely represent real numbers. When a value is first declared, I would think doubles usually represent it more accurately as well. But is it a true statement that the accuracy may decrease (sometimes significantly) when floating point operations are performed?

This gets upvoted pretty regularly and I still struggle with it. For example, I'm working on an application that does financial calculations so I'm using decimal throughout. But the Math and VisualBasic.Financial functions use double so there's a lot of converting which has me constantly second guessing the use of decimal.
@JamieIde that's crazy that the Financial functions use double; money should always be in decimal.
@ChrisMarisic But what can Jamie Ide do when he's working with legacy crap that uses double? Then you should use double too, or else the many conversions will cause rounding errors... no wonder he mentioned Visual Basic, pfffhh.....

Noldorin

I think you've summarised the advantages quite well. You are however missing one point. The decimal type is only more accurate at representing base 10 numbers (e.g. those used in currency/financial calculations). In general, the double type is going to offer a greater range (someone correct me if I'm wrong) and definitely greater speed for arbitrary real numbers. The simple conclusion is: when considering which to use, always use double unless you need the base 10 accuracy that decimal offers.
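To make the base-10 point concrete, here is a minimal sketch (assume the statements run inside a console app's Main; the 0.1 and 0.2 values are just illustrative):

double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);    // False - d is actually 0.30000000000000004...
Console.WriteLine(m == 0.3m);   // True  - decimal stores 0.1, 0.2 and 0.3 exactly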

Edit:

Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably for accuracy here) will steadily decrease after each operation is performed. This is due to two reasons:

1. the fact that certain numbers (most obviously decimals) can't be truly represented in floating point form

2. rounding errors occur, just as if you were doing the calculation by hand

It depends greatly on the context (how many operations you're performing) whether these errors are significant enough to warrant much thought, however.

In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
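For example, a minimal sketch of such a tolerance check (the 1e-9 epsilon is an arbitrary illustration, not a universal constant):

double a = 0.1 * 3;
double b = 0.3;

bool naive = a == b;                          // False: the two values were rounded differently
bool closeEnough = Math.Abs(a - b) < 1e-9;    // True: equal within the chosen tolerance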

For a more detailed overview of the particular cases where errors in accuracies can be introduced, see the Accuracy section of the Wikipedia article. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.


Can you provide an example of a base 10 number with which precision is lost when converting to base 2?
@Mark: 1.000001 is one example, at least according to Jon Skeet. (See question 3 of this page: yoda.arachsys.com/csharp/teasers-answers.html)
@Mark: very simple example: 0.1 is a periodic fraction in base 2 so it cannot be expressed precisely in a double. Modern computers will still print the correct value but only because they “guess” at the result – not because it really is expressed correctly.
The Decimal type has 93-bits of precision in the mantissa, compared with about 52 for double. I wish Microsoft supported the IEEE 80-bit format, though, even if it had to be padded out to 16 bytes; it would have allowed a larger range than double or Decimal, much better speed than Decimal, support for transcendental operations (e.g. sin(x), log(x), etc.), and precision which while not quite as good as Decimal would be way better than double.
@charlotte: If you read my full post, you'll see that's explained.
Community

You seem spot on with the benefits of using a floating point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slowdowns. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.

In general, if your value is transient (not reused), you're safe to use a floating point type. The real problems with floating point types are the following three scenarios.

1. You are aggregating floating point values (in which case the precision errors compound).
2. You are building new values based on the floating point value (for example, in a recursive algorithm).
3. You are doing math with a very wide range of significant digits (for example, 123456789.1 * .000000000000000987654321; see the sketch after this list).
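A minimal sketch of the third scenario (assumed to run in a console app; the comments describe the rounding behaviour rather than exact output):

// The operands span roughly 25 orders of magnitude, so the exact product needs
// about 19 significant digits - more than double's ~15-17, but well within
// decimal's 28-29.
double dProduct = 123456789.1 * 0.000000000000000987654321;
decimal mProduct = 123456789.1m * 0.000000000000000987654321m;

Console.WriteLine(dProduct);   // rounded to double's precision
Console.WriteLine(mProduct);   // exact: decimal carries every digit of this product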

EDIT

According to the reference documentation on C# decimals:

The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.

So to clarify my above statement:

I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimal are causing bottlenecks or slowdowns.

I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating point type (float or double).

Decimal is not infinitely precise (it is impossible to represent infinite precision for non-integral values in a primitive data type), but it is far more precise than double:

decimal = 28-29 significant digits

double = 15-16 significant digits

float = 7 significant digits

EDIT 2

In response to Konrad Rudolph's comment, item # 1 (above) is definitely correct. Aggregation of imprecision does indeed compound. See the below code for an example:

private const float THREE_FIFTHS = 3f / 5f;
private const int ONE_MILLION = 1000000;

public static void Main(string[] args)
{
    Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));
    float asSingle = 0f;
    double asDouble = 0d;
    decimal asDecimal = 0M;

    for (int i = 0; i < ONE_MILLION; i++)
    {
        asSingle += THREE_FIFTHS;
        asDouble += THREE_FIFTHS;
        asDecimal += (decimal) THREE_FIFTHS;
    }
    Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
    Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
    Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
    Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
    Console.ReadLine();
}

This outputs the following:

Three Fifths: 0.6000000000
Six Hundred Thousand: 600000.0000000000
Single: 599093.4000000000
Double: 599999.9999886850
Decimal: 600000.0000000000

As you can see, even though we are adding from the same source constant, the result of the double is less precise (although it will probably round correctly), and the float is far less precise, to the point where it has been reduced to only two significant digits.


Point 1 is incorrect. Precision/rounding errors only occur in casting, not in calculations. It is of course correct that most mathematical operations are unstable, thus multiplying the error. But this is another issue and it applies the same for all data types of limited precision, so in particular for decimal.
@Konrad Rudolph, see the example in "EDIT 2" as evidence of the point I was trying to make in item # 1. Often, this problem doesn't manifest itself because the positive imprecision balances with the negative imprecision, and they wash in the aggregate, but aggregating the same number (as I did in the example) highlights the problem.
Great example. Just shown it to my junior developers, the kids were amazed.
Now can you do the same thing with 2/3rds instead of 3/5ths... You should learn about the sexagesimal number system which handles 2/3rds perfectly fine.
@gnasher729, using 2/3rds instead of 3/5ths was not handled perfectly fine for the different types. Interestingly, the float value yielded Single: 667660.400000000000 while the decimal value yielded Decimal: 666666.7000000000. The float value is a little less than one thousand over the correct value.
Joe

Use decimal for base 10 values, e.g. financial calculations, as others have suggested.

But double is generally more accurate for arbitrary calculated values.

For example if you want to calculate the weight of each line in a portfolio, use double as the result will more nearly add up to 100%.

In the following example, doubleResult is closer to 1 than decimalResult:

// Add one third + one third + one third with decimal
decimal decimalValue = 1M / 3M;                                      // 0.3333333333333333333333333333
decimal decimalResult = decimalValue + decimalValue + decimalValue;  // 0.9999999999999999999999999999
// Add one third + one third + one third with double
double doubleValue = 1D / 3D;                                        // 0.33333333333333331... (just under one third)
double doubleResult = doubleValue + doubleValue + doubleValue;       // exactly 1.0 - the final rounding lands back on 1

So again taking the example of a portfolio:

The market value of each line in the portfolio is a monetary value and would probably be best represented as decimal.

The weight of each line in the portfolio (= Market Value / SUM(Market Value)) is usually better represented as double.
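A minimal sketch of that split (the position values are made up for illustration):

// Market values are money: keep them in decimal.
decimal[] marketValues = { 1000.00m, 2500.50m, 1499.50m };
decimal totalValue = 0m;
foreach (decimal mv in marketValues) totalValue += mv;

// Weights are derived ratios: double is fine (and fast) here.
double weightSum = 0d;
foreach (decimal mv in marketValues)
    weightSum += (double)(mv / totalValue);

Console.WriteLine(weightSum);   // very close to (often exactly) 1.0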


FlySwat

Use a double or a float when you don't need precision. For example, in a platformer game I wrote, I used a float to store the player velocities. Obviously I don't need super precision here because I eventually round to an int for drawing on the screen.
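A rough sketch of that pattern (the variable names and values are made up):

float positionX = 10.0f;    // world position with sub-pixel precision
float velocityX = 3.25f;    // pixels per frame

positionX += velocityX;     // tiny rounding errors can accumulate here...

int screenX = (int)Math.Round(positionX);   // ...but are discarded when we snap to whole pixels to draw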


Precision being the ONLY advantage of decimals, this is right. You should not be asking when you should use floating point numbers over decimals. That should be your first thought. The question then is when you should use decimals (and the answer is right here... when precision matters).
@Daniel Straight, It's funny, but I have the opposite opinion. I think using a less precise type because of its performance characteristics amounts to a preoptimization. You will potentially have to pay for that preoptimization many times over before you realize its benefit.
@Michael Meadows, I can understand this argument. Something to note though is that one of the main complaints with premature optimization is that programmers don't tend to know what's going to be slow. We know without any doubt, though, that decimals are slower than doubles. Nevertheless, I suppose in most cases, the performance improvement won't be noticeable to the user anyway. Of course, in most cases, the precision isn't needed either. Heh.
Decimal floating-point is actually LESS precise than binary floating-point using the same number of bits. Decimal's advantage is being able to exactly represent DECIMAL fractions like 0.01 which are common in financial calculation.
Well, this is not quite correct :) - in many games floating-point numbers can be undesirable, because they are not consistent. See here
G DeMasters

In some accounting contexts, consider the possibility of using integral types instead, or in conjunction. For example, let's say that the rules you operate under require every calculation result to be carried forward with at least 6 decimal places and the final result to be rounded to the nearest penny.

A calculation of 1/6th of $100 yields $16.66666666666666..., so the value carried forward in a worksheet will be $16.666667. Both double and decimal should yield that result accurately to 6 decimal places. However, we can avoid any cumulative error by carrying the result forward as the integer 16666667. Each subsequent calculation can be made with the same precision and carried forward similarly. Continuing the example, I calculate Texas sales tax on that amount (16666667 * .0825 = 1375000, after rounding). Adding the two (it's a short worksheet): 16666667 + 1375000 = 18041667. Moving the decimal point back in gives us 18.041667, or $18.04.

While this short example wouldn't yield a cumulative error using double or decimal, it's fairly easy to show cases where simply calculating with double or decimal and carrying forward would accumulate significant error. If the rules you operate under require a limited number of decimal places, storing each value as an integer (by multiplying by 10^(required # of decimal places)), and then dividing by 10^(required # of decimal places) to get the actual value, will avoid any cumulative error.
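A minimal sketch of that scaled-integer approach, following the worksheet above (the away-from-zero rounding policy is an assumption for illustration):

const long SCALE = 1000000;   // 10^6: carry six decimal places in a long

long amount = (long)Math.Round(100m / 6m * SCALE, MidpointRounding.AwayFromZero);  // 16666667 = $16.666667
long tax    = (long)Math.Round(amount * 0.0825m, MidpointRounding.AwayFromZero);   // 1375000  = $1.375000
long total  = amount + tax;                                                        // 18041667

decimal dollars = decimal.Round((decimal)total / SCALE, 2);                        // 18.04
Console.WriteLine(dollars);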

In situations where fractions of pennies do not occur (for example, a vending machine), there is no reason to use non-integral types at all. Simply think of it as counting pennies, not dollars. I have seen code where every calculation involved only whole pennies, yet use of double led to errors! Integer only math removed the issue. So my unconventional answer is, when possible, forgo both double and decimal.


Will Dean

If you need binary interop with other languages or platforms, then you might need to use float or double, which are standardized.


Neil Meyer

Depends on what you need it for.

Because float and double are binary data types, you get some difficulties and errors in the way they round numbers. For instance, float stores 0.1 as roughly 0.100000001490116 and rounds 1 / 3 to roughly 0.33333334326441; double gets much closer (about 0.1000000000000000055 and 0.33333333333333331) but is still not exact. Simply put, not all real numbers have an exact representation in binary floating point types.

Luckily, C# also supports so-called decimal floating-point arithmetic (the decimal type), where numbers are represented via the decimal numeric system rather than the binary system. Thus, decimal floating-point arithmetic does not lose accuracy when storing and processing decimal fractions such as 0.1 (though it still cannot represent values like 1/3 exactly). This makes it immensely suited to calculations where that kind of accuracy is needed.
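A small sketch of the difference (the values in the comments are approximate and depend on the format string used):

float f = 0.1f;
double d = 0.1;
decimal m = 0.1m;

Console.WriteLine(f.ToString("G9"));    // ~0.100000001          (the nearest float to 0.1)
Console.WriteLine(d.ToString("G17"));   // ~0.10000000000000001  (the nearest double to 0.1)
Console.WriteLine(m);                   // 0.1 exactly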


plugwash

Note: this post is based on information about the decimal type's capabilities from http://csharpindepth.com/Articles/General/Decimal.aspx and my own interpretation of what that means. I will assume double is normal IEEE 754 double precision.

Note 2: smallest and largest in this post refer to the magnitude of the number.

Pros of "decimal".

"decimal" can exactly represent numbers that can be written as (sufficiently short) decimal fractions; double cannot. This is important in financial ledgers and similar applications where it is important that the results exactly match what a human doing the calculations would give.

"decimal" has a much larger mantissa than "double". That means that for values within its normalised range, "decimal" will have a much higher precision than double.

Cons of decimal

It will be much slower (I don't have benchmarks, but I would guess at least an order of magnitude, maybe more): decimal will not benefit from any hardware acceleration, and arithmetic on it requires relatively expensive multiplication/division by powers of 10 (which is far more expensive than multiplication and division by powers of 2) to match the exponents before addition/subtraction and to bring the exponent back into range after multiplication/division.

decimal will overflow earlier than double will. decimal can only represent numbers up to ±(2^96 - 1), which is a little under 7.9 × 10^28. By comparison, double can represent numbers up to nearly ±2^1024 (about 1.8 × 10^308).

decimal will underflow earlier. The smallest positive value representable in decimal is 10^-28. By comparison, double can represent values down to 2^-1074 (approx 5 × 10^-324) if subnormal numbers are supported and 2^-1022 (approx 2.2 × 10^-308) if they are not.

decimal takes up twice as much memory as double.
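A quick sketch that prints the limits mentioned above (double.Epsilon is the smallest positive subnormal double; the decimal constructor call builds the smallest positive decimal, 1E-28):

Console.WriteLine(decimal.MaxValue);                 // 79228162514264337593543950335 (~7.9E+28)
Console.WriteLine(double.MaxValue);                  // ~1.7976931348623157E+308
Console.WriteLine(new decimal(1, 0, 0, false, 28));  // 0.0000000000000000000000000001 (1E-28)
Console.WriteLine(double.Epsilon);                   // ~4.94E-324 (smallest positive subnormal double)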

My opinion is that you should default to "decimal" for money work and other cases where matching human calculation exactly is important, and that you should use double as your default choice the rest of the time.


Mark Brackett

Use floating points if you value performance over correctness.


Decimal numbers aren't more correct, except in certain limited cases that are sometimes (by no means always) important.
Khan

Choose the type based on what your application needs. If you need precision, as in financial analysis, you have answered your question. But if your application can settle for an estimate, you're OK with double.

Does your application need a fast calculation, or will it have all the time in the world to give you an answer? It really depends on the type of application.

Graphics hungry? float or double is enough. Financial data analysis, or meteor-striking-a-planet kind of precision? Those would need a bit more precision :)


Decimal numbers are estimates, too. They conform to the conventions of financial arithmetic, but there's no advantage in, say, calculations involving physics.
Jeson Martajaya

Decimal is wider (16 bytes versus 8), while double is natively supported by the CPU. Decimal is base-10 and its arithmetic is implemented in software rather than in hardware, which is why it is slower.

For accounting - decimal
For finance - double
For heavy computation - double

Keep in mind that the .NET CLR only supports Math.Pow(double, double); there is no decimal overload.

.NET Framework 4

[SecuritySafeCritical]
public static extern double Pow(double x, double y);
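So if you need a power of a decimal value, you end up round-tripping through double and accepting double's precision on the way (a sketch; the rate and term are arbitrary):

decimal rate = 1.05m;
int years = 10;

// There is no Math.Pow(decimal, decimal), so cast out to double and back.
decimal growthFactor = (decimal)Math.Pow((double)rate, years);   // precision here is limited by double
Console.WriteLine(growthFactor);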

chris klassen

A double value will serialize to scientific notation by default if that notation is shorter than the decimal display (e.g. .00000003 becomes 3E-08). Decimal values will never serialize to scientific notation. When serializing for consumption by an external party, this may be a consideration.
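For example, with default ToString formatting (the exact strings can vary slightly between runtime versions):

double d = 0.00000003;
decimal m = 0.00000003m;

Console.WriteLine(d);   // "3E-08"      - double switches to scientific notation
Console.WriteLine(m);   // "0.00000003" - decimal keeps positional notation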