What is the best data type to use for money in C#?
using System.ComponentModel.DataAnnotations;
... [DataType(DataType.Currency)]
msdn.microsoft.com/en-us/library/…
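For context, here is a minimal sketch of where that attribute would sit, assuming an ASP.NET-style view model (the class and property names are purely illustrative). Note that DataType.Currency is only a display/formatting hint for the UI layer; the underlying storage type is still decimal.

using System.ComponentModel.DataAnnotations;

public class OrderViewModel
{
    // Display hint for frameworks such as ASP.NET MVC; the value itself is a decimal.
    [DataType(DataType.Currency)]
    public decimal Price { get; set; }
}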
As described in the documentation for decimal:
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations.
You can use a decimal as follows:
decimal myMoney = 300.5m;
The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.
I'd like to point to this excellent answer by zneak on why double shouldn't be used.
Use the Money pattern from Patterns of Enterprise Application Architecture: specify the amount as a decimal and the currency as an enum.
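A minimal sketch of that pattern, assuming a hand-rolled Currency enum and Money value type (the names, the currency list and the single operator shown here are illustrative, not taken from the book):

using System;

public enum Currency { USD, EUR, GBP }

public readonly struct Money
{
    public decimal Amount { get; }
    public Currency Currency { get; }

    public Money(decimal amount, Currency currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Refusing to mix currencies is the main point of the pattern.
    public static Money operator +(Money a, Money b)
    {
        if (a.Currency != b.Currency)
            throw new InvalidOperationException("Cannot add amounts in different currencies.");
        return new Money(a.Amount + b.Amount, a.Currency);
    }

    public override string ToString() => $"{Amount:0.00} {Currency}";
}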
Money
NuGet has a dead GitHub link for the project site, so... no docs?
Decimal. If you choose double you're leaving yourself open to rounding errors.
double can introduce rounding errors because binary floating point cannot represent all decimal numbers exactly (e.g. 0.01 has no exact representation in floating point). Decimal, on the other hand, represents decimal numbers exactly. (The trade-off is that Decimal has a smaller range than floating point.) Floating point can give you inadvertent rounding errors (e.g. 0.1 + 0.2 != 0.3), while Decimal can give you rounding errors, but only when you ask for it (e.g. Math.Round(0.01m + 0.02m) returns zero, because you explicitly asked it to round to the nearest whole number).
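A quick console sketch illustrating both points (the output comments reflect standard IEEE 754 double and System.Decimal behaviour):

using System;

class RoundingDemo
{
    static void Main()
    {
        // double: binary floating point cannot represent 0.1 or 0.2 exactly,
        // so the sum drifts slightly away from 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);            // False
        Console.WriteLine(d.ToString("R"));     // 0.30000000000000004

        // decimal: the same literals are represented exactly.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);           // True

        // decimal only rounds when you explicitly ask it to:
        Console.WriteLine(Math.Round(0.01m + 0.02m));   // 0 (rounded to the nearest integer)
    }
}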
If one uses double and carefully applies scaling and domain-specific rounding when appropriate, it can be perfectly precise. If one is sloppy in one's rounding, decimal may yield results which are semantically incorrect (e.g. if one adds together multiple values which are supposed to be rounded to the nearest penny, but doesn't actually round them first). The only good thing about decimal is that scaling is built-in.
decimal has a smaller range, but greater precision - so you don't lose all those pennies over time!
Full details here:
http://msdn.microsoft.com/en-us/library/364x0z75.aspx
Agree with the Money pattern: handling currencies is just too cumbersome when you use plain decimals.
If you create a Currency class, you can put all the logic relating to money there, including a correct ToString() method, more control over parsing values, and better control over division.
Also, with a Currency class, there is no chance of unintentionally mixing money up with other data.
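A sketch of what such a Currency class might contain; the Allocate method, the ISO-code property and the formatting choices are illustrative assumptions, not a standard .NET API (parsing is omitted for brevity):

using System;
using System.Globalization;

public sealed class Currency
{
    public decimal Amount { get; }
    public string IsoCode { get; }   // e.g. "USD", "EUR"

    public Currency(decimal amount, string isoCode)
    {
        Amount = amount;
        IsoCode = isoCode;
    }

    // All money formatting lives in one place.
    public override string ToString() =>
        string.Format(CultureInfo.InvariantCulture, "{0:0.00} {1}", Amount, IsoCode);

    // Division without losing pennies: hand out the remainder cent by cent.
    public Currency[] Allocate(int parts)
    {
        decimal low = Math.Floor(Amount * 100 / parts) / 100;
        decimal remainderCents = (Amount - low * parts) * 100;
        var results = new Currency[parts];
        for (int i = 0; i < parts; i++)
        {
            decimal share = low + (i < remainderCents ? 0.01m : 0m);
            results[i] = new Currency(share, IsoCode);
        }
        return results;
    }
}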
Another option (especially if you're rolling your own class) is to use an int or an Int64, and designate the lower four digits (or possibly even two) as "right of the decimal point". So "on the edges" you'll need some "* 10000" on the way in and some "/ 10000" on the way out. This is the storage mechanism used by Microsoft's SQL Server; see http://msdn.microsoft.com/en-au/library/ms179882.aspx
The nicety of this is that all your summation can be done using (fast) integer arithmetic.
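A sketch of that approach as a hand-rolled wrapper type (ScaledMoney and its members are illustrative names, not an existing API):

using System;

public readonly struct ScaledMoney
{
    private const long Scale = 10_000;
    private readonly long _units;   // e.g. 12.3456 is stored as 123456

    private ScaledMoney(long units) => _units = units;

    // "* 10000" on the way in...
    public static ScaledMoney FromDecimal(decimal value) =>
        new ScaledMoney((long)Math.Round(value * Scale));

    // ...and "/ 10000" on the way out.
    public decimal ToDecimal() => (decimal)_units / Scale;

    // Summation stays in fast integer arithmetic.
    public static ScaledMoney operator +(ScaledMoney a, ScaledMoney b) =>
        new ScaledMoney(a._units + b._units);
}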
Most applications I've worked with use decimal to represent money. This is based on the assumption that the application will never be concerned with more than one currency.
This assumption may be based on another assumption, that the application will never be used in other countries with different currencies. I've seen cases where that proved to be false.
Now that assumption is being challenged in a new way: New currencies such as Bitcoin are becoming more common, and they aren't specific to any country. It's not unrealistic that an application used in just one country may still need to support multiple currencies.
Some people will say that creating or even using a type just for money is "gold plating," or adding extra complexity beyond the known requirements. I strongly disagree. The more ubiquitous a concept is within your domain, the more important it is to make a reasonable effort to use the correct abstraction up front. If you want to see complexity, try working in an application that used to use decimal and now has an additional Currency property next to every decimal property.
If you use the wrong abstraction up front, replacing it later will be a hundred times more work. That means potentially introducing defects into existing code, and the best part is that those defects will likely involve amounts of money, transactions with money, or just anything with money.
And it's not that difficult to use something other than decimal. Google "nuget money type" and you'll see that numerous developers have created such abstractions (including me). It's easy. It's as easy as using DateTime instead of storing a date in a string.
Create your own class. This may seem odd, but no built-in .NET type adequately covers different currencies.