
What is the best data type to use for money in C#?

You might find answers from this post helpful.
Here's a mapping for all data types: docs.microsoft.com/en-us/dotnet/framework/data/adonet/…
Also, if you're using data annotations, add using System.ComponentModel.DataAnnotations; and mark the property with [DataType(DataType.Currency)]: msdn.microsoft.com/en-us/library/…
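To illustrate that suggestion, here is a minimal sketch (the Invoice class and Total property are made-up names; only the using directive and attribute come from the comment above):

using System.ComponentModel.DataAnnotations;

// Hypothetical model class, used only to show where the attribute goes.
public class Invoice
{
    // Marks the decimal property as currency for UI and validation frameworks;
    // it does not change how the value is stored or calculated.
    [DataType(DataType.Currency)]
    public decimal Total { get; set; }
}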

Yves

As described in the documentation for the decimal keyword:

The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations.

You can use a decimal as follows:

decimal myMoney = 300.5m;

You should explain what about that link is important. An answer should be good enough on its own, with a link as additional reference or detail. See stackoverflow.com/help/how-to-answer
So the minimum-length answer can be fewer characters than the minimum-length comment - interesting! Not that I have a problem with the terse/concise answer, especially when it is also "deep" in that it links to further discussion.
Amazing answer, and I don't feel it needs further explanation since it completely answers the question. The link to MSDN documentation is a bonus as far as I'm concerned. Bravo!
Community

System.Decimal

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.

I'd like to point to this excellent answer by zneak on why double shouldn't be used.
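A small sketch of that last point about rounding (the values are arbitrary examples):

decimal oneThird = 1m / 3m;         // 0.3333333333333333333333333333 (28 significant digits)
decimal almostOne = oneThird * 3m;  // 0.9999999999999999999999999999, not exactly 1
decimal sum = 0.1m + 0.2m;          // exactly 0.3; decimal fractions like these are represented exactly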


iliketocode

Use the Money pattern from Patterns of Enterprise Application Architecture: specify the amount as a decimal and the currency as an enum.
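A minimal sketch of that pattern (the enum members and the operator below are illustrative, not taken from the book):

using System;

public enum Currency { USD, EUR, GBP }

public readonly struct Money
{
    public decimal Amount { get; }
    public Currency Currency { get; }

    public Money(decimal amount, Currency currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Refuse to combine amounts in different currencies rather than silently mixing them.
    public static Money operator +(Money a, Money b)
    {
        if (a.Currency != b.Currency)
            throw new InvalidOperationException("Cannot add amounts in different currencies.");
        return new Money(a.Amount + b.Amount, a.Currency);
    }
}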


I was actually going to suggest this, but I make Currency a class so I can define an exchange rate (in relation to a "base currency", often the US dollar [which I set to have an exchange rate of 1.00]).
For future visitors to this thread (like me), there is now this: nuget.org/packages/Money and it rocks!
Wondering if such a type should be a struct or class. A decimal + an (int) enum makes it 20 bytes. My money is on struct still.
That Money nuget has a dead github link for project site so...no docs?
The problem with this is that if you're creating your own implementation, you have to figure out how to actually persist it, and the most popular ORM (EF) has no support at all for custom data types. So you're forced to get deep into the weeds to do what should be a pretty straightforward thing.
SquidScareMe

Decimal. If you choose double, you're leaving yourself open to rounding errors.
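A quick demonstration of the difference (a sketch; the literals are the usual textbook examples):

using System;

double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);    // False: none of 0.1, 0.2, 0.3 has an exact binary representation

decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);   // True: these decimal fractions are stored exactly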


@Jess double can introduce rounding errors because binary floating point cannot represent all decimal numbers exactly (e.g. 0.01 has no exact binary representation). Decimal, on the other hand, represents decimal numbers exactly. (The trade-off is that Decimal has a smaller range than floating point.) Floating point can give you *inadvertent* rounding errors (e.g. 0.1 + 0.2 != 0.3). Decimal can give you rounding errors, but only when you ask for them (e.g. Math.Round(0.01m + 0.02m) returns zero).
@IanBoyd: The value "$1.57" can be precisely represented (double)157. If one uses double and carefully applies scaling and domain-specific rounding when appropriate, it can be perfectly precise. If one is sloppy in one's rounding, decimal may yield results which are semantically incorrect (e.g. if one adds together multiple values which are supposed to be rounded to the nearest penny, but doesn't actually around them first). The only good thing about decimal is that scaling is built-in.
@supercat, regarding the comment "if one adds together multiple values which are supposed to be rounded to the nearest penny, but doesn't actually round them first": I do not see how a float would solve this. It is a user error and has nothing to do with decimals, IMHO. I get the point, but I feel it is misplaced, mainly because IanBoyd did specify "...only when you ask for it".
dommer

decimal has a smaller range, but greater precision - so you don't lose all those pennies over time!

Full details here:

http://msdn.microsoft.com/en-us/library/364x0z75.aspx


Lennaert

Agree with the Money pattern: Handling currencies is just too cumbersome when you use decimals.

If you create a Currency class, you can then put all the logic relating to money there, including a correct ToString() method, more control over parsing values, and better control over divisions.

Also, with a Currency class, there is no chance of unintentionally mixing money up with other data.
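For instance, how a money value is formatted depends entirely on the culture you pass, which is exactly the kind of decision a Currency class can own (a sketch; the amount and cultures are arbitrary examples):

using System;
using System.Globalization;

decimal price = 1234.50m;
// The "C" (currency) format gives different results per culture; a Currency class can pick the right one
// instead of relying on whatever the current thread culture happens to be.
Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("en-US"))); // $1,234.50
Console.WriteLine(price.ToString("C", CultureInfo.GetCultureInfo("de-DE"))); // 1.234,50 €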


dsz

Another option (especially if you're rolling your own class) is to use an int or an Int64, and designate the lower four digits (or possibly even two) as "right of the decimal point". So "on the edges" you'll need some "* 10000" on the way in and some "/ 10000" on the way out. This is the storage mechanism used by Microsoft's SQL Server, see http://msdn.microsoft.com/en-au/library/ms179882.aspx

The nicety of this is that all your summation can be done using (fast) integer arithmetic.
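A rough sketch of that approach (the helper names are made up; the 10,000 scale mirrors SQL Server's four decimal places):

// Store amounts as a long count of ten-thousandths (four digits right of the decimal point).
long ToScaled(decimal amount) => (long)decimal.Round(amount * 10_000m);  // "* 10000" on the way in
decimal FromScaled(long scaled) => scaled / 10_000m;                     // "/ 10000" on the way out

long a = ToScaled(19.99m);        // 199900
long b = ToScaled(0.01m);         // 100
long sum = a + b;                 // summation uses fast, exact integer arithmetic
decimal total = FromScaled(sum);  // 20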


Scott Hannen

Most applications I've worked with use decimal to represent money. This is based on the assumption that the application will never be concerned with more than one currency.

This assumption may be based on another assumption, that the application will never be used in other countries with different currencies. I've seen cases where that proved to be false.

Now that assumption is being challenged in a new way: New currencies such as Bitcoin are becoming more common, and they aren't specific to any country. It's not unrealistic that an application used in just one country may still need to support multiple currencies.

Some people will say that creating or even using a type just for money is "gold plating," or adding extra complexity beyond the known requirements. I strongly disagree. The more ubiquitous a concept is within your domain, the more important it is to make a reasonable effort to use the correct abstraction up front. If you want to see complexity, try working in an application that used to use decimal and now there's an additional Currency property next to every decimal property.

If you use the wrong abstraction up front, replacing it later will be a hundred times more work. That means potentially introducing defects into existing code, and the best part is that those defects will likely involve amounts of money, transactions with money, or just anything with money.

And it's not that difficult to use something other than decimal. Google "nuget money type" and you'll see that numerous developers have created such abstractions (including me). It's easy. It's as easy as using DateTime instead of storing a date in a string.


Noel Kennedy

Create your own class. This may seem odd, but no built-in .NET type adequately covers different currencies.