Do minor currency units have an ISO standard? - internationalization

ISO 4217 defines 3-letter currency codes:
EUR
USD
LKR
GBP
Do currencies' minor units (cent, pence) have an ISO or similar standard, too, that defines codes for those sub-units, like
ct
p
?

The standard also defines the relationship between the major currency unit and any minor currency unit. Often, the minor currency unit has a value that is 1/100 of the major unit, but 1/1000 is also common. Some currencies do not have any minor currency unit at all. In others, the major currency unit has so little value that the minor unit is no longer generally used (e.g. the Japanese sen, 1/100th of a yen). This is indicated in the standard by the currency exponent. For example, USD has exponent 2, while JPY has exponent 0. Mauritania does not use a decimal division of units, setting 1 ouguiya (UM) = 5 khoums, and Madagascar has 1 ariary = 5 iraimbilanja.
Wikipedia.
As for a better word, how does minor currency unit suit? Wikipedia also refers to it as a sub-unit, so take your pick.
There is a table on that Wikipedia article listing the standard precision for the minor currency unit.
As a sidenote, Wikipedia provides the fractional unit name for all circulating currencies.
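To make the exponent arithmetic concrete, here is a rough Go sketch (Go, the map, and the minorToMajor helper are purely illustrative; a real application should load the exponents from the current ISO 4217 tables rather than hard-coding them):

```go
package main

import "fmt"

// A few ISO 4217 exponents for illustration only.
var exponent = map[string]int{
	"USD": 2, // 1 dollar = 100 cents
	"JPY": 0, // no minor unit in general use
	"BHD": 3, // 1 dinar = 1000 fils
}

// minorToMajor renders an integer amount of minor units (cents, fils, ...)
// as a decimal string in the major unit, using the currency's exponent.
func minorToMajor(minor int64, code string) string {
	exp := exponent[code]
	if exp == 0 {
		return fmt.Sprintf("%d %s", minor, code)
	}
	scale := int64(1)
	for i := 0; i < exp; i++ {
		scale *= 10
	}
	sign := ""
	if minor < 0 {
		sign, minor = "-", -minor
	}
	return fmt.Sprintf("%s%d.%0*d %s", sign, minor/scale, exp, minor%scale, code)
}

func main() {
	fmt.Println(minorToMajor(123456, "USD")) // 1234.56 USD
	fmt.Println(minorToMajor(500, "JPY"))    // 500 JPY
	fmt.Println(minorToMajor(1500, "BHD"))   // 1.500 BHD
}
```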

You need to look at the standard itself.
From the ISO website:
ISO 4217:2008 specifies the structure for a three-letter alphabetic code and an equivalent three-digit numeric code for the representation of currencies and funds. For those currencies having minor units, it also shows the decimal relationship between such units and the currency itself.
ISO 4217:2008 also establishes procedures for a Maintenance Agency, and specifies the method of application for codes.
The key bit is:
it also shows the decimal relationship between such units and the currency itself.
So to answer your question: I couldn't find an ISO standard that defines codes for the minor units themselves; ISO 4217 only specifies their decimal relationship to the major unit. Similar standards cover Commercial Administration and Finance.

In the financial markets there are roughly two established industry standards.
The first one is really a case-by-case agreement, mostly enforced by exchanges that have their securities quoted in the minor currency unit.
This has led to:
GBX for British pence
ZAC for South African cents
ILA for Israeli agorot
Probably pioneered by Reuters and Bloomberg, the second standard is far more widespread and consistent. The agreement is to lowercase the third letter to denote the minor units.
GBp, ZAr, ILs, USd, EUr, etc.
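As a rough sketch of how that convention is often handled (normalizeQuote and the flat divisor of 100 are assumptions for illustration, not part of any standard; a real system would look up the currency's actual ISO 4217 exponent):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalizeQuote maps a vendor-style minor-unit code such as "GBp" or "ZAr"
// to its upper-case ISO 4217 code plus a divisor for the quoted price.
// A divisor of 100 covers the pence/cent/agorot cases listed above.
func normalizeQuote(code string) (iso string, divisor float64) {
	r := []rune(code)
	if len(r) == 3 && unicode.IsLower(r[2]) {
		return strings.ToUpper(code), 100
	}
	return code, 1
}

func main() {
	iso, div := normalizeQuote("GBp")
	fmt.Println(iso, 1250.0/div) // GBP 12.5 (a 1,250p quote is GBP 12.50)
}
```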
Related discussions:
http://www.fixtradingcommunity.org/pg/discussions/topicpost/167427/

Related

What is the range of a VHDL integer definition in VHDL-2019?

While using the VHDL-2019 IEEE spec, section 5.2.3.1 General:
"However, an implementation shall allow the declaration of any integer
type whose range is wholly contained within the bounds –(2**63) and
(2**63)–1 inclusive."
(I added the exponential **)
Does this mean –(2**63) = -9223372036854775808 ?
In the 1993 spec it states -((2**31) - 1) and ((2**31) - 1):
-2147483647 & 2147483647
Does the new VHDL spec have an error in that definition?
Ken
The change is quite intentional. See LCS2016_026c. You could note this gives the same range as a 64-bit integer in programming languages. The non-symmetrical range comes from two's complement numbers, which are the basis of integer types in VHDL tool implementations now that the age of big iron with decimal-based ALUs has long since faded.
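For the specific arithmetic in the question: yes, -(2**63) is -9223372036854775808. A quick sanity check against the 64-bit two's-complement limits (shown in Go rather than VHDL, purely for illustration):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// The asymmetric 64-bit two's-complement range: -(2**63) .. (2**63)-1.
	var lo, hi int64 = math.MinInt64, math.MaxInt64
	fmt.Println(lo) // -9223372036854775808
	fmt.Println(hi) // 9223372036854775807
}
```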
The previous symmetrical range was not an implementation concern; VHDL arithmetic semantics require run-time detection of overflow or underflow. This change allows simpler detection based on changing signs, without testing values, while performing arithmetic in a universal integer still larger than 64 bits.
The value range increase is an attempt to force synthesis vendors to support more than the minimum range specified in previous editions of the standard. How well that works out (and over what implementation interval) will be a matter of history at some future date. There are also secondary effects based on index ranges (IEEE Std 1076-2019, 5.3.2.2 Index constraints and discrete ranges) and positional correspondence for enumerated types (5.2.2 Enumerated types, 5.2.2.1 General). It's not practicable to simulate (or synthesize) composite objects with extreme index value ranges, starting with stack-size issues. Industry practice isn't settled, and may yet end with today's HDLs being obsoleted.
Concerns about the accuracy of the standard's semantic description can be addressed to the IEEE-SA's VASG subcommittee, which encourages participation by interested parties. You will find Stack Overflow vhdl tag denizens here who have been involved in the standardization process.

Is every integer a possible year value?

I've just been reading this mind-blowing and hilarious post about some common falsehoods regarding time. Number forty is:
Every integer is a theoretical possible year
This implies that not every integer is a theoretically possible year. What is the negative case here? What integer is not a theoretically possible year?
Depending on the context, 0 is not a valid year number. In the Gregorian calendar we're currently using (and in its predecessor, the Julian calendar), the year 1 (CE/AD) was immediately preceded by the year -1 (1 BCE/BC). (For dates before the Gregorian calendar was introduced, we can use either the Julian calendar or the proleptic Gregorian calendar).
In a programming context, this may or may not be directly relevant. Different languages, libraries, and frameworks represent years in different ways. ISO 8601, for example, supports years from 0000 to 9999, where 0000 is 1 BCE; wider ranges can be supported by mutual agreement. Some implementations of the C standard library can only represent times from about 1901 to 2038; others, using a 64-bit time_t, can represent a much wider range and typically treat -1, 0, and 1 as consecutive years.
Ultimately you'll need to check the documentation for whatever language/library/framework you're using.
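As one concrete example of how a particular library makes this choice (Go's time package is just an assumption here, used for illustration), Go works on a proleptic Gregorian calendar and accepts year 0 and negative years:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The day before 1 January of year 1 falls in year 0 here,
	// matching ISO 8601's year numbering (0000 = 1 BCE).
	jan1CE := time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
	dayBefore := jan1CE.AddDate(0, 0, -1)
	fmt.Println(dayBefore.Year(), dayBefore.Month(), dayBefore.Day()) // 0 December 31
}
```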

Go type for purchasing/financial calculations

I'm building an online store in Go. As would be expected, several important pieces need to record exact monetary amounts. I'm aware of the rounding problems associated with floats (i.e. 0.3 cannot be exactly represented, etc.).
The concept of currency seems easy to just represent as a string. However, I'm unsure of what type would be most appropriate to express the actual monetary amount in.
The key requirements would seem to be:
Can exactly express decimal numbers down to a specified number of decimal places, based on the currency (some currencies use more than 2 decimal places: http://www.londonfx.co.uk/ccylist.html )
Obviously basic arithmetic operations are needed - add/sub/mul/div.
Sane string conversion - which would essentially mean conversion to its decimal equivalent. Also, internationalization would need to be at least possible, even if all of the logic for that isn't built in (1.000 in Europe vs 1,000 in the US).
Rounding, possibly using alternate rounding schemes like Banker's rounding.
Needs to have a simple and obvious way to correspond to a database value - MySQL in my case. (It might make the most sense to treat the value as a string at this level in order to ensure its value is preserved exactly.)
I'm aware of math/big.Rat and it seems to solve a lot of these things, but for example its string output won't work as-is, since it will output in the "a/b" form. I'm sure there is a solution for that too, but I'm wondering if there is some sort of existing best practice that I'm not aware of (couldn't easily find) for this sort of thing.
UPDATE: This package looks promising: https://code.google.com/p/godec/
You should keep i18n decoupled from your currency implementation. So no, don't bundle everything in a struct and call it a day. Mark what currency the amount represents but nothing more. Let i18n take care of formatting, stringifying, prefixing, etc.
Use an arbitrary-precision numerical type like math/big.Rat. If that is not an option (because of serialization limitations or other barriers), then use the biggest fixed-size integer type you can to represent the amount of atomic money in whatever currency you are representing – cents for USD, yen for JPY, rappen for CHF, cents for EUR, and so forth.
When using the second approach, take extra care not to incur overflow, and define a clear and meaningful rounding behaviour for division.
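A minimal sketch of that integer approach (the Money type and its fields are illustrative, not an existing package; overflow checks and division/rounding are deliberately left out, as noted above):

```go
package main

import "fmt"

// Money holds an amount in the currency's smallest unit (cents, etc.)
// together with its ISO 4217 code and exponent.
type Money struct {
	Minor    int64  // amount in minor units
	Currency string // e.g. "USD"
	Exponent int    // e.g. 2 for USD, 0 for JPY
}

// Add refuses to mix currencies; a real implementation would also
// guard against int64 overflow.
func (m Money) Add(n Money) (Money, error) {
	if m.Currency != n.Currency {
		return Money{}, fmt.Errorf("currency mismatch: %s vs %s", m.Currency, n.Currency)
	}
	return Money{m.Minor + n.Minor, m.Currency, m.Exponent}, nil
}

// String renders the amount in major units; locale-aware formatting is
// deliberately left to a separate i18n layer, as recommended above.
func (m Money) String() string {
	scale := int64(1)
	for i := 0; i < m.Exponent; i++ {
		scale *= 10
	}
	sign, v := "", m.Minor
	if v < 0 {
		sign, v = "-", -v
	}
	if m.Exponent == 0 {
		return fmt.Sprintf("%s%d %s", sign, v, m.Currency)
	}
	return fmt.Sprintf("%s%d.%0*d %s", sign, v/scale, m.Exponent, v%scale, m.Currency)
}

func main() {
	a := Money{1999, "USD", 2}
	b := Money{1, "USD", 2}
	sum, _ := a.Add(b)
	fmt.Println(sum) // 20.00 USD
}
```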

Decimal representation: different purpose of zoned decimal and packed decimal

I'm taking a Computer Science course, and when I read these definitions I understand them. But I don't know what the different purposes of the two representations are, and why.
Here is the short explanation of purpose that my book gives:
Zoned decimal: highly compatible with text data.
Packed decimal: faster computing speed.
What I want to know is:
1) In the zoned decimal representation there is a zone section attached to every digit. Why? I see no purpose in it :(
2) Why do they say zoned decimal is compatible with text data, and why is packed decimal faster?
Thanks :)
Firstly - where are you learning CS? Those terms are from the 1960s; the more common name is BCD (Binary Coded Decimal).
Zoned decimal uses an entire byte for each digit. This means you can just print a number as if it were text (each 'character' stores a digit 0-9), but since there are only 10 digits and a byte can hold 256 different values, this is a bit wasteful.
Packed decimal uses the fact that 4 bits can store 16 different values. So you can store two digits in a byte (top 4 bits and bottom 4 bits). This is still a bit wasteful, since each nibble only uses 10 of its 16 possible values, but it's pretty easy to extract the two digits with just shift and mask operations.
Pretty much the only place you would see BCD these days is in some low-level hardware where you want to read or transmit a digit without using a microprocessor at all; it's easy to make a BCD counter just in transistors.
But if you want to do any maths, you either have to do long multiplication on each digit, like you would on paper, or convert into regular integers and back again.
Both of these representations have fallen out of favor, perhaps because they are not directly supported by C, and hence by all of the systems descended from Unix.
Packed decimal has an advantage in two respects: since it takes up less space it can get off the bus and into the processor faster, and many CISC instruction sets have dedicated instructions for decimal arithmetic. To quote from http://en.wikipedia.org/wiki/Packed_decimal#Packed_BCD:
Packed BCD [binary coded decimal] is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. Besides the IBM System/360 and later compatible mainframes, packed BCD was implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and was the native format for the Burroughs Corporation Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series).
Zoned decimal (http://en.wikipedia.org/wiki/Zoned_decimal#Zoned_decimal) has an easy mapping between characters on punch cards and their representation in memory, which perhaps explains your textbook's claim that it is "highly compatible with text data." As the Wikipedia article suggests, it's a term more used in IBM mainframe circles. On minis, we tended to just call it plain old decimal, PIC 9 data.
"Zoned Decimal" in its natural environment is meant to be compatable with the EBCDIC char set .
ASCII represents numbers as x'3x' -- x'39' which display as character "0" to "9".
The EBCDIC character sets (which has its origins in Hollerith pucnched cards) uses a similar but different scheme where x'F0' is displayed as characer "0' and x'F9' is displayed as character '9'.
Punched cards had a fixed length of 80 characters in many cases 10 or 12 of these characters were eaten up with record type identifiers and sequence numbers (desperately important if you dropped a bunch of cards on the floor!). So space was at a premium. Rather than enter a "+" or "-" character next to each number an "overpunch" extra holes near the top bit of the card was used to represent a positive or negative numbers, so saving a byte.
These overpunched characters were encdoded in EBCDIC as x"D0' to x'D9" for -0 to -9 and x'C0' to x'C9' for +0 to +9 usually in the last digit of the number.
Hence the "Zoned Decimal" format. The first four bits of each byte are the Zone, the second four bits the "number" to -42 was encoded as x'F4D2'.
This is more of a convention than anything else as the computer could not do anything with this format. So it needed to be encoded into "packed" format before any calculations took place. This is pretty easy s 'X'F4D2' -> x'042D' is mostly a case a grabbing the last zone then extracting the "numeric" four bits from each byte, which, could then be converted to binary.
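A small sketch of that zoned-to-packed step in Go (illustrative only; on a real mainframe this is the job of the native PACK instruction or the compiler, and the sign handling here is simplified to "copy the last zone nibble", exactly as in the x'F4D2' -> x'042D' example above):

```go
package main

import "fmt"

// zonedToPacked converts EBCDIC zoned-decimal bytes (one digit per byte,
// zone in the high nibble) into packed-decimal bytes (two digits per byte,
// sign in the final nibble). The sign nibble is taken from the zone of the
// last zoned byte (D = negative).
func zonedToPacked(zoned []byte) []byte {
	if len(zoned) == 0 {
		return nil
	}
	sign := zoned[len(zoned)-1] >> 4 // zone of the last byte carries the sign

	// Collect the digit nibbles, pad with a leading zero nibble if needed,
	// then append the sign nibble.
	nibbles := make([]byte, 0, len(zoned)+2)
	if len(zoned)%2 == 0 {
		nibbles = append(nibbles, 0) // pad so the result fills whole bytes
	}
	for _, b := range zoned {
		nibbles = append(nibbles, b&0x0F)
	}
	nibbles = append(nibbles, sign)

	// Pack two nibbles per byte.
	packed := make([]byte, 0, len(nibbles)/2)
	for i := 0; i < len(nibbles); i += 2 {
		packed = append(packed, nibbles[i]<<4|nibbles[i+1])
	}
	return packed
}

func main() {
	// -42 in zoned decimal is X'F4D2'; packed it becomes X'042D'.
	fmt.Printf("% X\n", zonedToPacked([]byte{0xF4, 0xD2})) // 04 2D
}
```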
When IBM mainframes were designed, the largest group of users were banks, insurance companies and utility companies. The bulk of their processing followed this pattern:
read punch card.
read tape record.
add monthly payment to balance
store new balance on tape
print new balance
Most of the calculations involved currency amounts and most of the results were displayed immediately. It became clear that if the machine could do the arithmetic directly on the packed decimal values you could avoid several expensive "convert to binary" and "convert to decimal" instructions. As a bonus it made it easy to place the decimal point at the correct position and perform any decimal rounding. So a great deal of work went into implementing native packed decimal instructions (zero, add, subtract, multiply, divide, shift and round etc.).
This has been the preferred currency format for IBM mainframes ever since.
For many years developers on other platforms poured scorn on the mainframers for using such an archaic format, and only recently began to realize how difficult it is to do fixed-point decimal arithmetic to the standards accountants and tax collectors expect. Thanks to the efforts of Mike Cowlishaw and others, the rest of the world has caught up with the venerable IBM 360, and Java programmers can now calculate sales tax correctly using the BigDecimal class, which is based on a variation of the old packed decimal format.

Should I use NSDecimalNumber to deal with money?

As I started coding my first app I used NSNumber for money values without thinking twice. Then I thought that maybe c types were enough to deal with my values. Yet, I was advised in the iPhone SDK forum to use NSDecimalNumber, because of its excellent rounding capabilities.
Not being a mathematician by temperament, I thought that the mantissa/exponent paradigm might be overkill; still, googling around, I realised that most discussions about money/currency in Cocoa referred to NSDecimalNumber.
Notice that the app I am working on is going to be internationalised, so the option of counting the amount in cents is not really viable, for the monetary structure depends greatly on the locale used.
I am 90% sure that I need to go with NSDecimalNumber, but since I found no unambiguous answer on the web (something like: "if you deal with money, use NSDecimalNumber!") I thought I'd ask here. Maybe the answer is obvious to most, but I want to be sure before starting a massive re-factoring of my app.
Convince me :)
Marcus Zarra has a pretty clear stance on this: "If you are dealing with currency at all, then you should be using NSDecimalNumber." His article inspired me to look into NSDecimalNumber, and I've been very impressed with it. IEEE floating point errors when dealing with base-10 math have been irritating me for a while (1 * (0.5 - 0.4 - 0.1) = -0.00000000000000002776) and NSDecimalNumber does away with them.
NSDecimalNumber doesn't just add another few digits of binary floating point precision, it actually does base-10 math. This gets rid of the errors like the one shown in the example above.
Now, I'm writing a symbolic math application, so my desire for 30+ decimal digit precision and no weird floating point errors might be an exception, but I think it's worth looking at. The operations are a little more awkward than simple var = 1 + 2 style math, but they're still manageable. If you're worried about allocating all sorts of instances during your math operations, NSDecimal is the C struct equivalent of NSDecimalNumber and there are C functions for doing the exact same math operations with it. In my experience, these are plenty fast for all but the most demanding applications (3,344,593 additions/s, 254,017 divisions/s on a MacBook Air, 281,555 additions/s, 12,027 divisions/s on an iPhone).
As an added bonus, NSDecimalNumber's descriptionWithLocale: method provides a string with a localized version of the number, including the correct decimal separator. The same goes in reverse for its initWithString:locale: method.
Yes. You have to use NSDecimalNumber and not double or float when you deal with currency on iOS.
Why is that??
Because we don't want to get things like $9.9999999998 instead of $10
How does that happen?
Floats and doubles are approximations; they always come with a rounding error. The format computers use to store decimal values causes this rounding error.
If you need more details read
http://floating-point-gui.de/
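The effect is easy to reproduce in any language that uses binary floating point; here is the same kind of calculation from the earlier answer, sketched in Go purely for illustration:

```go
package main

import "fmt"

func main() {
	// 0.4 and 0.1 have no exact binary representation, so the "obvious"
	// zero never quite appears. (Variables, not constants, are used so Go
	// doesn't fold the expression exactly at compile time.)
	a, b, c := 0.5, 0.4, 0.1
	fmt.Println(a - b - c) // something like -2.7755575615628914e-17, not 0
}
```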
According to the Apple docs:
NSDecimalNumber, an immutable subclass of NSNumber, provides an object-oriented wrapper for doing base-10 arithmetic. An instance can represent any number that can be expressed as mantissa x 10^exponent, where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from -128 through 127.
So NSDecimalNumber is recommended for dealing with currency.
(Adapted from my comment on the other answer.)
Yes, you should. An integral number of pennies works only as long as you don't need to represent, say, half a cent. If that happens, you could change it to count half-cents, but what if you then need to represent a quarter-cent, or an eighth of a cent?
The only proper solution is NSDecimalNumber (or something like it), which puts off the problem to 10^-128¢ (i.e., a decimal point followed by 127 zeros and then a 1).
(Another way would be arbitrary-precision arithmetic, but that requires a separate library, such as the GNU MP Bignum library. GMP is under the LGPL. I've never used that library and don't know exactly how it works, so I couldn't say how well it would work for you.)
[Edit: Apparently, at least one person—Brad Larson—thinks I'm talking about binary floating-point somewhere in this answer. I'm not.]
I've found it convenient to use an integer to represent the number of cents and then divide by 100 for presentation. Avoids the whole issue.
A better question is, when should you not use NSDecimalNumber to deal with money. The short answer to that question is, when you can't tolerate the performance overhead of NSDecimalNumber and you don't care about small rounding errors because you're never dealing with more than a few digits of precision. The even shorter answer is, you should always use NSDecimalNumber when dealing with money.
Visa, MasterCard and others use integer values when passing amounts. It's up to the sender and receiver to parse amounts correctly according to the currency exponent (divide or multiply by 10^num, where num is the exponent of the currency). Note that different currencies have different exponents. Usually it's 2 (hence we divide and multiply by 100), but some currencies have exponent 0 (VND, etc.) or 3.
