Talk:Arbitrary-precision arithmetic


Didn't you want to talk about big floating-point numbers? Is anyone interested?


I moved the following HTML comments from the article source over to this talk page. They may be more useful here:

<!-- Please give known applications to specific problems, rather than making vague statements that bignum arithmetic is used in [[theoretical physics]] etc. -->
<!-- TODO: mention division algorithms, and hence square root etcetera. Mention arithmetic-geometric mean algorithms for computing e^x, trig functions, pi, etcetera. -->

Herbee 20:57, 23 April 2006 (UTC)

The newly-added Infinite Precision section.

Shouldn't this be arbitrary precision as well? I'm not familiar with the work being referenced, but I'm fairly certain that there is no such animal. Trying to get infinite precision off of 4/3 * pi just isn't happening on computers with finite memory any time soon. SnowFire 16:16, 22 May 2006 (UTC)

I agree. It's the usual confusion between "arbitrarily large" and "infinite". —Steven G. Johnson 16:45, 22 May 2006 (UTC)
The text basically describes symbolic algebra. Fredrik Johansson 16:52, 22 May 2006 (UTC)
You're right, upon closer reading symbolic algebra seems to be what was intended by the text. It's still misleading to call it "infinite precision", however, since (a) the symbolic expressions have arbitrary (limited by memory) but finite size, which directly corresponds to a form of finite precision, and (b) you're not really doing "arithmetic" until you calculate the numeric result, and that is done with arbitrary but finite precision. Besides, if you Google for "infinite precision arithmetic", it seems that this term is mainly used as a synonym for arbitrary precision. —Steven G. Johnson 16:59, 22 May 2006 (UTC)

I agree with all of the above given suitable context, but not with the text in the article which reads "...infinite-precision arithmetic, which is something of a misnomer: the number of digits of precision always remains finite...". I would argue that any representation for which there is a bijective mapping onto the set being modelled is precise. It's not infinite precision that's the problem, it's trying to represent elements of an infinite set on a finite state machine. IMHO, it is not an error to describe bignum arithmetic as "infinite precision" if it is operating over a finite set, an obvious example being the integers modulo N, for not-extremely-large N. --Will

Sure, and a one-bit integer is "infinite precision" in your sense, as long as you are only trying to represent the set {0,1}. As far as I can tell, however, that's simply not the way the term "precision" and especially "infinite precision" is used in the setting of numeric computation. In this context, the "precision" of a number is the number of significant digits that are specified (or some equivalent measure of information content). (Yes, the general English meaning is more vague than that...most English words are, but that doesn't prevent them from having a more, ahem, precise meaning in a technical context.) I think the meaning that you are describing would be better termed "perfect precision". —Steven G. Johnson 04:03, 28 May 2006 (UTC)
If a number is represented in a format which provides n base-d digits of precision (even this makes unnecessary assumptions about representation), then the error is proportional to d^-n. Thus n = -log(error), with n going to infinity as error goes to zero. I would have said that if the error is exactly zero for all operations and for all operands, then the precision is infinite. This is applicable to the modular arithmetic case (including the n=1 case you mentioned), but not for general symbolic algebra systems, where the size of the expression tree is artificially bounded by the machine's available state; I therefore don't agree with the section which keeps appearing. I'm not saying that "infinite precision" is a more appropriate term than "arbitrary precision" for finite number systems, merely that it's not inaccurate. But on reflection perhaps I was misinterpreting the sentence; which on re-reading is perhaps only dismissing it as a misnomer when used as a synonym for arbitrary precision. --Will (86.141.151.165 11:06, 28 May 2006 (UTC))
Be careful not to confuse accuracy/error with precision. 3.000000000000000 is very precise, but it is not a very accurate representation of, say, π. —Steven G. Johnson 18:54, 28 May 2006 (UTC)
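
A minimal Python sketch of the precision/accuracy distinction just described (illustrative only; the standard decimal module merely makes the digit counts explicit):

  from decimal import Decimal

  x  = Decimal("3.000000000000000")   # 16 significant digits: very precise
  pi = Decimal("3.141592653589793")   # pi to the same 16-digit precision

  # Precision is how many digits the representation carries; accuracy is how
  # close the value is to the quantity it is meant to represent.
  print(abs(x - pi))                  # 0.141592653589793 -- precise, not accurate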

You are wrong. 4/3*pi is a computable number, thus it can be represented in a finite manner in the memory of the computer. There is an example in the references section. Sounds like you don't know what you're talking about.  Grue  06:17, 28 May 2006 (UTC)

It's the same thing that happens if I type 4/3*Pi in Mathematica. Symbolic algebra: expressions evaluate to expressions, with pointers to code to compute the atomic elements numerically with any desired precision. The only difference between Mathematica and the Lisp library is that the latter shows a numerical value instead of the underlying expression in the REPL. Fredrik Johansson 09:43, 28 May 2006 (UTC)
The set of computable reals is infinitely large. The set of states for a machine running Mathematica (or any similar package) is finite. Therefore there exist pairs of distinct computable numbers which have identical representation within the machine. Where this happens, there is nonzero error. You can't have infinite precision if there's error. --Will (86.141.151.165 11:16, 28 May 2006 (UTC))
No, some computable numbers are just too complicated to fit in memory and they can't be represented. So, only a finite set of different computable numbers fits in any given computer. As for symbolic computation, I think there are some noticeable differences. In the case of π, there is a symbol for the number, but many computable numbers have no symbol, nor can they be represented as a finite composition of arithmetic operations on numbers that have a symbol (there are also numbers that have a symbol but are not computable, like Chaitin's constant).  Grue  12:44, 28 May 2006 (UTC)
Grue, this article is about arithmetic and that means computation of a numeric result. By definition, every computable number can be represented by a finite-length (though possibly very large) computer program that computes it to any arbitrary (but not infinite) precision in finite time. The representation of a number by a computer program is perfectly precise in the sense of determining a given real number uniquely, but does not provide infinite-precision arithmetic. —Steven G. Johnson 19:15, 28 May 2006 (UTC)
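
As a sketch of the representation-by-program idea (using the mpmath package listed later on this page; any arbitrary-precision float library would do), the "number" 4/3*pi can be held as a short procedure that yields it to any requested, finite, precision:

  from mpmath import mp, pi

  def four_thirds_pi(digits):
      """Evaluate 4/3 * pi to `digits` significant decimal digits."""
      mp.dps = digits        # set the working precision
      return 4 * pi / 3      # the constant pi is re-evaluated at that precision

  print(four_thirds_pi(10))  # 4.188790205
  print(four_thirds_pi(50))  # 4.1887902047863909846168578443726705122628925325001

The program is a finite, exact description of the number, but any arithmetic actually performed with it happens at an arbitrary yet finite precision.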

Table needs cleanup

The table of arbitrary precision libraries needs improvement. For one thing, "Number Type" for several of the libraries does not actually list what number types are provided. It would also be useful to include a short overview of what features each library provides in addition to just the number types. For example, GMP implementing highly optimized low level operations, MPFR providing correctly rounded transcendental functions, etc. Fredrik Johansson 13:46, 3 October 2008 (UTC)

Those sound like useful suggestions, but I would refrain from saying that one library or another is "highly optimized" in the absence of benchmarks comparing several of the available libraries. When someone claims that their code is "highly optimized" it usually means "I made it much faster than my first implementation," which doesn't mean that there isn't a completely different implementation that is much faster. Put in terms of Wikipedia policy, a project's web site claiming that it is fast is not a reputable source unless they have published benchmarks backing them up. —Steven G. Johnson (talk) 16:35, 3 October 2008 (UTC)

Proposed addition to list of arbitrary-precision libraries

I have an arbitrary-precision library I'd like to add to the list of arbitrary-precision libraries, but it would be a conflict of interest to add it myself, so I'm proposing that someone else add it, per the instructions here:

WP:SCOIC

The list I'm referring to is the table toward the end, where the first item in the list is "apfloat".

My understanding is that it's in the general Wikipedia interest for the list to be more complete rather than less, and that therefore any useful arbitrary-precision library should be included.

Also, the library I'd like to add to the list, xlPrecision, is unique in that it is designed to be easy and intuitive for Office VBA programmers (especially Excel VBA programmers) who may have no knowledge of C, C++, Java, or any of the other languages represented in the list, and who therefore might have difficulty using the other libraries, at least in the short term.

Here are the lines I'd like to see added to the list:

|xlPrecision |Decimal |VBA |Commercial

(Sorry, I'm not sure how to format those lines to appear correctly. The way they appear in "edit this page" is the way they would be added to the table.)


Thanks, and let me know if you have any questions for me.

Greg (talk) 02:20, 22 February 2009 (UTC)

W3bbo's edits

W3bbo just now changed several mentions of "floats" to "decimals". This is incorrect, as most arbitrary-precision libraries use binary, not decimal, floats. Binary and decimal arithmetic have very different characteristics and purposes. Using "decimal" generically for floating-point numbers is sloppy, especially so for an article on a topic like this one. Fredrik Johansson 11:26, 4 September 2009 (UTC)

I agree. The term "decimals" implies that the calculations occur using decimal arithmetic. I suspect that many arbitrary-precision packages use a binary representation (mantissa plus exponent) and only support decimal for input/output purposes. I support changing the references to "decimals" back to "floats". If some packages are shown to actually use decimal arithmetic internally, the term "decimals" would be appropriate for those. -- Tcncv (talk) 23:36, 6 September 2009 (UTC)
As a follow-up, I took a quick look at a half-dozen packages and, based on a scan of their web pages, documentation or source code, I found three that appear to use a decimal-power radix (apfloat, MAPM, and W3b.Sine) and three that use binary (ARPREC, mpmath, TTMath). While this is only a small sample, it does indicate a mix of representations, enough to justify undoing W3bbo's change. The original term "floats" is radix neutral, and if someone has time to research all of the packages, the more precise terms "decimal floats" or "binary floats" could be used. -- Tcncv (talk) 01:24, 7 September 2009 (UTC)
I made an attempt at updating the number types. It probably still needs work. Some of the long descriptions have made me consider whether it might be better to break the number type description into several columns. One column could list the general representation (integer, float, rational, real), another could simply indicate built-in complex number support (yes or no), and a third could indicate the internal radix used (decimal or binary). That might simplify descriptions like "decimal float and decimal complex float", while retaining all of the information. -- Tcncv (talk) 02:23, 7 September 2009 (UTC)

unbounded integers versus unlimited precision of floats

I found reading this article very confusing, since no distinction seems to be made between integer and floating point computations. Unbounded integers can of course be used to implement floating point arithmetic with a mantissa of varying length, but the two are not the same. Often one needs unbounded integers without any need to do floating point arithmetic; the factorial example is an illustration. In fact much of this article seems to be directed mostly at the case of integers, but it never says so clearly.

Personally I find the notion of "precision" not suited to apply to integers, although I can see that the precision article says it does apply (and implies that a machine integer representing the number 17 on a 64-bit machine does so with more precision than one on a 32-bit machine). But even if I accept that the current article should apply to the case of representing integers of arbitrary length, then I insist that it should talk about that and about representing real numbers with extensible precision separately. The integer case falls into the category of exact algebraic computation, but the real number case fundamentally cannot. The latter case involves problems that do not apply to the former, such as deciding just where to stop the development of decimals if a number such as sqrt(2) is used within a computation. The article would in my eyes improve a lot if these issues were addressed. Marc van Leeuwen (talk) 13:42, 6 September 2009 (UTC)

I agree that talk of precision when applied to integers is odd and shouldn't occur, and any such occurrences should be rephrased. However, with floating-point (or fixed point but with fractional parts) there can be mergers. For instance, in the computation of numbers such as pi, e, sqrt(2), the digit array of integers would represent the number with an integer part and a fractional part with the position of the split understood. Thus, in base 100, pi would be stored as the integers (03),(14),(15),(92),(65), (35), (89), (79), etc. in an array of integers, which array should start with index zero to correspond directly to the (negative) powers of the base (though some computer languages insist on a starting index of one), and there is an implied point after the first digit. This particular number is known to not exceed 99, and so there is no need to have higher order integer parts in its representation and during calculation. More generally, the point would have to "float" about, and some scheme for locating it in a general number would be needed. So, there are multi-element big numbers for pure integer calculations (no fractional part), whereas if fractional parts are needed then either a pair of integer values in p/q style or else a direct fractional representation of the number is used for fixed-point calculations and floating-point calculations.
All such numbers are indeed real numbers (actually, only rational numbers), but only with the pure integer forms are the axioms of real arithmetic followed (until overflow at the high-order end), since all lengths are finite and even an arbitrary length is still finite. The methods and representations are the same for a pure integer, a fixed-point, or a floating-point number. Computations with fractional numbers will very quickly cause overflow at the low-order end due to the precision requirement of real arithmetic, as with 1/3 in base 100 when not using a rational-number representation.
Once an external decision is made as to how many fractional digits shall be held, much proceeds as if the number is an integer with a known power offset, as in the fixed-point approach, and for floating-point, the accountancy is explicitly performed. NickyMcLean (talk) 21:12, 6 September 2009 (UTC)
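
A minimal Python sketch of the base-100 digit-array scheme described above (the function name is illustrative): an implied point after the first element, with carries propagated from the low-order end.

  BASE = 100

  def times_small(digits, m):
      """Multiply a base-100 digit array by a small integer m, carrying from
      the low-order (rightmost) element toward the high-order end."""
      out, carry = digits[:], 0
      for i in range(len(out) - 1, -1, -1):
          t = out[i] * m + carry
          out[i], carry = t % BASE, t // BASE
      if carry:
          raise OverflowError("carry out of the high-order digit")
      return out

  # pi in base 100 with an implied point after the first digit: 3.14159265...
  pi100 = [3, 14, 15, 92, 65, 35, 89, 79]
  print(times_small(pi100, 2))   # [6, 28, 31, 85, 30, 71, 79, 58] i.e. 6.28318530...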

example code

What's the n variable for in the example pseudocode? There's a loop, for n = 1 to 365, with no comment. One pass per day for a year? Or 365!? TomCerul (talk) 21:25, 6 July 2010 (UTC)

Being as much as possible an actual example, some number had to be chosen rather than waffle about with "N" or somesuch. In 1970 the annual NZ death count due to road accidents was about 360, or an average of one per day. At the time I wondered about the probability of all deaths occurring on the same day, a Poisson distribution calculation that required 365! which of course immediately overflowed the capacity of IBM1130 floating point arithmetic (32 bit), thus leading to the computation of log(365!) via Stirling's formula. Then I wondered about accuracy, and wrote a prog. to compute factorials via exact multi-precision arithmetic. Thus the 365. Cheers.NickyMcLean (talk) 21:37, 7 July 2010 (UTC)
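
For the curious, the figures above are easy to check with Python's built-in bignums; a sketch, with math.lgamma standing in for the Stirling computation:

  import math

  # 365! has far too many digits for any fixed-size float format, which is
  # why the original IBM 1130 program had to work with log(365!) instead.
  f = math.factorial(365)                  # exact, thanks to bignum integers
  print(len(str(f)))                       # 779 digits

  # log10(365!) via the log-gamma function (what Stirling's formula approximates):
  print(math.lgamma(366) / math.log(10))   # ~778.40, so 365! ~ 10**778.40
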
hrm, list format help...
  1. That was funny and morbid.
  2. You didn't really answer my question
  3. Your answer was close enough that you did kind of answer my question
  4. Cheers!
TomCerul (talk) 20:47, 9 July 2010 (UTC)
Huh? What do you mean by note 2? Anyway, why introduce blather about "FactorialLimit" rather than a plain constant? Perhaps I should have added a comment to the line for n:=1 to 365 do saying something like "Last factorial to compute". Actually, the prog. will run into trouble with the number of digits needed - multiplying by 100 adds two digits, then by 101 adds a further two and a little, etc. so 365! will need a lot of digits. Thus the ending point is likely to be the croaked message "Overflow!". The initial version of the prog. I wrote in 1971 was set to quit once the number of digits filled one lineprinter line of 120 digits (allowing space for the "=nnn!"), though obviously, the number could be rolled over multiple lines if desired. Output to a disc file can produce a line of arbitrary length. NickyMcLean (talk) 04:22, 10 July 2010 (UTC)
The 365 was a Magic Number that my programmer OCD needed to label. :) Were you joking about that death rate math? TomCerul (talk) 17:51, 16 July 2010 (UTC)
Nope, I wasn't joking. I'd just been introduced to the Poisson distribution in applied mathematics and statistics and physics, for instance with regard to Geiger-counter behaviour or earthquake occurrence, and indeed at the time the annual death rate on the New Zealand roads was about 360. There surely must be attention paid to the likely number of calls on emergency services in any area and allowances for some level of unusual surge in admissions, but my thought was of course rather extreme. Although 365 is a magic number, its use in this case was relevant, though the context was not given in the article. Actually, all integers are magical (or special, or interesting). For if not all were, one would be the largest magical integer, say n, and as such n + 1 would also be magical by association. Amongst my favourites are 396, 1103, 9801, and 26390, all appearing in Srinivasa Ramanujan's astounding formula for 1/pi. NickyMcLean (talk) 01:41, 17 July 2010 (UTC)
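
Since that formula came up: each term of Ramanujan's series contributes roughly eight further correct digits, which makes it a pleasant arbitrary-precision exercise. A sketch using Python's standard decimal module (the constants are the ones named above):

  from decimal import Decimal, getcontext
  from math import factorial

  def ramanujan_pi(digits):
      """1/pi = (2*sqrt(2)/9801) * sum over k of
      (4k)! * (1103 + 26390k) / ((k!)^4 * 396^(4k))."""
      getcontext().prec = digits + 10                # a few guard digits
      s = Decimal(0)
      for k in range(digits // 8 + 2):               # ~8 digits per term
          num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
          den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
          s += num / den
      return 1 / (2 * Decimal(2).sqrt() / 9801 * s)

  print(ramanujan_pi(50))   # 3.1415926535897932384626433832795028841971693993751...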

Recent reverts

Everyone involved is advised to read Wikipedia:Edit warring before this gets out of hand. Thanks! Guy Macon (talk) 21:55, 10 May 2011 (UTC)

dc

Text says "dc: the POSIX desk calculator". But dc doesn't seem to be part of the POSIX standard. See http://pubs.opengroup.org/onlinepubs/9699919799/idx/id.html for a list of everything in POSIX starting with 'd', like dd or df. 18:13, 12 September 2011 (UTC) — Preceding unsigned comment added by 187.23.87.80 (talk)

Then it should probably be changed to bc, which is part of the POSIX standard. AJRobbins (talk) 16:35, 21 May 2014 (UTC)

Examples in many systems

An editor has restored his link showing many systems doing one calculation without supporting it (which is his burden). The link does not add reasonable content; there is little doubt that different systems should compute the same answer; that they do it with slightly different presentations is not of interest here. Calling the examples a case study does not increase the value to WP. Furthermore, the article would not include such multiple examples; the link is not about something the article should have at some point in the future. Glrx (talk) 22:22, 25 November 2011 (UTC)

The link goes a long way in obviating the need to clutter the page with examples in many languages which often happens to pages like this. The case studied needs to be solved using the technique in question. How different languages present their implementations of APA is of interest to some readers of WP, and, furthermore, the links in WP articles can have merit in and of themselves and exist to show content not necessarily duplicated on the page or that should in future appear on the page.
"That they do it in slightly different presentations" is of great interest to those studying language design for example and such nuances are the proper content of an encyclopedia as well as being used on other wikipedia topics.
As for adding the link without supporting it, links are often added to wp with only an edit comment. It is standard practice. This link is in dispute and so is being discussed in the talk page. That one user is not interested in this aspect of the subject should not additionally taint the article. --Paddy (talk) 23:13, 25 November 2011 (UTC)

List of implementations

This article focuses more on implementations than APA. The list of implementations should be spun off to something like List of arbitrary-precision arithmetic implementations.

I would delete the Arbitrary-precision arithmetic#Pre-set precision section; it's about implementation choices, and it sounds more like extended precision than arbitrary precision.

The history section is computer-centric. The first algorithms (before computers) were about how to perform calculations -- stuff we were taught in grade school. You don't need a computer to do long division, but it does make sense to have a good trial divisor.

Knuth may have published The Art of Computer Programming (Vol. 2, 1969?) before White and Bill Gosper implemented bignums for Maclisp (the Maclisp manual is from 1979, but the implementation was earlier; arbitrary-precision arithmetic was needed for Macsyma in the late 1960s). William A. Martin's 1967 thesis may have used bignums.

There's nothing about rationals.

Glrx (talk) 23:00, 25 November 2011 (UTC)

 Done. I agree, that section has degenerated into listcruft; I have split it out into List of arbitrary-precision arithmetic software. -- intgr [talk] 19:26, 13 November 2014 (UTC)

Link to article about undergrad project

There have been continuing additions/reverts of C-Y Wang's paper.

  • C.-Y. Wang et al., Arithmetic Operations Beyond Floating Point Number Precision, International Journal of Computational Science and Engineering, 2011, Vol. 6, No. 3, pp. 206-215. [1]

The insertions have not argued the merit of the citation. This revert characterized the paper as a "Low relevance undergraduate project".

My talk page had the following discussion about this paper some time ago. Here is the text:

Please state the reasons that you removed the references for arbitrary precision after 2007 and kept only the original 2 references (without publication year)? Wiki said Please help improve this article by adding reliable references after 2007. The new papers cited reflect recent development and applications of arbitrary precision in scientific and engineering fields. —Preceding unsigned comment added by 220.128.221.62 (talk) 19:29, 9 May 2011 (UTC)
The refs in arbitrary-precision arithmetic were removed because they were not used / cited in the article. They are after-the-fact additions that were not used to compile the original article and do not have inline citations pointing to them.
The notice at the top of the article is not asking for references after 2007. It states that in 2007 the article was marked because its text did not have inline citations to references. Nothing requires those inline citations to be post 2007.
Wikipedia is not a how-to manual. WP:NOTHOWTO One of the references (one that appears to have a conflict of interest WP:COI with the editor) is a how-to exercise for a class. Some other references appear to have little relevance to the main article; the article is not, for example, interested in FPGA implementations.
Glrx (talk) 20:14, 9 May 2011 (UTC)
Dear Mr/Ms. Glrx,
(1) Are you the editor of the page on arbitrary precision? (2) The papers written by academic researchers and published in conference proceedings and journals, as in the new references, are very helpful to the wide readership in the world. (3) The texts can be edited, too, to reflect why those papers are necessary for readers with scientific and engineering backgrounds. Please read through all the papers before you brutally delete them. (4) You seem to have a bad history of removing other people's edits, violating the values of Wiki. (5) I strongly object to you being an editor, if you are. Have you not authored any journal or conference papers on the subject of arbitrary precision? If not, please leave space for other experts. —Preceding unsigned comment added by Yuehwang (talkcontribs) 21:27, 9 May 2011 (UTC)

My take is the various 220.128.221.x editors have a WP:COI with the paper. Glrx (talk) 23:12, 24 December 2011 (UTC)

BTW, 220.12.221.59 also did this edit. Glrx (talk) 23:26, 24 December 2011 (UTC)

my edit which was reverted

OK, it is correct that software/list entries which don't have any article can be listed in such lists/comparisons, but then please a) add an independent, third-party, reliable reference to show that this software exists and is notable, and b) remove those spammy external links! Wikipedia is not a directory! mabdul 21:09, 17 January 2012 (UTC)

Generally, I agree with the notion of severely trimming the lists of implementations. WP is not a directory even if the paper it is printed on is cheap. For a topic to be an article, it must be notable. Notability is not a requirement for including something in an article, but there are similar concerns. Generally, we should not be giving something WP:UNDUE attention or serving as its advertising agent. Glrx (talk) 21:22, 17 January 2012 (UTC)

Citation for usage in encryption

I quote: "A common application is public-key cryptography (such as that in every modern Web browser), whose algorithms commonly employ arithmetic with integers having hundreds or thousands of digits."

My understanding of encryption was that it actually didn't use this, and this statement sounds like an assumption. Please prove me wrong; I would very much like to find usage examples. DarkShroom (talk) 16:20, 30 March 2012 (UTC)

The RSA algorithm for public-key cryptography uses very large integers. A key size of 1024 bits is now considered weak. Researchers: 307-digit key crack endangers 1024-bit RSA Glrx (talk) 17:03, 31 March 2012 (UTC)
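
A toy illustration of the point, with deliberately tiny textbook numbers; a real 1024-bit modulus has over 300 decimal digits, but Python's bignum integers handle either size with the same code:

  # Classic textbook RSA example; utterly insecure sizes, same arithmetic.
  p, q = 61, 53
  n = p * q                    # public modulus (real keys: hundreds of digits)
  phi = (p - 1) * (q - 1)
  e = 17                       # public exponent
  d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)

  m = 42                       # the "message"
  c = pow(m, e, n)             # encrypt: modular exponentiation on big integers
  assert pow(c, d, n) == m     # decrypt recovers the message exactly
  print(n, c, d)               # 3233 2557 2753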

Old page with much information, for reference

Package, library name | Number type | Language | License
apfloat | Decimal floats, integers, rationals, and complex | Java and C++ | LGPL and Freeware
BeeCrypt Cryptography Library | Integers | Assembly, C, C++, Java | LGPL
ARPREC and MPFUN | Integers, binary floats, complex binary floats | C++ with C++ and Fortran bindings | BSD
Base One Number Class | Decimal floats | C++ | Proprietary
bbnum library | Integers and floats | Assembler and C++ | New BSD
phpseclib | Decimal floats | PHP | LGPL
BCMath Arbitrary Precision Mathematics | Decimal floats | PHP | PHP License
BigDigits | Naturals | C | Freeware [2]
BigFloat | Binary floats | C++ | GPL
BigNum | Binary integers, floats (with math functions) | C#, .NET | Freeware
C++ Big Integer Library | Integers | C++ | Public domain
Class Library for Numbers (CLN) | Integers, rationals, floats and complex | C and C++ | GPL
Computable Real Numbers | Reals | Common Lisp | (not listed)
dbl<n>, Git repo | n x 53 bits precision compact & fast floating point numbers (n=2,3,4,5) | C++ template | Proprietary or GPL
IMSL | (not listed) | C | Proprietary
decNumber | Decimals | C | ICU licence (MIT licence) [3]
FMLIB | Floats | Fortran | (not listed)
GNU Multi-Precision Library (and MPFR) | Integers, rationals and floats | C and C++ with bindings (GMPY, ...) | LGPL
MPCLI | Integers | C#, .NET | MIT
C# Bindings for MPIR (MPIR is a fork of the GNU Multi-Precision Library) | Integers, rationals and floats | C#, .NET | LGPL
GNU Multi-Precision Library for .NET | Integers | C#, .NET | LGPL
Eiffel Arbitrary Precision Mathematics Library | Integers | Eiffel | LGPL
HugeCalc | Integers | C++ and Assembler | Proprietary
IMath | Integers and rationals | C | MIT
IntX | Integers | C#, .NET | New BSD
JScience LargeInteger | Integers | Java | (not listed)
libgcrypt | Integers | C | LGPL
libmpdec (and cdecimal) | Decimals | C, C++ and Python | Simplified BSD
LibTomMath, Git repo | Integers | C and C++ | Public domain
LiDIA | Integers, floats, complex floats and rationals | C and C++ | Free for non-commercial use
MAPM | Integers and decimal floats | C (bindings for C++ and Lua) | Freeware
MIRACL | Integers and rationals | C and C++ | Free for non-commercial use
MPI | Integers | C | LGPL
MPArith | Integers, floats, and rationals | Pascal, Delphi | zlib
mpmath | Floats, complex floats | Python | New BSD
NTL | Integers, floats | C and C++ | GPL
bigInteger (and bigRational) | Integers and rationals | C and Seed7 | LGPL
TTMath library | Integers and binary floats | Assembler and C++ | New BSD
vecLib.framework | Integers | C | Proprietary
W3b.Sine | Decimal floats | C#, .NET | New BSD
Eiffel Arbitrary Precision Mathematics Library (GMP port) | Integers | Eiffel | LGPL
BigInt | Integers | JavaScript | Public domain
javascript-bignum | Scheme-compatible decimal integers, rationals, and complex | JavaScript | MIT
MathX | Integers, floats | C++ | Boost
ArbitraryPrecisionFloat | Floats (Decimals, Integer and Rational are built in) | Smalltalk | MIT
vlint | Integers | C++ | BSD
hapint | Integers | JavaScript | MIT or GPL

Terminology

An editor just added a comment about more extensive floating point packages with arbitrary precision that include trig functions.

I have no problem with the notion of arbitrary precision integers/bignums -- the integers take the space they need. The article explains how integer calculations can remain exact; division may require using rationals.

That cannot be the same notion with floating point and typical FP operations: sqrt(2) would immediately run out of space; π has the same problem. Arbitrary precision in the floating-point context would mean the user setting the sizes of the exponent and mantissa. For example, MPFR has a target precision. (And there's an awkward topic about how to compute sin(x) to a desired precision.)

The article never addresses the floating point issue. Symbolic algebra systems use symbols and rationals to avoid problems with division; they keep irrationals as symbolic quantities; the results are exact/have infinite precision. Floating point is not usually exact. The article mentions some packages have a pre-set precision. Even Fortran has REAL*8.

There should be a clear notion of arbitrary precision arithmetic. I distinguish between arbitrary precision (integers/rationals that grow as needed; unbounded size; results exact), multiple precision (integers; fixed upper bound set by user; bounded size; division has a remainder; results exact; may overflow), and extended precision (floating point where user sets limits on exponent and mantissa; bounded size; results inexact).
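
The distinction can be made concrete in Python; a sketch in which Fraction models the exact, grow-as-needed case and mpmath models the user-set-precision floating-point case:

  from fractions import Fraction
  from mpmath import mp, mpf, sqrt

  # Arbitrary precision in the exact sense: rationals grow as needed,
  # results are exact.
  print(Fraction(1, 3) + Fraction(1, 7))   # 10/21, no rounding anywhere

  # Extended/user-set precision: results are rounded, but to a tolerance the
  # user chooses rather than one fixed by the hardware format.
  mp.dps = 40                              # 40 significant decimal digits
  print(sqrt(mpf(2)))                      # inexact: sqrt(2) has no finite expansion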

I would restrict the current article to integers/rationals. Floating point belongs elsewhere.

It's not clear that some packages are appropriate even now (under the definition that states "calculations are performed on numbers which digits of precision are limited only by the available memory of the host system"):

  • MathX is described as a template library for FIXED length arithmetic types. (Predefined up to 8192 bits.)
  • TTCalc is not arbitrary precision: 1024 bit mantissa and 128 bit exponent.

Although I can accept a notion of "arbitrary" where the user can set the precision parameters to any value he chooses, I think that better fits the notion of multiple or extended precision.

Glrx (talk) 15:58, 12 September 2012 (UTC)

Second person

There are at least three uses of "we" that should be eliminated. Bubba73 You talkin' to me? 03:31, 16 June 2013 (UTC)

Claimed fact does not appear in reference

In Arbitrary-precision arithmetic#Applications, this phrase appears: "for example the √⅓ that appears in Gaussian integration." That article does not confirm this claimed fact. It does not appear to contain "√⅓" at all. David Spector (talk) 03:23, 8 September 2013 (UTC)

Huh? Try Gauss-Legendre (i.e. Gaussian quadrature with constant weight function) and N = 2. NickyMcLean (talk) 09:09, 16 November 2014 (UTC)
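
Concretely, the two-point Gauss-Legendre rule places its nodes at the roots of the Legendre polynomial P2(x) = (3x^2 - 1)/2, i.e. at ±√⅓; a quick check:

  import math

  # Nodes of 2-point Gauss-Legendre quadrature: roots of P2(x) = (3x^2 - 1)/2,
  # i.e. x = +/- sqrt(1/3) -- the sqrt(1/3) cited in the article.
  x = math.sqrt(1.0 / 3.0)
  print(x)              # 0.5773502691896257
  print(3 * x * x - 1)  # ~0 up to rounding (x is a root of 3x^2 - 1)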

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Arbitrary-precision arithmetic. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

As of February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete the "External links modified" sections if they want, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{sourcecheck}} (last update: 15 July 2018).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.


Cheers.—InternetArchiveBot (Report bug) 02:30, 17 October 2016 (UTC)