arb vs mpmath

| | arb | mpmath |
|---|---|---|
| Mentions | 11 | 10 |
| Stars | 457 | 911 |
| Growth | 0.9% | 1.0% |
| Activity | 2.2 | 9.1 |
| Latest commit | about 2 months ago | 3 days ago |
| Language | C | Python |
| License | GNU Lesser General Public License v3.0 only | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
arb
-
Patriot Missile floating-point software problem led to the deaths of 28 Americans
You can instead list your criteria for a good number format and evaluate the alternatives through those lenses. Floating point is designed for a good balance between dynamic range and precision, and the IEEE 754 binary formats can be seen as FP standards particularly optimized for numerical calculation.
There are several other FP formats. The most popular is IEEE 754 minus subnormal numbers, followed by bfloat16, the IEEE 754 decimal formats (formerly IEEE 854), and posits. Only the first two have good hardware support. The lack of subnormal numbers means, among other things, that `a <=> b` can no longer be rewritten as `a - b <=> 0`, but flushing them to zero is widely believed to be faster. (I don't fully agree, but it's indeed true for existing contemporary hardware.) The IEEE 754 decimal formats are notable for their lack of a normalization guarantee. Posits are, in some sense, what IEEE 754 would have been if designed today, and in fact aren't that fundamentally different from IEEE 754 in my opinion.
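The subnormal point is easy to check in any language with IEEE 754 doubles; a minimal Python sketch using the smallest positive subnormal, 2^-1074:

```python
# With gradual underflow (subnormals), a == b exactly when a - b == 0.
# A flush-to-zero mode breaks this equivalence for tiny values.
a = 2.0 ** -1074    # smallest positive subnormal double
b = 2.0 ** -1073    # twice that
assert a != b
diff = b - a        # exact result: equals a, itself a subnormal
assert diff != 0.0  # flush-to-zero would have returned 0.0 here
```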
Fixed-point formats share the pros and cons of finite-sized integers, so you should have no difficulty analyzing them. In short, they offer a smaller dynamic range than FP, but their truncation model is much simpler to reason about. In exchange you get varying precision and out-of-bound issues.
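As a sketch of that trade-off, here is a hypothetical Q16.16 fixed-point format in Python (all names illustrative): values are plain integers scaled by 2^16, and multiplication truncates the extra fractional bits.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # Q16.16: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * ONE)

def from_fixed(f: int) -> float:
    return f / ONE

def fmul(a: int, b: int) -> int:
    # the raw product carries 32 fractional bits; truncate back to 16
    return (a * b) >> FRAC_BITS

x = to_fixed(1.5)
y = to_fixed(2.25)
result = from_fixed(fmul(x, y))  # 3.375, exactly representable in Q16.16
```

Note the simple truncation model: the shift always rounds toward negative infinity, and values outside roughly ±32768 simply cannot be represented.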
Rational number formats look very promising at first, but they are much harder to implement efficiently. You need a fast GCD algorithm (not Euclidean) and also have to handle out-of-bound numerators and denominators. In fact, many rational number formats rely on arbitrary-precision integers precisely to avoid those issues, and so inherit the same problems---unbounded memory usage and computational overhead. Approximate rational number formats are much rarer; I'm only aware of Inigo Quilez's floating-bar experiment [1] in this space.
[1] https://iquilezles.org/articles/floatingbar/
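Python's stdlib `fractions.Fraction` makes the unbounded-growth problem easy to demonstrate: a few iterations of a simple map already produce a 16-digit denominator.

```python
from fractions import Fraction

# Exact rationals never round, but numerators and denominators grow
# fast: each step of the logistic map x -> r*x*(1-x) roughly squares
# the denominator unless cancellation happens to intervene.
x = Fraction(1, 3)
r = Fraction(7, 2)
for _ in range(5):
    x = r * x * (1 - x)

assert x.denominator == 3 ** 32  # already 16 digits after 5 steps
```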
Interval/ball/affine arithmetic and their relatives are means of automating error analysis. They have the good property of never being incorrect, but it is still quite easy for them to give up and return a correct but useless answer like [-inf, inf]. They are also somewhat awkward in a typical procedural paradigm because comparisons return a tri-state boolean (true, false, unsure). Nevertheless, they are often useful when used correctly. Fredrik Johansson's Arb [2] is a good starting point in my opinion.
[2] https://arblib.org/
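A toy interval type (outward rounding omitted for brevity, so this is a sketch rather than a correct implementation) shows both the mechanism and the classic "dependency problem" that produces uselessly wide answers:

```python
from dataclasses import dataclass

# Toy interval type; a real library would also round lo down and hi up.
@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

x = Interval(1.0, 2.0)
# Dependency problem: the subtraction forgets that both operands are
# the same variable, so x - x is [-1, 1] instead of the exact [0, 0].
print(x - x)  # Interval(lo=-1.0, hi=1.0)
```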
Finally, you can model a number as a function that returns successively more accurate approximations. This is called constructive or exact real arithmetic, and it is simultaneously the most expensive and the most correct option. One of its most glaring problems is that equality is not always decidable, and practical applications tend to employ various heuristics to work around this fact. Amazingly enough, Android's built-in calculator is one of the most widely used applications built on this model [3].
[3] https://dl.acm.org/doi/pdf/10.1145/2911981
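A toy sketch of this model, under the (assumed, illustrative) convention that a real number is a function mapping n to a rational within 2^-n of it:

```python
from fractions import Fraction
from math import isqrt

# Toy exact reals: a number is a function n -> rational within 2**-n.
def const(q):
    return lambda n: Fraction(q)

def add(x, y):
    # ask each operand for one extra bit so the sum stays within 2**-n
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    # floor(sqrt(2) * scale) / scale is within 1/scale <= 2**-n of sqrt(2)
    scale = 1 << (n + 1)
    return Fraction(isqrt(2 * scale * scale), scale)

x = add(sqrt2, const(-1))  # represents sqrt(2) - 1
approx = x(50)             # a rational within 2**-50 of sqrt(2) - 1
# Equality of two such numbers is only semi-decidable: approximations
# can refute x == y at some finite n, but can never confirm it.
```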
- Beyond Automatic Differentiation
-
Cosine Implementation in C
https://github.com/JuliaMath/Bessels.jl/blob/master/src/bess...
Thanks! I love it, so easy to understand and follow.
My favourite work on the subject is Fredrik Johansson's:
https://github.com/fredrik-johansson/arb
Whenever I feel down and without energy I just read something in there
-
Math with Significant Figures
Probably the most popular package for dealing with error propagation and arbitrary precision arithmetic in Python is mpmath, more specifically the mp.iv module. For more serious applications I'd take a look at MPFR and Arb, both in C. And there are tons of ball arithmetic and interval arithmetic libraries in Fortran.
-
Function betrayal
You're in good company too. Using intervals to bound error is the entire idea behind the arb library.
-
What are some best practices in dealing with precision errors in computing?
The error bounds approach is probably what you’re looking for. A better search term for that is “interval arithmetic.” There are many good software packages for interval arithmetic, like Arb.
-
Numeric equality
I do agree with your list, so that is something! I will add, balls are underrated, ditto intervals (nominally more efficient, but on x86 switching rounding modes is 20-30 cycles...)
-
Cutting-edge research on numerical representations?
Ball arithmetic looks interesting. As far as I know, arb is the primary implementation.
- Is there a language which can keep track of the potential epsilon error when doing calculations?
- Beware of Fast-Math
mpmath
- mpmath – Python library for arbitrary-precision floating-point arithmetic
-
Lies My Calculator and Computer Told Me [pdf]
What you've done here is tell SymPy to use extra precision for the intermediate (and final) output. This doesn't truly fix the problem of cancellation and loss of precision, but for many practical purposes it can postpone the problem long enough to give you a useful result.
Internally, SymPy uses mpmath (https://mpmath.org/) for representation of numbers to arbitrary precision. You could install and use the latter library directly, gaining extra precision without going through symbolic manipulation.
All that being said, it's still good practice to avoid loss of precision at the outset. Arbitrary-precision calculations are slow compared to hardware-native floating point operations. Using the example from mpmath's homepage in iPython:
In [1]: import mpmath as mp; import scipy as sp; import numpy as np
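A stdlib-only sketch of the same point, using `decimal` to postpone catastrophic cancellation:

```python
from decimal import Decimal, getcontext

# Catastrophic cancellation in hardware doubles: the 1e-20 is lost.
x = (1.0 + 1e-20) - 1.0
assert x == 0.0

# Raising the working precision postpones the loss long enough:
getcontext().prec = 50
y = (Decimal(1) + Decimal("1e-20")) - Decimal(1)
assert y == Decimal("1e-20")
```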
-
mpmath VS gmpy - a user suggested alternative
2 projects | 2 Aug 2023
-
How can I compute the Mandelbrot Set at infinite zoom level
Either use a fixed point system with enough precision (determined beforehand) or consider a library like https://mpmath.org.
- How do I get more decimal places for numbers in Python?
-
My function isn't working correctly
Sure you can, check out projects for high precision numbers like https://mpmath.org/
- How to preserve decimal places
-
Integrating an extremely oscillating function!
Elaborating on /u/lanemik: if you're forced to do everything numerically and aren't able to use rationals, you can also use multiple-precision arithmetic. It's significantly slower, but it's as precise as you need it to be. Note that numpy will happily work with other objects that define arithmetic operations. I haven't messed with scipy enough to know how it does things.
- Where can I find advanced (hardest) Python projects with source code?
What are some alternatives?
Arblib.jl - Thin, efficient wrapper around Arb library (http://arblib.org/)
NumPy - The fundamental package for scientific computing with Python.
calc - C-style arbitrary precision calculator
SigFigs - Implementation of a Sigfig class and an Exact class that allow math to be done while keeping the correct number of significant digits.
MultiFloats.jl - Fast, SIMD-accelerated extended-precision arithmetic for Julia
gmpy - General Multi-Precision arithmetic for Python 2.6+/3+ (GMP, MPIR, MPFR, MPC)
tiny-bignum-c - Small portable multiple-precision unsigned integer arithmetic in C
SciPy - SciPy library main repository
The-RLIBM-Project - A combined repository for all RLIBM prototypes
number-precision - 🚀1K tiny & fast lib for doing addition, subtraction, multiplication and division operations precisely
SuiteSparse - SuiteSparse: a suite of sparse matrix packages by @DrTimothyAldenDavis et al. with native CMake support
SymPy - A computer algebra system written in pure Python