floatingPoint   556

Posit Arithmetic (PDF)
This document focuses on the details of posit mode, which can be thought of as “beating floats at their own game.” It introduces a type of unum, Type III, that combines many of the advantages of Type I and Type II unums, but is less radical and is designed as a “drop-in” replacement for an IEEE 754 Standard float with no changes needed to source code at the application level. The following table may help clarify the terminology.
floatingpoint  posits  unum 
10 days ago by euler
Intel 8087 - Wikipedia
The sales of the 8087 received a significant boost when IBM included a coprocessor socket on the IBM PC motherboard. Due to a shortage of chips, IBM did not actually offer the 8087 as an option for the PC until it had been on the market for six months. Development of the 8087 led to the IEEE 754-1985 standard for floating-point arithmetic. There were later x87 coprocessors for the 80186 (not used in PC-compatibles), 80286, 80386, and 80386SX processors. Starting with the 80486, later Intel x86 processors did not use a separate floating-point coprocessor; floating-point functions were integrated into the processor itself.
floatingpoint  posits  unum  8087 
10 days ago by euler
Unum (number format) - Wikipedia
Unums (universal numbers[1]) are an arithmetic and a binary representation format for real numbers, analogous to floating point, proposed by John Gustafson as an alternative to the now-ubiquitous IEEE 754 arithmetic. The first version of unums, now officially known as Type I unums, was introduced in his book The End of Error.[2] Gustafson has since created two newer revisions of the unum format, Type II and Type III, in late 2016. Type III unums are also known as posits[3][4][5] and valids; posits provide arithmetic for single real values, while valids provide the interval-arithmetic version. This data type can serve as a replacement for IEEE 754 floats in programs that do not depend on specific features of IEEE 754. Details of valids have yet to be officially articulated by Gustafson.
floatingpoint  posits  unum 
10 days ago by euler
New Approach Could Sink Floating Point Computation
We caught up with Gustafson at ISC19. For that particular crowd, the supercomputing set, one of the primary advantages of the posit format is that you can get more precision and dynamic range using fewer bits than IEEE 754 numbers. And not just a few fewer. Gustafson told us that a 32-bit posit can replace a 64-bit float in almost all cases, which would have profound implications for scientific computing. Cutting the number of bits in half not only reduces the amount of cache, memory, and storage needed to hold these values, but also substantially reduces the bandwidth needed to shuffle them to and from the processor. It’s the main reason why he thinks posit-based arithmetic would deliver a two-fold to four-fold speedup compared to IEEE floats.
floatingpoint  posits  unum 
10 days ago by euler
New Approach Could Sink Floating Point Computation
In 1985, the Institute of Electrical and Electronics Engineers (IEEE) established IEEE 754, a standard for floating point formats and arithmetic that
floatingpoint 
11 days ago by synergyfactor
New Approach Could Sink Floating Point Computation
In 1985, the Institute of Electrical and Electronics Engineers (IEEE) established IEEE 754, a standard for floating point formats and arithmetic that
math  programming  floatingPoint  computerScience  cpus  everynumber  IFTTT  Pocket  posits  via:HackerNews 
12 days ago by unconsidered
New Approach Could Sink Floating Point Computation
There is ACTUALLY a credible alternative to IEEE 754 for floating point.
floatingpoint  programming  computerscience  via:HackerNews 
12 days ago by mcherm
Floating point error is the least of my worries
Modeling error is usually several orders of magnitude greater than floating point error. People who nonchalantly model the real world and then sneer at floating point as just an approximation strain at gnats and swallow camels.
maths  modelling  computers  programming  floatingpoint 
4 weeks ago by pozorvlak
linux - bash + arithmetic calculation with bash - Unix & Linux Stack Exchange
Since bash's arithmetic expansion only handles integers (so division truncates), the proposed answer uses AWK, passing the values in as variables, with '/dev/null' as the input file and the math in a 'BEGIN' block; a minimal sketch follows this entry.
bash  awk  shellscripting  floatingpoint  math  tipsandtricks  solution 
6 weeks ago by kme
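For illustration, a minimal sketch of that AWK approach (the variable names and values here are made up for the example, not taken from the linked answer):

    a=7 b=2
    echo "$(( a / b ))"        # arithmetic expansion truncates: prints 3
    # awk receives the values via -v and does the math in a BEGIN block;
    # /dev/null stands in as the (unused) input file
    awk -v a="$a" -v b="$b" 'BEGIN { printf "%.3f\n", a / b }' /dev/null   # prints 3.500

Because all of the work happens in the BEGIN block, awk exits without processing any input; '/dev/null' simply gives it an empty file so it never waits on stdin.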

related tags

754  8087  aabb  accuracy  ai  algorithm  algorithms  analysis  appleii  arithmetic  arithmetics  article  asm  assembly  awk  bash  binary  bounding  bug  bvh  c++  c  calculation  cobol  collisiondetection  comparison  compsci  computation  computer  computers  computerscience  computing  cpu  cpus  cs  cuda  currency  dec64  decimal  deep  dev  development  discussion  division  diy  double  efficency  electronics  error  everynumber  everyprogrammershould  facebook  factcheck  fixedpoint  float  floating-point  floating  floating_point  floatingpoint  floats  format  formatting  fp  fpu  gamedev  gcc  gdc18  gpu  hackernews  history  howto  humor  iee754  ieee  ieee754  ifttt  integer  internals  intro  introduction  java  javascript  kahan  keepaneyeon  library  linux  linuxkernel  mantissa  math  mathematics  maths  modelling  money  number  numbers  numeric  numerical  numericalmath  numerics  numers  nvidia  opencl  optimization  performance  phd  pocket  posit  posits  precesion  precision  print  processors  programming  project  propagation  python  real  reference  representation  rocketry  rounding  rust  shellscripting  simd  software  solution  sse  sse2  stackoverflow  stevewozniak  teaching  testing  theory  tipsandtricks  tootme  tree  tutorial  unfit  unum  vintage  war  warfare  wasm  weapon  weaponry  wikipedia  windows  wsl 
