nhaliday + approximation   55

exponential function - Feynman's Trick for Approximating $e^x$ - Mathematics Stack Exchange
1. e^2.3 ~ 10
2. e^.7 ~ 2
3. e^x ~ 1+x
e = 2.71828...

errors (absolute, relative):
1. +0.0258, 0.26%
2. -0.0138, -0.68%
3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
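These error figures are easy to verify numerically; a quick sketch:

```python
import math

# Rules 1 and 2: the two anchor values.
for x, guess in [(2.3, 10.0), (0.7, 2.0)]:
    abs_err = guess - math.exp(x)
    rel_err = abs_err / math.exp(x)
    print(f"e^{x} ~ {guess}: abs err {abs_err:+.4f}, rel err {rel_err:+.2%}")

# Rule 3: worst absolute error of 1 + x on [-0.3, 0.3].
grid = [i / 1000 for i in range(-300, 301)]
worst = max(abs((1 + x) - math.exp(x)) for x in grid)
print(f"max |(1+x) - e^x| on [-0.3, 0.3]: {worst:.4f}")
```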
nibble  q-n-a  overflow  math  feynman  giants  mental-math  calculation  multiplicative  AMT  identity  objektbuch  explanation  howto  estimate  street-fighting  stories  approximation  data  trivia  nitty-gritty
26 days ago by nhaliday
Errors in Math Functions (The GNU C Library)
https://stackoverflow.com/questions/22259537/guaranteed-precision-of-sqrt-function-in-c-c
For C99, there are no specific requirements. But most implementations try to support Annex F (IEC 60559 floating-point arithmetic) as well as possible. It says:

An implementation that defines __STDC_IEC_559__ shall conform to the specifications in this annex.

And:

The sqrt functions in <math.h> provide the IEC 60559 square root operation.

IEC 60559 (equivalent to IEEE 754) says about basic operations like sqrt:

Except for binary <-> decimal conversion, each of the operations shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then coerced this intermediate result to fit in the destination's format.

The final step consists of rounding according to several rounding modes but the result must always be the closest representable value in the target precision.
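That "closest representable value" guarantee can be checked empirically with exact rational arithmetic: the true square root must lie between the midpoints separating the returned double from its neighboring doubles. A sketch (Python 3.9+ for `math.nextafter`; the ties-to-even tie-break is accepted either way by the inequality):

```python
import math
from fractions import Fraction

def is_correctly_rounded_sqrt(x: float) -> bool:
    """True if math.sqrt(x) is the double closest to the exact sqrt(x).
    Checked in exact rational arithmetic: the true root must lie between
    the midpoints separating y from its neighboring doubles."""
    y = math.sqrt(x)
    lo = math.nextafter(y, -math.inf)
    hi = math.nextafter(y, math.inf)
    mid_lo = (Fraction(lo) + Fraction(y)) / 2
    mid_hi = (Fraction(y) + Fraction(hi)) / 2
    # sqrt(x) in [mid_lo, mid_hi]  <=>  mid_lo^2 <= x <= mid_hi^2
    return mid_lo ** 2 <= Fraction(x) <= mid_hi ** 2

print(all(is_correctly_rounded_sqrt(float(n)) for n in range(1, 1000)))
```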

[ed.: The list of other such correctly rounded functions is included in the IEEE-754 standard (which I've put w/ the C1x and C++2x standard drafts) under section 9.2, and it mainly consists of stuff that can be expressed in terms of exponentials (exp, log, trig functions, powers) along w/ sqrt/hypot functions.

Fun fact: this question was asked by Yeputons, who has a Codeforces profile.]
https://stackoverflow.com/questions/20945815/math-precision-requirements-of-c-and-c-standard
oss  libraries  systems  c(pp)  numerics  documentation  objektbuch  list  linux  unix  multi  q-n-a  stackex  programming  nitty-gritty  sci-comp  accuracy  types  approximation  IEEE  protocol-metadata  gnu
july 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered as esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
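A minimal illustration of that last point: no single epsilon is right across magnitudes, which is why a fixed-epsilon comparison is a non-answer.

```python
import math

a, b = 1e16, 1e16 + 2.0        # adjacent doubles at this magnitude (ulp = 2)
print(abs(a - b) < 1e-9)                  # fixed epsilon says "different"...
print(math.isclose(a, b, rel_tol=1e-9))   # ...relative tolerance says "equal"

c, d = 1e-20, 3e-20            # differ by a factor of three,
print(abs(c - d) < 1e-9)       # yet the same fixed epsilon says "equal"
```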

...

Regarding the propagation of rounding errors, there exists specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
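A minimal sketch of compensated summation as described (the naive loop is spelled out because CPython's built-in `sum` is itself compensated in recent versions):

```python
def naive_sum(xs):
    total = 0.0
    for x in xs:
        total += x          # low-order bits of a small x are discarded
    return total

def kahan_sum(xs):
    total, c = 0.0, 0.0     # c compensates for lost low-order bits
    for x in xs:
        y = x - c           # re-inject the error from the last step
        t = total + y
        c = (t - total) - y # algebraically 0; in floats, the lost part of y
        total = t
    return total

# A million tiny increments that a running float total cannot see:
xs = [1.0] + [1e-16] * 10**6    # true sum = 1 + 1e-10
print(naive_sum(xs), kahan_sum(xs))
```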

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]
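A sketch of pairwise summation as described, with the usual small-block base case (the `block` cutoff of 32 is an arbitrary choice here):

```python
import math
import random

def pairwise_sum(xs, block=32):
    """Divide-and-conquer summation: O(eps log n) worst-case roundoff
    growth, vs O(eps n) for a plain sequential loop."""
    n = len(xs)
    if n <= block:           # base case: plain accumulation
        total = 0.0
        for x in xs:
            total += x
        return total
    mid = n // 2             # split, sum each half, add the two sums
    return pairwise_sum(xs[:mid], block) + pairwise_sum(xs[mid:], block)

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(10**5)]
# math.fsum is exactly rounded, so it serves as the reference value
print(abs(pairwise_sum(xs) - math.fsum(xs)))
```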

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula: this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n) or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
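A sketch of that two-table scheme (the table sizes and the choice n = 4096 are illustrative, not FFTW's actual internals):

```python
import cmath
import math

def make_tables(n):
    """Two twiddle tables of ~sqrt(n) entries each, replacing one table
    of n entries. Assumes n is a power of two."""
    lo_bits = int(math.log2(n)) // 2
    lo = [cmath.exp(-2j * math.pi * k / n) for k in range(1 << lo_bits)]
    hi = [cmath.exp(-2j * math.pi * (k << lo_bits) / n)
          for k in range(n >> lo_bits)]
    return lo, hi, lo_bits

def twiddle(k, lo, hi, lo_bits):
    # w_n^k = w_n^(k_hi << lo_bits) * w_n^(k_lo): one complex multiply
    return hi[k >> lo_bits] * lo[k & ((1 << lo_bits) - 1)]

n = 4096
lo, hi, lo_bits = make_tables(n)
worst = max(abs(twiddle(k, lo, hi, lo_bits) - cmath.exp(-2j * math.pi * k / n))
            for k in range(n))
print(worst)   # each entry is accurate to a few ulps
```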

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia
may 2019 by nhaliday
Reconsidering epistemological scepticism – Dividuals
I blogged before about how I consider an epistemological scepticism fully compatible with being conservative/reactionary. By epistemological scepticism I mean the worldview where concepts, categories, names, classes aren’t considered real, just useful ways to categorize phenomena: entirely mental constructs, basically just tools. I think you can call this nominalism as well; the nominalism–realism debate was certainly about this. What follows is the pro-empirical worldview where logic and reasoning are considered highly fallible: hence you don’t think and don’t argue too much, you actually look and check things instead. You rely on experience, not reasoning.

...

Anyhow, the argument is that there are classes, which are indeed artificial, and there are kinds, which are products of natural forces, products of causality.

...

And the deeper – Darwinian – argument, unspoken but obvious, is that any being with a model of reality that does not conform to such real clumps, gets eaten by a grue.

This is impressive. It seems I have to extend my one-variable epistemology to a two-variable epistemology.

My former epistemology was that we generally categorize things according to their uses or dangers for us. So “chair” is – very roughly – defined as “anything we can sit on”. Similarly, we can categorize “predator” as “something that eats us or the animals that are useful for us”.

The unspoken argument against this is that the universe or the biosphere exists neither for us nor against us. A fox can eat your rabbits and a lion can eat you, but they don’t exist just for the sake of making your life difficult.

Hence, if you interpret phenomena only from the viewpoint of their uses or dangers for humans, you get only half the picture right. The other half is what it really is and where it came from.

Copying is everything: https://dividuals.wordpress.com/2015/12/14/copying-is-everything/
Philosophy professor Ruth Millikan’s insight that everything that gets copied from an ancestor has a proper function or teleofunction: it is whatever feature or function that made it and its ancestor selected for copying, in competition with all the other similar copiable things. This would mean Aristotelean teleology is correct within the field of copyable things, replicators, i.e. within biology, although in physics still obviously incorrect.

Darwinian Reactionary drew attention to it two years ago and I still don’t understand why it didn’t generate a bigger buzz. It is an extremely important insight.

I mean, this is what we were waiting for: a proper synthesis of science and philosophy, and a proper way to rescue Aristotelean teleology, which leads to such excellent common-sense predictions that intuitively it cannot be very wrong, yet modern philosophy always denied it.

The result of that is the bridging of the fact–value gap and the burying of the naturalistic fallacy: we CAN derive values from facts: a thing is good if it is well suited for its natural purpose, teleofunction or proper function, which is the purpose it was selected for and copied for: the purpose and the suitability for the purpose that made the ancestors of this thing selected for copying, instead of all the other potential, similar ancestors.

...

What was humankind selected for? I am afraid, the answer is kind of ugly.

Men were selected to compete between groups, to cooperate within groups largely for coordinating for the sake of this competition, and to have a low-key competition inside the groups as well for status and leadership. I am afraid intelligence is all about organizing elaborate tribal raids: “coalitionary arms races”. The most civilized, least brutal but still expensive case is arms races in prestige status, not dominance status: when Ancient Athens built pretty buildings, modern France built the TGV and America sent a man to the Moon in order to gain “gloire”, i.e. the prestige type of respect and status amongst the nations, the larger groups of mankind. If you are the type who doesn’t like blood, you should probably focus on these kinds of civilized, prestige-project competitions.

Women were selected for bearing children; for having strong and intelligent sons, and therefore having these heritable traits themselves (HBD kind of contradicts the more radically anti-woman aspects of RedPillery: marry a weak and stupid but attractive silly-blondie type of woman and your sons won’t be that great either); for pleasuring men; and, in some rarer but existing cases, to be true companions and helpers of their husbands.

https://en.wikipedia.org/wiki/Four_causes
- Matter: a change or movement's material cause, is the aspect of the change or movement which is determined by the material that composes the moving or changing things. For a table, that might be wood; for a statue, that might be bronze or marble.
- Form: a change or movement's formal cause, is a change or movement caused by the arrangement, shape or appearance of the thing changing or moving. Aristotle says for example that the ratio 2:1, and number in general, is the cause of the octave.
- Agent: a change or movement's efficient or moving cause, consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a boy is a father.
- End or purpose: a change or movement's final cause, is that for the sake of which a thing is what it is. For a seed, it might be an adult plant. For a sailboat, it might be sailing. For a ball at the top of a ramp, it might be coming to rest at the bottom.

https://en.wikipedia.org/wiki/Proximate_and_ultimate_causation
A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the "real" reason something occurred.

...

- Ultimate causation explains traits in terms of evolutionary forces acting on them.
- Proximate causation explains biological function in terms of immediate physiological or environmental factors.
gnon  philosophy  ideology  thinking  conceptual-vocab  forms-instances  realness  analytical-holistic  bio  evolution  telos-atelos  distribution  nature  coarse-fine  epistemic  intricacy  is-ought  values  duplication  nihil  the-classics  big-peeps  darwinian  deep-materialism  selection  equilibrium  subjective-objective  models  classification  smoothness  discrete  schelling  optimization  approximation  comparison  multi  peace-violence  war  coalitions  status  s-factor  fashun  reputation  civilization  intelligence  competition  leadership  cooperate-defect  within-without  within-group  group-level  homo-hetero  new-religion  causation  direct-indirect  ends-means  metabuch  physics  axioms  skeleton  wiki  reference  concept  being-becoming  essence-existence  logos  real-nominal
july 2018 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent  org:popup
december 2017 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search to a local one: basically we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: at step n we have the first n items in a locally Kemeny optimal order. Swap the (n+1)th item backwards until the majority thinks its predecessor is < it. This ensures all adjacent pairs are in the majority order, so swapping them would result in a greater than or equal K. This is of course an O(n^2) algorithm. In fact, the problem of merely finding a locally Kemeny optimal solution can be done in O(n log(n)) (for much the same reason as you can sort better than insertion sort). You just take the directed graph of majority votes and find a Hamiltonian path. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
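The insertion-sort procedure described can be sketched directly; `majority_prefers` and the toy `votes` below are illustrative stand-ins for the pairwise vote tally:

```python
def local_kemeny(items, majority_prefers):
    """Insertion sort under the (possibly non-transitive) majority
    relation: on exit every adjacent pair is in majority order, i.e.
    the ranking is locally Kemeny optimal."""
    order = []
    for item in items:
        order.append(item)
        i = len(order) - 1
        # swap backwards until the predecessor beats the new item
        while i > 0 and majority_prefers(order[i], order[i - 1]):
            order[i - 1], order[i] = order[i], order[i - 1]
            i -= 1
    return order

# toy election: three voters ranking candidates 0, 1, 2
votes = [[0, 1, 2], [1, 2, 0], [0, 2, 1]]

def prefers(a, b):
    return sum(v.index(a) < v.index(b) for v in votes) * 2 > len(votes)

print(local_kemeny([2, 1, 0], prefers))
```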
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble
september 2017 by nhaliday
Lecture 14: When's that meteor arriving
- Meteors as a random process
- Limiting approximations
- Derivation of the Exponential distribution
- Derivation of the Poisson distribution
- A "Poisson process"
nibble  org:junk  org:edu  exposition  lecture-notes  physics  mechanics  space  earth  probability  stats  distribution  stochastic-processes  closure  additive  limits  approximation  tidbits  acm  binomial  multiplicative
september 2017 by nhaliday
Constitutive equation - Wikipedia
In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or forces to strains or deformations.

Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior.[1] See the article Linear response function.
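The scalar-to-tensor generalization in the last paragraph, sketched for Ohm's law J = σE (the numbers are illustrative):

```python
def matvec(sigma, E):
    """Apply a rank-2 tensor (3x3 matrix) to a field vector."""
    return [sum(s * e for s, e in zip(row, E)) for row in sigma]

E = [1.0, 0.0, 0.0]          # applied field along x

# Isotropic material: conductivity is a scalar, J is parallel to E.
J_iso = [2.0 * e for e in E]

# Anisotropic material: conductivity is a rank-2 tensor; off-diagonal
# terms give a current component transverse to the applied field.
sigma = [[2.0, 0.5, 0.0],
         [0.5, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
J_aniso = matvec(sigma, E)
print(J_iso, J_aniso)
```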
nibble  wiki  reference  article  physics  mechanics  electromag  identity  estimate  approximation  empirical  stylized-facts  list  dirty-hands  fluid  logos
august 2017 by nhaliday
Diophantine approximation - Wikipedia
- rationals: perfectly approximated by themselves, badly approximated (error > 1/(bq) for a rational a/b) by all other rationals p/q
- irrationals: well-approximated (error ~ 1/q^2) by rationals:
https://en.wikipedia.org/wiki/Dirichlet%27s_approximation_theorem
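Continued-fraction convergents witness Dirichlet's theorem constructively: every convergent p/q satisfies |x − p/q| < 1/q². A sketch:

```python
import math
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents p/q of x; each satisfies
    |x - p/q| < 1/q^2, witnessing Dirichlet's theorem."""
    p0, q0 = 1, 0
    p1, q1 = int(x), 1
    out = [Fraction(p1, q1)]
    frac = x - int(x)
    for _ in range(n - 1):
        if frac == 0:        # x was rational; expansion terminates
            break
        x = 1 / frac
        a = int(x)           # next continued-fraction digit
        frac = x - a
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out

for c in convergents(math.sqrt(2), 6):
    print(c, abs(math.sqrt(2) - c) < 1 / c.denominator**2)
```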
nibble  wiki  reference  math  math.NT  approximation  accuracy  levers  pigeonhole-markov  multi  tidbits  discrete  rounding  estimate  tightness  algebra
august 2017 by nhaliday
Evolution of Virulence | West Hunter
Once upon a time, I thought a lot about evolution and pathogens. I still do, on occasion.

It used to be the case [and still is] that many biologists thought that natural selection would inevitably tend towards a situation in which pathogens did infinitesimal harm to their host. This despite the epidemics all around them. I remember reading a book on parasitology in which the gormless author mentioned a certain species of parasitic copepod that routinely blinded the fish they attached to. He said that many a naive grad student would think that these parasitic copepods were bad for the fish, but sophisticated evolutionists like himself knew (and would explain to the newbies) that of course the fish didn’t suffer any reduction in fitness by going blind – theory said so! Clearly, that man had a Ph.D.

If a pathogen can gain increased reproduction by tapping host resources, or by doing any damn thing that helps itself and hurts the host, that tactic may pay, and be selected for. It depends on the balance between the advantages and costs – almost entirely those to the pathogen, since the pathogen evolves much more rapidly than the host. In some cases, as much as a million times faster – because of generations that may be 20 minutes long rather than 20 years, because pathogens often have very large populations, which favors Fisherian acceleration, and in many cases a relatively high mutation rate. Pathogen evolution is, in at least some cases, so rapid that you see significant evolutionary change within a single host. Along the same lines, we have seen very significant evolutionary changes in antibiotic resistance among pathogenic bacteria over the past few decades, but I’m pretty sure that there hasn’t been much evolutionary change in mankind since I was a kid.

So when analyzing virulence, people mostly consider evolutionary pressures on the pathogens, rather than the host. Something like the Born-Oppenheimer approximation.
west-hunter  bio  disease  parasites-microbiome  red-queen  thinking  incentives  evolution  🌞  deep-materialism  discussion  mutation  selection  time  immune  scitariat  maxim-gun  cooperate-defect  ideas  anthropic  is-ought  gender  gender-diff  scale  magnitude  stylized-facts  approximation  analogy  comparison  pro-rata
april 2017 by nhaliday
Mean field theory - Wikipedia
In physics and probability theory, mean field theory (MFT also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
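The classic instance is the mean-field Ising model, where the averaged effect of the neighbors reduces the many-body problem to one self-consistency equation, m = tanh(βJzm); a sketch solving it by fixed-point iteration:

```python
import math

def mean_field_magnetization(beta_J_z, tol=1e-12):
    """Fixed-point iteration for the mean-field self-consistency
    equation m = tanh(beta * J * z * m); beta_J_z is that product
    (inverse temperature x coupling x coordination number)."""
    m = 1.0                       # start from the fully ordered state
    while True:
        m_new = math.tanh(beta_J_z * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new

# Below the mean-field critical point beta*J*z = 1 the magnetization
# iterates to 0; above it, a spontaneous magnetization survives.
print(mean_field_magnetization(0.5))
print(mean_field_magnetization(2.0))
```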
concept  atoms  models  physics  stat-mech  ising  approximation  parsimony  wiki  reference  nibble
march 2017 by nhaliday
Equivalence between counting and sampling
also: every counting problem either has FPTRAS or no approx. w/i polynomial factor
pdf  exposition  lecture-notes  berkeley  nibble  tcs  counting  sampling  characterization  complexity  approximation  rand-approx  proofs
february 2017 by nhaliday
Count–min sketch - Wikipedia
- estimates frequency vector (f_i)
- idea:
d = O(log 1/δ) hash functions h_j: [n] -> [w] (w = O(1/ε))
d*w counters a[r, c]
for each event i, increment counters a[1, h_1(i)], a[2, h_2(i)], ..., a[d, h_d(i)]
estimate for f_i is min_j a[j, h_j(i)]
- never underestimates but upward-biased
- pf: Markov to get constant probability of success, then exponential decrease with repetition
lecture notes: http://theory.stanford.edu/~tim/s15/l/l2.pdf
- note this can work w/ negative updates. just use median instead of min. pf still uses markov on the absolute value of error.
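The notes above, sketched in code (the min version with non-negative updates; salted `hash` stands in for the pairwise-independent hash family a real implementation would use):

```python
import random
from collections import Counter

class CountMinSketch:
    """d rows of w counters; estimate = min over rows. Never
    underestimates a frequency, but collisions bias it upward."""
    def __init__(self, w, d, seed=0):
        rng = random.Random(seed)
        self.w = w
        self.table = [[0] * w for _ in range(d)]
        # salts standing in for d independent hash functions h_j: [n] -> [w]
        self.salts = [rng.getrandbits(64) for _ in range(d)]

    def _cell(self, j, x):
        return hash((self.salts[j], x)) % self.w

    def add(self, x, count=1):
        # for event x, increment one counter in each of the d rows
        for j in range(len(self.salts)):
            self.table[j][self._cell(j, x)] += count

    def estimate(self, x):
        return min(row[self._cell(j, x)] for j, row in enumerate(self.table))

rng = random.Random(1)
stream = [rng.choice("abcdefgh") for _ in range(10_000)]
true = Counter(stream)
cms = CountMinSketch(w=64, d=4)
for x in stream:
    cms.add(x)
print({x: (true[x], cms.estimate(x)) for x in sorted(true)})
```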
algorithms  data-structures  sublinear  hashing  wiki  reference  bias-variance  approximation  random  tcs  multi  stanford  lecture-notes  pdf  tim-roughgarden  nibble  pigeonhole-markov  PAC
february 2017 by nhaliday
Lecture 16
In which we define a multi-commodity flow problem, and we see that its dual is the relaxation of a useful graph partitioning problem. The relaxation can be rounded to yield an approximate graph partitioning algorithm.
pdf  lecture-notes  exposition  optimization  linear-programming  graphs  graph-theory  algorithms  duality  rounding  stanford  approximation  rand-approx  luca-trevisan  relaxation  nibble  stock-flow  constraint-satisfaction  tcs  tcstariat
january 2017 by nhaliday
What Chinese corner-cutting reveals about modernity | Aeon Essays
Your balcony fell off? Chabuduo. Vaccines are overheated? Chabuduo. How China became the land of disastrous corner-cutting

The copy is the original: https://aeon.co/essays/why-in-china-and-japan-a-copy-is-just-as-good-as-an-original
In China and Japan, temples may be rebuilt and ancient warriors cast again. There is nothing sacred about the ‘original’
news  org:mag  culture  china  business  institutions  asia  analytical-holistic  sinosphere  org:popup  n-factor  approximation  heavy-industry  speedometer  dirty-hands  quality  tightness  discipline  multi  japan  pop-diff  cultural-dynamics  innovation  creative  explanans  values  duplication  sanctity-degradation  europe  orient  occident  the-great-west-whale  religion  christianity  buddhism  morality  ethics  cycles  forms-instances  apollonian-dionysian  being-becoming  essence-existence
december 2016 by nhaliday
