Kalman exercise

10 weeks ago by mike

A quick explanation of how a Kalman filter works, written to make sure I understood it and don't forget it, and for others to learn from.
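The linked post presumably develops the filter properly; as a reminder of the core loop, here is a minimal 1D sketch (constant hidden state observed with noise; the function name and the noise parameters q and r are illustrative, not taken from the post):

```python
def kalman_1d(measurements, q=1e-4, r=0.09, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state observed with noise.

    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: the state model is "constant", so the estimate stays
        # put and only its uncertainty grows by the process noise
        p = p + q
        # update: the Kalman gain blends prediction and measurement,
        # trusting the measurement more when p is large relative to r
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Feeding it noisy readings of a constant drives the estimate toward that constant while the variance p settles to a small steady-state value.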

math
statistics
timeseries

Why Is It Taking 20 Minutes to Mine This Bitcoin Block?

10 weeks ago by mike

Poisson and gamma distributions and the hitchhiker's paradox
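The paradox in question (also known as the inspection or waiting-time paradox) is easy to check numerically: block interarrival times are roughly exponential with mean 10 minutes, yet the interval containing a randomly chosen moment averages about 20 minutes, because a random instant is more likely to land inside a long gap. A quick simulation (parameters illustrative):

```python
import bisect
import random

random.seed(42)
MEAN = 10.0                      # mean gap between blocks, in minutes

# simulate a long run of exponential interarrival times
gaps = [random.expovariate(1 / MEAN) for _ in range(100_000)]
ends = []
total = 0.0
for g in gaps:
    total += g
    ends.append(total)           # cumulative block times

# drop in at random moments and measure the gap we landed inside
samples = []
for _ in range(10_000):
    t = random.uniform(0, total)
    i = min(bisect.bisect_right(ends, t), len(gaps) - 1)
    samples.append(gaps[i])

observed = sum(samples) / len(samples)   # close to 2 * MEAN, not MEAN
```

Long gaps occupy more of the timeline, so sampling by time is length-biased: for exponential gaps the interval you land in is Gamma-distributed with mean exactly twice the plain mean.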

probability
math
cryptocurrency
bitcoin

The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm – Math ∩ Programming

10 weeks ago by mike

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of objects, and each object has a weight.

Set all the object weights to be 1.

For some large number of rounds:

Pick an object at random proportionally to the weights

Some event happens

Increase the weight of the chosen object if it does well in the event

Otherwise decrease the weight
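The six lines above translate almost directly into Python; the function name, the did_well callback, and the learning rate eta below are placeholders you would fill in for a concrete game:

```python
import random

def mwua(weights, n_rounds, did_well, eta=0.5):
    """Multiplicative weights update, following the pseudocode above.

    did_well(i) reports whether object i succeeded in this round's event;
    eta controls how aggressively weights are adjusted.
    """
    for _ in range(n_rounds):
        # pick an object at random proportionally to the weights
        i = random.choices(range(len(weights)), weights=weights)[0]
        # some event happens, and we observe how the chosen object did
        if did_well(i):
            weights[i] *= 1 + eta   # increase the weight of the chosen object
        else:
            weights[i] *= 1 - eta   # otherwise decrease the weight
    return weights
```

Run against an event where one object consistently does well, its weight grows geometrically and it ends up chosen almost every round.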

algorithms
math
linearprogramming
gradientdescent
gametheory

Beautiful differentiation

november 2017 by mike

Automatic differentiation (AD) is a precise, efficient, and convenient method for computing derivatives of functions. Its forward-mode implementation can be quite simple even when extended to compute all of the higher-order derivatives as well. The higher-dimensional case has also been tackled, though with extra complexity. This paper develops an implementation of higher-dimensional, higher-order, forward-mode AD in the extremely general and elegant setting of calculus on manifolds and derives that implementation from a simple and precise specification.

In order to motivate and discover the implementation, the paper poses the question “What does AD mean, independently of implementation?” An answer arises in the form of naturality of sampling a function and its derivative. Automatic differentiation flows out of this naturality condition, together with the chain rule. Graduating from first-order to higher-order AD corresponds to sampling all derivatives instead of just one. Next, the setting is expanded to arbitrary vector spaces, in which derivative values are linear maps. The specification of AD adapts to this elegant and very general setting, which even simplifies the development.
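Forward-mode AD is often easiest to see with dual numbers: carry a value together with its derivative and let the sum and product rules propagate. A minimal Python sketch of that idea (not the paper's Haskell development, and only first-order):

```python
class Dual:
    """A number paired with its derivative: val + eps * d, where d**2 = 0."""

    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def _lift(x):
        # promote plain numbers to constants (derivative zero)
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, other):
        o = Dual._lift(other)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__

    def __mul__(self, other):
        o = Dual._lift(other)
        # product rule: (f * g)' = f' * g + f * g'
        return Dual(self.val * o.val, self.eps * o.val + self.val * o.eps)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with derivative seed 1 and read off the eps part."""
    return f(Dual(x, 1.0)).eps
```

Higher-order and higher-dimensional AD, as in the paper, amount to carrying richer derivative objects (towers of derivatives, or linear maps) in place of the single eps coordinate.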

haskell
math
programming

Floating Point Demystified, Part 1

november 2017 by mike

To turn things around, think about time_t. time_t is a type defined to represent the number of seconds since the epoch of 1970-01-01 00:00 UTC. It has traditionally been defined as a 32-bit signed integer (which means that it will overflow in the year 2038). Imagine that a 32-bit single-precision float had been chosen instead.

With a float time_t, there would be no overflow until the year 5395141535403007094485264579465 AD, long after the Sun has swallowed up the Earth as a Red Giant, and turned into a Black Dwarf. However! With this scheme the granularity of timekeeping would get worse and worse the farther we got from 1970. Unlike the int32 which gives second granularity all the way until 2038, with a float time_t we would already in 2014 be down to a precision of 128 seconds — far too coarse to be useful.
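The 128-second figure falls straight out of the float32 format: a single-precision float has a 24-bit significand, and epoch seconds in 2014 (about 1.4 × 10⁹) sit between 2³⁰ and 2³¹, so neighboring representable values are 2³⁰⁻²³ = 128 apart. NumPy can confirm the spacing:

```python
import numpy as np

# seconds since 1970-01-01 00:00 UTC for a date in mid-2014
t_2014 = np.float32(1.4e9)

# spacing() gives the gap to the next representable float32 --
# the best granularity a float time_t could offer in 2014
gap = np.spacing(t_2014)       # 128.0 seconds
```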

math
programming

Hack the derivative

november 2017 by mike

Finite difference with complex analysis trick
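The trick referenced here is the complex-step derivative: for a real-analytic f, f(x + ih) ≈ f(x) + ih·f'(x), so Im f(x + ih) / h recovers f'(x). A sketch (the function name is mine):

```python
def complex_step_derivative(f, x, h=1e-20):
    """Estimate f'(x) from the imaginary part of f(x + i*h).

    Unlike the finite difference (f(x + h) - f(x)) / h, there is no
    subtraction of nearly equal numbers, so h can be made tiny and the
    result is accurate to machine precision.
    """
    return f(complex(x, h)).imag / h
```

The only requirement is that f accepts complex arguments, e.g. functions built from the cmath module.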

calculus
math
python

Math Has No God Particle | FiveThirtyEight

july 2017 by mike

David Vogan, who’s a mathematician at MIT and was involved with the atlas project, described academic mathematics as a garden. There are showy, flowery fields like number theory. Its beautiful problems and elegant results, such as the prime gap or Fermat’s last theorem, are math’s orchids. There are also the tomatoes — the things you can eat out of the garden, the practical yield. These disciplines, like Fourier analysis with its concrete applications to signal processing of audio, radio and light waves, are businesslike. And then there are the disciplines, often unheralded, that keep the rest of the garden growing — the hoes, the sprinklers. Lie groups, their representations and the atlas project are an example.

“Representation theory,” the field of the atlas group’s research, “is the fertilizer or the rose trellis, depending on the day of the week,” Vogan said.

math
research
journalism
scientism
ScienceWriting

algorithms - Prove Upper Bound (Big O) for Fibonacci's Sequence? - Mathematics Stack Exchange

december 2016 by mike

In general, given any linear recurrence relation, the same trick works: guess that the general nth term is of the form c·x^n, write down an equation for x (which will be a polynomial), and if α is the largest root of that polynomial equation, then you'll have T(n) = O(α^n).
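For Fibonacci itself the recipe gives: substitute T(n) = c·x^n into T(n) = T(n-1) + T(n-2), divide through by x^(n-2), and you get x² = x + 1, whose largest root is the golden ratio φ = (1 + √5)/2 ≈ 1.618, hence T(n) = O(φ^n). Easy to check numerically:

```python
def fib(n):
    """Iterative Fibonacci: F(0) = 0, F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2        # largest root of x**2 = x + 1
ratio = fib(30) / fib(29)       # consecutive ratios approach phi
```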

math
cs
algorithms
