**probability** (10223)

The Coin Flip: A Fundamentally Unfair Proposition? (2009) | Hacker News

yesterday by np

See: https://econ.ucsb.edu/~doug/240a/Coin%20Flip.htm

"A good way of thinking about this is by looking at the ratio of odd numbers to even numbers when you start counting from 1.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

No matter how long you count, you'll find that at any given point, one of two things will be true:

You've touched more odd numbers than even numbers

You've touched an equal number of odd numbers and even numbers

What will never happen is this:

You've touched more even numbers than odd numbers.

Similarly, consider a coin, launched in the "heads" position, flipping heads over tails through the ether:

H T H T H T H T H T H T H T H T H T H T H T H T H

At any given point in time, either the coin will have spent equal time in the Heads and Tails states, or it will have spent more time in the Heads state. In the aggregate, it's slightly more likely that the coin shows Heads at a given point in time—including whatever time the coin is caught. And vice-versa if you start the coin-flip from the Tails position."
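The argument above can be checked with a short simulation. This is a toy model, not real coin physics: assume the coin alternates faces once per half-turn and is caught after a uniformly random number of half-turns (the uniform assumption and the cap of 20 half-turns are mine, for illustration):

```python
import random

def fraction_caught_heads(max_half_turns=20, trials=100_000, seed=1):
    """Toy model: a coin launched heads-up alternates H, T, H, T, ... once
    per half-turn and is caught after a uniformly random number of half-turns
    in [0, max_half_turns]. Even counts (including 0) leave the launch face
    showing, and even counts always tie or outnumber odd counts, so heads
    shows slightly more than half the time."""
    rng = random.Random(seed)
    heads = 0
    for _ in range(trials):
        if rng.randint(0, max_half_turns) % 2 == 0:
            heads += 1
    return heads / trials
```

With `max_half_turns=20` there are 11 even counts among 21 possibilities, so the expected fraction is 11/21 ≈ 0.524 rather than 0.5.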

...

"John von Neumann figured out a solution for getting fair results from a biased coin:

1. Toss the coin twice.
2. If the results match, start over, forgetting both results.
3. If the results differ, use the first result, forgetting the second.

This has appeared on HN before, but no one's pointed it out so far in the discussion. More info: https://en.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_biased_coin

...

If anybody's curious about the math, it's pretty simple.

Assume p is the probability of flipping heads. Then p·p is the probability of flipping two heads, (1-p)(1-p) is the probability of flipping two tails, (1-p)p is the probability of flipping tails then heads, and p(1-p) is the probability of flipping heads then tails.

So, basically p^2 + (1-p)^2 + p(1-p) + (1-p)p = 1.

Then we ignore the first two terms (since those are the matching results, where we start over), and we're left with only the p(1-p) case and the (1-p)p case. These are equally likely, and the first flip is heads in the first case and tails in the second."
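The procedure and the algebra translate directly into code. A minimal sketch (the bias p=0.6 and the helper names are my own choices for illustration):

```python
import random

def biased_flip(rng, p):
    """A coin that lands heads ('H') with probability p."""
    return 'H' if rng.random() < p else 'T'

def von_neumann_fair_flip(rng, p):
    """Toss the biased coin twice; if the results match, start over.
    Otherwise return the first result. Since P(HT) = p(1-p) = (1-p)p = P(TH),
    'H' and 'T' are returned with equal probability for any 0 < p < 1."""
    while True:
        first, second = biased_flip(rng, p), biased_flip(rng, p)
        if first != second:
            return first

rng = random.Random(0)
flips = [von_neumann_fair_flip(rng, p=0.6) for _ in range(100_000)]
print(flips.count('H') / len(flips))  # close to 0.5 despite the 60% bias
```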

math
probability
interesting
"A good way of thinking about this is by looking at the ratio of odd numbers to even numbers when you start counting from 1.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17

No matter how long you count, you'll find that at any given point, one of two things will be true:

You've touched more odd numbers than even numbers

You've touched an equal amount of odd numbers and even numbers

What will never happen, is this:

You've touched more even numbers than odd numbers.

Similarly, consider a coin, launched in the "heads" position, flipping heads over tails through the ether:

H T H T H T H T H T H T H T H T H T H T H T H T H

At any given point in time, either the coin will have spent equal time in the Heads and Tails states, or it will have spent more time in the Heads state. In the aggregate, it's slightly more likely that the coin shows Heads at a given point in time—including whatever time the coin is caught. And vice-versa if you start the coin-flip from the Tails position."

...

"John von Neumann figured out a solution for getting fair results from a biased coin:

1. Toss the coin twice. 2. If the results match, start over, forgetting both results. 3. If the results differ, use the first result, forgetting the second.

This has appeared on HN before, but no one's pointed it out so far in the discussion. More info: https://en.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_biased_coin

...

If anybody's curious about the math, it's pretty simple.

Assume p is the probability of flipping heads. Then, pp is the probability of flipping 2 heads, (1-p)(1-p) is the probability of flipping 2 tails, (1-p)p is the probability of flipping tails then heads, and p(1-p) is the probability of flipping heads then tails.

So, basically p^2 + (1-p)^2 + p(1-p) + (1-p)p = 1.

Then, we ignore the first 2 terms (since that's when the results match and we start over), and we're only left with the p(1-p) case and the (1-p)p case. These are equally likely, and the first coin is heads in the first case and tails in the second."

yesterday by np

The Coin Flip: A Fundamentally Unfair Proposition?

2 days ago by zethraeus

When it's a true 50-50 toss, there is no strategy. But if we take it as granted, or at least possible, that a coin flip does indeed exhibit a 1% or more bias, then the following rules of thumb might apply.

probability
coin
toss

The Coin Flip: A Fundamentally Unfair Proposition?

2 days ago by MarcK

"In the 31-page Dynamical Bias in the Coin Toss, Persi Diaconis, Susan Holmes, and Richard Montgomery lay out the theory and practice of coin-flipping to a degree that's just, well, downright intimidating.

Suffice to say their approach involved a lot of physics, a lot of math, motion-capture cameras, random experimentation, and an automated "coin-flipper" device capable of flipping a coin and producing Heads 100% of the time.

Here are the broad strokes of their research:

If the coin is tossed and caught, it has about a 51% chance of landing on the same face it was launched. (If it starts out as heads, there's a 51% chance it will end as heads).

If the coin is spun, rather than tossed, it can have a much-larger-than-50% chance of ending with the heavier side down. Spun coins can exhibit "huge bias" (some spun coins will fall tails-up 80% of the time).

If the coin is tossed and allowed to clatter to the floor, this probably adds randomness.

If the coin is tossed and allowed to clatter to the floor where it spins, as will sometimes happen, the above spinning bias probably comes into play.

A coin will land on its edge around 1 in 6000 throws, creating a flipistic singularity.

The same initial coin-flipping conditions produce the same coin flip result. That is, there's a certain amount of determinism to the coin flip.

A more robust coin toss (more revolutions) decreases the bias."

coin-flips
probability
persi.diaconis
cheating

Gaussian Distributions are Soap Bubbles

7 days ago by Hwinkler

This post is just a quick note on some of the pitfalls we encounter when dealing with high-dimensional problems, even when working with something as simple as a Gaussian distribution.
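The "soap bubble" of the title refers to the fact that samples from a high-dimensional standard Gaussian concentrate in a thin shell of radius about √d, far from the density's peak at the origin. A minimal check (the sample sizes are arbitrary):

```python
import math
import random

def mean_norm(dim, samples=2000, seed=3):
    """Average Euclidean norm of standard-Gaussian samples in `dim` dimensions.
    In high dimensions this is close to sqrt(dim): almost all probability mass
    sits in a thin spherical shell, not near the mode at the origin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(dim)))
    return total / samples

print(mean_norm(1))    # roughly 0.8: mass near the origin
print(mean_norm(100))  # roughly 10, i.e. sqrt(100): the "soap bubble"
```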

bayesian
probability
gaussian
normal

University of Toronto CSC 2547: Learning Discrete Latent Structure (Spring 2018)

7 days ago by doneata

New inference methods allow us to train generative latent-variable models. These models can generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even let us do analogical reasoning automatically. However, most generative models such as GANs and variational autoencoders currently have pre-specified model structure, and represent data using fixed-dimensional continuous vectors. This seminar course will develop extensions to these approaches to learn model structure, and represent data using mixed discrete and continuous data structures such as lists of vectors, graphs, or even programs. The class will have a major project component, and will be run in a similar manner to Differentiable Inference and Generative Models.

deep-learning
probability
course

University of Toronto CSC 2541: Differentiable Inference and Generative Models (Fall 2016)

7 days ago by doneata

In the last few years, new inference methods have allowed big advances in probabilistic generative models. These models let us generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even let us do analogical reasoning automatically. This course will tour recent innovations in inference methods such as recognition networks, black-box stochastic variational inference, and adversarial autoencoders. It will also cover recent advances in generative model design, such as deconvolutional image models, thought vectors, and recurrent variational autoencoders. The class will have a major project component.

course
neural-networks
deep-learning
probability
