probability   10223


The Coin Flip: A Fundamentally Unfair Proposition? (2009) | Hacker News

"A good way of thinking about this is by looking at the ratio of odd numbers to even numbers when you start counting from 1.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
No matter how long you count, you'll find that at any given point, one of two things will be true:

You've touched more odd numbers than even numbers
You've touched an equal number of odd numbers and even numbers
What will never happen is this:

You've touched more even numbers than odd numbers.
Similarly, consider a coin, launched in the "heads" position, flipping heads over tails through the ether:

At any given point in time, either the coin will have spent equal time in the Heads and Tails states, or it will have spent more time in the Heads state. In the aggregate, it's slightly more likely that the coin shows Heads at a given point in time—including whatever time the coin is caught. And vice-versa if you start the coin-flip from the Tails position."
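The counting claim above is easy to check numerically. A minimal sketch (the variable names are illustrative, not from the comment): walk through 1, 2, 3, … and confirm that the running count of odd numbers never falls behind the running count of even numbers.

```python
# Verify: counting 1, 2, 3, ..., the running total of odd numbers
# is always >= the running total of even numbers.
odd = even = 0
for n in range(1, 1001):
    if n % 2:
        odd += 1
    else:
        even += 1
    assert odd >= even  # holds at every step of the count
print(odd, even)  # 500 500
```

The asymmetry exists because every count starts on an odd number; the coin argument transfers it to time spent in the launch face's state.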


"John von Neumann figured out a solution for getting fair results from a biased coin:
1. Toss the coin twice.
2. If the results match, start over, forgetting both results.
3. If the results differ, use the first result, forgetting the second.
This has appeared on HN before, but no one's pointed it out so far in the discussion. More info:

If anybody's curious about the math, it's pretty simple.
Assume p is the probability of flipping heads. Then p·p is the probability of flipping two heads, (1-p)(1-p) the probability of flipping two tails, (1-p)p the probability of flipping tails then heads, and p(1-p) the probability of flipping heads then tails.
So, basically p^2 + (1-p)^2 + p(1-p) + (1-p)p = 1.
Then, we ignore the first 2 terms (since that's when the results match and we start over), and we're only left with the p(1-p) case and the (1-p)p case. These are equally likely, and the first coin is heads in the first case and tails in the second."
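Von Neumann's procedure is short enough to simulate directly. A sketch under assumed names (`biased_flip`, `von_neumann_fair_flip` are illustrative, not from the comment): even with a coin that lands heads 80% of the time, the extracted bits come out close to 50-50.

```python
import random

def biased_flip(p, rng):
    """One flip of a coin that lands heads with probability p."""
    return "H" if rng.random() < p else "T"

def von_neumann_fair_flip(p, rng):
    """Von Neumann's trick: flip twice; if the results differ,
    keep the first; if they match, discard both and retry."""
    while True:
        a, b = biased_flip(p, rng), biased_flip(p, rng)
        if a != b:
            return a  # P(HT) = p(1-p) = P(TH), so this is fair

rng = random.Random(0)
n = 100_000
heads = sum(von_neumann_fair_flip(0.8, rng) == "H" for _ in range(n))
print(heads / n)  # close to 0.5 despite the 80% bias
```

Note the procedure assumes flips are independent with a fixed bias p; the expected number of raw flips per fair bit is 1/(p(1-p)), which grows as the coin gets more lopsided.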
math  probability  interesting 
yesterday by np
The Coin Flip: A Fundamentally Unfair Proposition?
When it's a true 50-50 toss, there is no strategy. But if we take for granted, or at least as possible, that a coin flip does indeed exhibit a 1% or greater bias, then the following rules of thumb might apply.
probability  coin  toss 
2 days ago by zethraeus
The Coin Flip: A Fundamentally Unfair Proposition?
"In the 31-page Dynamical Bias in the Coin Toss, Persi Diaconis, Susan Holmes, and Richard Montgomery lay out the theory and practice of coin-flipping to a degree that's just, well, downright intimidating.

Suffice to say their approach involved a lot of physics, a lot of math, motion-capture cameras, random experimentation, and an automated "coin-flipper" device capable of flipping a coin and producing Heads 100% of the time.

Here are the broad strokes of their research:

If the coin is tossed and caught, it has about a 51% chance of landing on the same face it was launched from. (If it starts out as heads, there's a 51% chance it will end as heads.)
If the coin is spun, rather than tossed, it can have a much-larger-than-50% chance of ending with the heavier side down. Spun coins can exhibit "huge bias" (some spun coins will fall tails-up 80% of the time).
If the coin is tossed and allowed to clatter to the floor, this probably adds randomness.
If the coin is tossed and allowed to clatter to the floor where it spins, as will sometimes happen, the above spinning bias probably comes into play.
A coin will land on its edge around 1 in 6000 throws, creating a flipistic singularity.
The same initial coin-flipping conditions produce the same coin flip result. That is, there's a certain amount of determinism to the coin flip.
A more robust coin toss (more revolutions) decreases the bias."
coin-flips  probability  persi.diaconis  cheating  *** 
2 days ago by MarcK
Gaussian Distributions are Soap Bubbles
This post is just a quick note on some of the pitfalls we encounter when dealing with high-dimensional problems, even when working with something as simple as a Gaussian distribution.
bayesian  probability  gaussian  normal 
7 days ago by Hwinkler
University of Toronto CSC 2547: Learning Discrete Latent Structure (Spring 2018)
New inference methods allow us to train generative latent-variable models. These models can generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even let us do analogical reasoning automatically. However, most generative models such as GANs and variational autoencoders currently have pre-specified model structure, and represent data using fixed-dimensional continuous vectors. This seminar course will develop extensions to these approaches to learn model structure, and represent data using mixed discrete and continuous data structures such as lists of vectors, graphs, or even programs. The class will have a major project component, and will be run in a similar manner to Differentiable Inference and Generative Models.
deep-learning  probability  course 
7 days ago by doneata
University of Toronto CSC 2541: Differentiable Inference and Generative Models (Fall 2016)
In the last few years, new inference methods have allowed big advances in probabilistic generative models. These models let us generate novel images and text, find meaningful latent representations of data, take advantage of large unlabeled datasets, and even let us do analogical reasoning automatically. This course will tour recent innovations in inference methods such as recognition networks, black-box stochastic variational inference, and adversarial autoencoders. It will also cover recent advances in generative model design, such as deconvolutional image models, thought vectors, and recurrent variational autoencoders. The class will have a major project component.
course  neural-networks  deep-learning  probability 
7 days ago by doneata
related tags

***  abdsc  accuracy  acm  anecdata  approximation  arrows  article  articles  atoms  awesome_articles  basics  basketball  bayes  bayesian  behavior  berlin  blogpost  book  books  brexit  britain  calculation  calculator  caltech  characterization  cheating  cheatsheet  cleaning  cloudera  coin-flips  coin  comparison  complexity  composition-decomposition  concentration-of-measure  concept  confluence  cool  correlation  counterexample  course  covariance  cryptography  cs  data-analysis  data-science  data  deep-learning  deeplearning  development  differential  direction  disaster  distribution  dynamical-systems  dynamics  economics  election  elections  engineering  ergodic  eric-kaufmann  esoteric  ethics  example  expectancy  extrema  finance  foundations  free  fun  functional  future  games  gaussian  gotchas  graph_limit  hannahfry  helloworld  history  human  hypothesis-testing  identity  iidness  inference  init  intelligence  interactive  interdisciplinary  interesting  intricacy  journalism  learning  lecture-notes  lectures  lifts-projections  limits  linearity  list  machine-learning  machine  machinelearning  magnitude  markov  markovchains  martingales  math  mathematics  maths  media  metabuch  methodology  micah  mit  models  moments  multiplicative  narrative  nate  network-structure  networks  neural-networks  nibble  nitty-gritty  nonlinearity  nonparametric  normal  normalization  numericalanalysis  objektbuch  opensource  optimisation  orders  orfe  overflow  papers  parametric  pdf  pedagogy  performance  persi.diaconis  personalities  phonology  phonotactics  physics  plots  policy  polisci  politics  posts  power-law  preprint  privacy  probabilisticprogramming  prog  programming  project  proofs  psychology  puzzles  python  q-n-a  qra  quant  quora  random-function  random-variables  read-later  recommendations  reference  religion  research  review  risk  science  silver  slides  social-science  social  sociology  sports  statistics 
 stats  stochastic-processes  structure  study  symmetry  teaching  techtariat  tensorflow  tidbits  time-complexity  tips-and-tricks  to-read  top-n  toread  toss  trivia  truly_awesome_articles  tutorial  uber  uncertainty  unit  videos  visualization  wiki  wikipedia  women  work  yoga  youtube 