nhaliday + pac   10

Applications of computational learning theory in the cognitive sciences - Psychology & Neuroscience Stack Exchange
1. Gold's theorem on the unlearnability in the limit of certain sets of languages, among them context-free ones.

2. Ronald de Wolf's master's thesis on the impossibility of PAC-learning context-free languages.

The first made quite a stir in the poverty-of-the-stimulus debate, while the second has gone unnoticed by cognitive science.
q-n-a  stackex  psychology  cog-psych  learning  learning-theory  machine-learning  PAC  lower-bounds  no-go  language  linguistics  models  fall-2015 
february 2019 by nhaliday
MinHash - Wikipedia
- goal: compute Jaccard coefficient J(A, B) = |A∩B| / |A∪B| in sublinear space
- idea: pick random injective hash function h, define h_min(S) = argmin_{x in S} h(x), and note that Pr[h_min(A) = h_min(B)] = J(A, B)
- reduce variance w/ Chernoff bound
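The estimator above can be sketched in a few lines of Python (a toy illustration: Python's built-in `hash` seeded by a random float stands in for a random min-wise independent hash family, and the sets, seeds, and counts are made-up example values):

```python
import random

def jaccard(a, b):
    """Exact Jaccard coefficient |A∩B| / |A∪B|, for comparison."""
    return len(a & b) / len(a | b)

def minhash_signature(s, seeds):
    # For each seed (= one hash function), record argmin_{x in s} h(x).
    return [min(s, key=lambda x: hash((seed, x))) for seed in seeds]

random.seed(0)
A = set(range(0, 60))
B = set(range(30, 90))           # true J(A, B) = 30/90 = 1/3

seeds = [random.random() for _ in range(1000)]
sig_a = minhash_signature(A, seeds)
sig_b = minhash_signature(B, seeds)

# Fraction of hash functions where the argmins agree estimates J(A, B);
# averaging over many independent hashes is what drives the variance down.
est = sum(x == y for x, y in zip(sig_a, sig_b)) / len(seeds)
```

Only the fixed-length signatures need to be stored per set, which is where the sublinear-space saving comes from.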
algorithms  data-structures  sublinear  hashing  wiki  reference  random  tcs  nibble  measure  metric-space  metrics  similarity  PAC  intersection  intersection-connectedness 
february 2017 by nhaliday
Count–min sketch - Wikipedia
- estimates frequency vector (f_i)
- idea:
d = O(log 1/δ) hash functions h_j: [n] -> [w] (w = O(1/ε))
d*w counters a[r, c]
for each event i, increment counters a[1, h_1(i)], a[2, h_2(i)], ..., a[d, h_d(i)]
estimate for f_i is min_j a[j, h_j(i)]
- never underestimates, but is upward-biased (hash collisions can only add to counters)
- pf: Markov to get constant probability of success, then exponential decrease with repetition
lecture notes: http://theory.stanford.edu/~tim/s15/l/l2.pdf
- note this can work w/ negative updates. just use median instead of min. pf still uses markov on the absolute value of error.
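The update/query scheme above, as a minimal sketch in Python (the `blake2b`-based row hashing and the specific width/depth constants are illustrative choices, not prescribed by the source):

```python
import hashlib
import math

class CountMinSketch:
    def __init__(self, eps, delta):
        self.w = math.ceil(math.e / eps)         # width w = O(1/ε)
        self.d = math.ceil(math.log(1 / delta))  # depth d = O(log 1/δ)
        self.table = [[0] * self.w for _ in range(self.d)]

    def _hash(self, j, x):
        # Row-j hash h_j: item -> [w]; keying on j gives d "independent" rows.
        digest = hashlib.blake2b(f"{j}:{x}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.w

    def add(self, x, count=1):
        # Increment one counter per row: a[j, h_j(x)] += count.
        for j in range(self.d):
            self.table[j][self._hash(j, x)] += count

    def query(self, x):
        # Estimate f_x = min_j a[j, h_j(x)]; never below the true count.
        return min(self.table[j][self._hash(j, x)] for j in range(self.d))
```

Each row's estimate overshoots by at most ε·(total count) with constant probability (Markov), and taking the min over d independent rows drives the failure probability down to δ.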
algorithms  data-structures  sublinear  hashing  wiki  reference  bias-variance  approximation  random  tcs  multi  stanford  lecture-notes  pdf  tim-roughgarden  nibble  pigeonhole-markov  PAC 
february 2017 by nhaliday
