statistics:bayesian   45

Inferential Statistics is not Inferential – sci five | University of Basel – Medium
"The earth is flat (p > 0.05).

I confess. Throughout my scientific life, I have used a method that I knew or felt was deeply flawed. What’s more, I admit to having taught — and I do still teach — this method to my students. I have a number of questionable excuses for that. For example, because the method has shaped a big part of science in the last century, I think students ought to know about it.

But I have increasingly come to believe that science was and is largely a story of success in spite of, and not because of, the use of this method. The method is called inferential statistics. Or more precisely, hypothesis testing.
The method I consider flawed and deleterious involves taking sample data, then applying some mathematical procedure, and taking the result of that procedure as showing whether or not a hypothesis about a larger population is correct."
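A minimal toy illustration in R (mine, not from the article) of the procedure being criticised, and of why "p > 0.05" is weak evidence for a hypothesis about the population:

```r
# Toy example (not from the article): even when a real 0.5 SD effect exists in
# the population, samples of 10 reach p < 0.05 well under half the time, so a
# non-significant result says little about whether the null hypothesis is true.
set.seed(42)
p_values <- replicate(1000, t.test(rnorm(10, mean = 0.5), mu = 0)$p.value)
mean(p_values > 0.05)   # proportion of non-significant results despite the real effect
```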
statistics:bayesian  statistics:frequentist  confidence_intervals  to_teach  regression  statistics  philosophy_of_science  philosophy_of_statistics  statistics:error_based 
3 days ago by hallucigenia
Bayesian Basics
nice bookdown; let's see how it reads
An introduction to Bayesian data analysis.
statistics:bayesian  nice-thinking 
march 2018 by mozzarella
mjskay/tidybayes: Bayesian analysis + tidy data + geoms (R package)
tidybayes is an R package that aims to make it easy to integrate popular Bayesian modelling methods into a tidy data + ggplot workflow.

Tidy data frames (one observation per row) are particularly convenient for use in a variety of R data manipulation and visualization packages. However, when using MCMC / Bayesian samplers like JAGS or Stan in R, we often have to translate this data into a form the sampler understands, and then, after running the model, translate the resulting samples back into a tidy format for summarizing and plotting.
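A minimal sketch of the workflow the package aims at, assuming brms as the sampler backend (the model and data here are illustrative, not taken from the package documentation):

```r
library(dplyr)
library(ggplot2)
library(brms)        # any sampler with a tidybayes-supported backend works
library(tidybayes)   # compose_data() covers the data-in direction for raw Stan/JAGS

# Illustrative varying-intercept model; chains/iterations kept small for speed
fit <- brm(mpg ~ 1 + (1 | cyl), data = mtcars, chains = 2, iter = 1000)

fit %>%
  spread_draws(r_cyl[cyl, term]) %>%   # tidy draws: one row per draw per group
  median_qi(r_cyl) %>%                 # posterior medians and intervals
  ggplot(aes(x = r_cyl, y = factor(cyl), xmin = .lower, xmax = .upper)) +
  geom_pointrange()
```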
statistics:bayesian  tidyverse  R  r_packages  visualization  to_try 
february 2018 by hallucigenia
How often does the best team win? A unified approach to understanding randomness in North American sport
In this manuscript, we develop Bayesian state-space models using betting market data that can be uniformly applied across sporting organizations to better understand the role of randomness in game outcomes. These models can be used to extract estimates of team strength, the between-season, within-season, and game-to-game variability of team strengths, as well as each team’s home advantage. We implement our approach across a decade of play in each of the National Football League (NFL), National Hockey League (NHL), National Basketball Association (NBA), and Major League Baseball (MLB), finding that the NBA demonstrates both the largest dispersion in talent and the largest home advantage, while the NHL and MLB stand out for their relative randomness in game outcomes.
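A toy simulation (mine, much simpler than the paper's models) of the state-space idea: each team's latent strength drifts from week to week, and the home team's win probability depends on the strength gap plus a home-advantage term. The paper works in the other direction, inferring the strengths, their variability, and the home advantage from observed outcomes and betting market data.

```r
# Toy state-space simulation of team strength (illustrative values throughout)
set.seed(1)
n_teams  <- 4
n_weeks  <- 20
home_adv <- 0.3                                  # assumed home advantage, log-odds scale

theta <- matrix(0, n_weeks, n_teams)             # latent team strengths
theta[1, ] <- rnorm(n_teams, 0, 1)               # between-team dispersion in talent
for (w in 2:n_weeks) {
  theta[w, ] <- theta[w - 1, ] + rnorm(n_teams, 0, 0.1)   # game-to-game drift
}

matchups <- replicate(n_weeks, sample(n_teams, 2))        # one game per week
games <- data.frame(week = 1:n_weeks, home = matchups[1, ], away = matchups[2, ])
games$p_home_win <- plogis(home_adv +
                           theta[cbind(games$week, games$home)] -
                           theta[cbind(games$week, games$away)])
games$home_win <- rbinom(n_weeks, 1, games$p_home_win)
head(games)
```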
nice-thinking  statistics  statistics:bayesian  basketball-reference 
january 2018 by mozzarella
VAST: Spatio-temporal analysis of univariate or multivariate data, e.g., standardizing data for multiple species or stage
VAST

Is an R package for implementing a spatial delta-generalized linear mixed model (delta-GLMM) for multiple categories (species, size, or age classes) when standardizing survey or fishery-dependent data.
Builds upon a previous R package, SpatialDeltaGLMM (publicly available here), and has unit testing to automatically confirm that VAST and SpatialDeltaGLMM give identical results (to the 3rd decimal place for parameter estimates) for several varied real-world case-study examples
Has built-in diagnostic functions and model-comparison tools
Is intended to improve analysis speed, replicability, peer review, and interpretation of index-standardization methods
Background

This tool is designed to estimate spatial variation in density using spatially referenced data, with the goal of identifying habitat associations (correlations among species and with habitat) and estimating total abundance for a target species in one or more years.
The model builds upon spatio-temporal delta-generalized linear mixed modelling techniques (Thorson Shelton Ward Skaug 2015 ICESJMS), which separately model the proportion of tows that catch at least one individual ("encounter probability") and catch rates for tows with at least one individual ("positive catch rates"); a generic sketch of this two-part idea follows below.
Submodels for encounter probability and positive catch rates by default incorporate variation in density among years (as a fixed effect), and can incorporate variation among sampling vessels (as a random effect; Thorson and Ward 2014) which may be correlated among categories (Thorson Fonner Haltuch Ono Winker In press).
Spatial and spatio-temporal variation are approximated as Gaussian Markov random fields (Thorson Skaug Kristensen Shelton Ward Harms Banante 2014 Ecology), which imply that correlations in spatial variation decay as a function of distance.
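A generic two-part (delta) sketch of the idea described above, not VAST's own interface: encounter probability gets a binomial GLM, positive catch rates get a Gamma GLM on the non-zero tows, and expected density is their product. Data and covariates here are simulated.

```r
# Generic delta-GLM sketch (simulated data; not the VAST API)
set.seed(1)
tows <- data.frame(depth = runif(200, 50, 500))
p_enc <- plogis(2 - 0.01 * tows$depth)                 # encounter prob. declines with depth
tows$encounter <- rbinom(200, 1, p_enc)
tows$catch <- ifelse(tows$encounter == 1,
                     rgamma(200, shape = 2, rate = 0.5), 0)

m_enc <- glm(encounter ~ depth, family = binomial, data = tows)      # encounter probability
m_pos <- glm(catch ~ depth, family = Gamma(link = "log"),            # positive catch rates
             data = subset(tows, catch > 0))

# Expected density = P(encounter) * E(catch | encounter > 0)
newdata <- data.frame(depth = c(100, 300))
predict(m_enc, newdata, type = "response") * predict(m_pos, newdata, type = "response")
```

VAST extends this two-part structure by modelling the spatial and spatio-temporal variation as Gaussian Markov random fields rather than simple fixed covariates.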
statistics:gams  statistics:time_series  statistics:fisheries  fisheries  fisheries:methods  statistics:bayesian  statistics:spatial  R_packages 
september 2017 by hallucigenia
bayes: a kinda-sorta masterpost
"I have written many many words about “Bayesianism” in this space over the years, but the closest thing to a comprehensive “my position on Bayes” post to date is this one from three years ago, which I wrote when I was much newer to this stuff. People sometimes link that post or ask me about it, which almost never happens with my other Bayes posts. So I figure I should write a more up-to-date “position post.”

I will try to make this at least kind of comprehensive, but I will omit many details and sometimes state conclusions without the corresponding arguments. Feel free to ask me if you want to hear more about something."
statistics:bayesian  philosophy_of_science  philosophy_of_statistics 
august 2017 by hallucigenia
xcelab.net
Statistical Rethinking is an introduction to applied Bayesian data analysis, aimed at PhD students and researchers in the natural and social sciences. This audience has had some calculus and linear algebra, and one or two joyless undergraduate courses in statistics. I've been teaching applied statistics to this audience for about a decade now, and this book has evolved from that experience.

The book teaches generalized linear multilevel models (GLMMs) from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. The book covers the basics of regression through multilevel models, as well as touching on measurement error, missing data, and Gaussian process models for spatial and network autocorrelation.

This is not a traditional mathematical statistics book. Instead the approach is computational, using complete R code examples, aimed at developing skilled and skeptical scientists. Theory is explained through simulation exercises, using R code. And modeling examples are fully worked, with R code displayed within the main text. Mathematical depth is given in optional "overthinking" boxes throughout.
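A small sketch of the kind of computational exercise the book leans on (my own minimal version, with illustrative counts): grid approximation of the posterior for a binomial proportion, followed by sampling from that posterior.

```r
# Grid approximation of a posterior for a binomial proportion (illustrative counts)
p_grid     <- seq(0, 1, length.out = 1000)          # candidate parameter values
prior      <- rep(1, 1000)                          # flat prior
likelihood <- dbinom(6, size = 9, prob = p_grid)    # e.g. 6 successes in 9 trials
posterior  <- likelihood * prior
posterior  <- posterior / sum(posterior)            # normalise so it sums to 1

# Sample from the posterior and summarise it
samples <- sample(p_grid, size = 1e4, replace = TRUE, prob = posterior)
quantile(samples, c(0.05, 0.95))                    # a 90% credible interval
```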
statistics  statistics:bayesian  book  R 
june 2017 by sechilds
How Bayesian inference works
notes the distinction between conditional, joint, and marginal probabilities
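A small worked example (mine, not from the post) using a made-up 2x2 joint distribution: the marginals are the row and column sums, and a conditional is a joint probability divided by the relevant marginal.

```r
# Made-up joint distribution of two binary events
joint <- matrix(c(0.30, 0.10,       # P(rain & umbrella),    P(rain & no umbrella)
                  0.05, 0.55),      # P(no rain & umbrella), P(no rain & no umbrella)
                nrow = 2, byrow = TRUE,
                dimnames = list(rain = c("yes", "no"),
                                umbrella = c("yes", "no")))

rowSums(joint)                                 # marginal P(rain)
colSums(joint)                                 # marginal P(umbrella)
joint["yes", "yes"] / colSums(joint)["yes"]    # conditional P(rain | umbrella = yes)
```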
statistics  statistics:bayesian 
may 2017 by mozzarella

related tags

@followup  basketball-reference  book  books  confidence_intervals  datasci-v5  econometrics  fisheries  fisheries:methods  frequentist_statistics  gaussian_processes  gelman  graph_theory  library  math:dynamical_systems  model_testing  nice-thinking  non_parametrics  ordination  paper  philosophy  philosophy_of_science  philosophy_of_statistics  programming  programs_to_use  python  r  r_packages  regression  scipy  shrinkage_and_penalization  smoothing_and_penalization  statistical_clustering  statistical_methods  statistical_software  statistics  statistics:additive_models  statistics:distributions  statistics:error_based  statistics:fisheries  statistics:frequentist  statistics:gams  statistics:multivariate  statistics:networks  statistics:regression  statistics:spatial  statistics:time_series  tidyverse  to_read  to_teach  to_try  visualization 
