The table 2 fallacy: presenting and interpreting confounder and modifier coefficients. - PubMed - NCBI
"It is common to present multiple adjusted effect estimates from a single model in a single table. For example, a table might show odds ratios for one or more exposures and also for several confounders from a single logistic regression. This can lead to mistaken interpretations of these estimates. We use causal diagrams to display the sources of the problems. Presentation of exposure and confounder effect estimates from a single model may lead to several interpretative difficulties, inviting confusion of direct-effect estimates with total-effect estimates for covariates in the model. These effect estimates may also be confounded even though the effect estimate for the main exposure is not confounded. Interpretation of these effect estimates is further complicated by heterogeneity (variation, modification) of the exposure effect measure across covariate levels. We offer suggestions to limit potential misunderstandings when multiple effect estimates are presented, including precise distinction between total and direct effect measures from a single model, and use of multiple models tailored to yield total-effect estimates for covariates."
to:NB  causal_inference  regression  statistics  greenland.sander  to_teach:linear_models  to_teach:undergrad-ADA  re:ADAfaEPoV 
Among the Post-Liberals | Dissent Magazine
"We get innumerable denunciations of liberalism’s denial of truth and the good, but fewer suggestions of what their regime would actually do in power. Ban abortion, surely; ban homosexuality, maybe (although even here we can detect some newfound waffling); then what? If attempts to sketch a post-liberal program often bounce between rote communitarianism and empty theatrics, this may reflect the fact that the historical project that gave Louis IX’s France its coherence—what Jones calls “a sort of permanent crusade,” unapologetically wielding an entirely non-metaphorical sword against the heretic and the infidel—no longer seems attractive even to most self-styled reactionaries. No one really has the stomach to burn out the tongues of blasphemers anymore, even if some remain too ornery to admit it. Perhaps breaking free of liberalism is harder than it looks."
liberalism  running_dogs_of_reaction  attacks_on_liberalism 
2 days ago
The evolution of early symbolic behavior in Homo sapiens | PNAS
"How did human symbolic behavior evolve? Dating up to about 100,000 y ago, the engraved ochre and ostrich eggshell fragments from the South African Blombos Cave and Diepkloof Rock Shelter provide a unique window into presumed early symbolic traditions of Homo sapiens and how they evolved over a period of more than 30,000 y. Using the engravings as stimuli, we report five experiments which suggest that the engravings evolved adaptively, becoming better-suited for human perception and cognition. More specifically, they became more salient, memorable, reproducible, and expressive of style and human intent. However, they did not become more discriminable over time between or within the two archeological sites. Our observations provide support for an account of the Blombos and Diepkloof engravings as decorations and as socially transmitted cultural traditions. By contrast, there was no clear indication that they served as denotational symbolic signs. Our findings have broad implications for our understanding of early symbolic communication and cognition in H. sapiens."
to:NB  cultural_evolution  human_evolution  epidemiology_of_representations 
3 days ago
Local dimension reduction of summary statistics for likelihood-free inference | SpringerLink
"Approximate Bayesian computation (ABC) and other likelihood-free inference methods have gained popularity in the last decade, as they allow rigorous statistical inference for complex models without analytically tractable likelihood functions. A key component for accurate inference with ABC is the choice of summary statistics, which summarize the information in the data, but at the same time should be low-dimensional for efficiency. Several dimension reduction techniques have been introduced to automatically construct informative and low-dimensional summaries from a possibly large pool of candidate summaries. Projection-based methods, which are based on learning simple functional relationships from the summaries to parameters, are widely used and usually perform well, but might fail when the assumptions behind the transformation are not satisfied. We introduce a localization strategy for any projection-based dimension reduction method, in which the transformation is estimated in the neighborhood of the observed data instead of the whole space. Localization strategies have been suggested before, but the performance of the transformed summaries outside the local neighborhood has not been guaranteed. In our localization approach the transformation is validated and optimized over validation datasets, ensuring reliable performance. We demonstrate the improvement in the estimation accuracy for localized versions of linear regression and partial least squares, for three different models of varying complexity."
to:NB  approximate_bayesian_computation  indirect_inference  dimension_reduction  sufficiency  statistics 
3 days ago
High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking | SpringerLink
"Penalized likelihood approaches are widely used for high-dimensional regression. Although many methods have been proposed and the associated theory is now well developed, the relative efficacy of different approaches in finite-sample settings, as encountered in practice, remains incompletely understood. There is therefore a need for empirical investigations in this area that can offer practical insight and guidance to users. In this paper, we present a large-scale comparison of penalized regression methods. We distinguish between three related goals: prediction, variable selection and variable ranking. Our results span more than 2300 data-generating scenarios, including both synthetic and semisynthetic data (real covariates and simulated responses), allowing us to systematically consider the influence of various factors (sample size, dimensionality, sparsity, signal strength and multicollinearity). We consider several widely used approaches (Lasso, Adaptive Lasso, Elastic Net, Ridge Regression, SCAD, the Dantzig Selector and Stability Selection). We find considerable variation in performance between methods. Our results support a “no panacea” view, with no unambiguous winner across all scenarios or goals, even in this restricted setting where all data align well with the assumptions underlying the methods. The study allows us to make some recommendations as to which approaches may be most (or least) suitable given the goal and some data characteristics. Our empirical results complement existing theory and provide a resource to compare methods across a range of scenarios and metrics."
to:NB  regression  prediction  statistics  high-dimensional_statistics  lasso  to_teach:linear_models  re:TALR  variable_selection 
3 days ago
Stochastic Analysis of Minimal Automata Growth for Generalized Strings | SpringerLink
"Generalized strings describe various biological motifs that arise in molecular and computational biology. In this manuscript, we introduce an alternative but efficient algorithm to construct the minimal deterministic finite automaton (DFA) associated with any generalized string. We exploit this construction to characterize the typical growth of the minimal DFA (i.e., with the least number of states) associated with a random generalized string of increasing length. Even though the worst-case growth may be exponential, we characterize a point in the construction of the minimal DFA when it starts to grow linearly and conclude it has at most a polynomial number of states with asymptotically certain probability. We conjecture that this number is linear."
to:NB  automata_theory  re:AoS_project 
3 days ago
[2002.05193] A Hierarchy of Limitations in Machine Learning
""All models are wrong, but some are useful", wrote George E. P. Box (1979). Machine learning has focused on the usefulness of probability models for prediction in social systems, but is only now coming to grips with the ways in which these models are wrong---and the consequences of those shortcomings. This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society. Machine learning modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them, and consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning. The limitations go from commitments inherent in quantification itself, through to showing how unmodeled dependencies can lead to cross-validation being overly optimistic as a way of assessing model performance."
in_NB  to_read  prediction  data_mining  malik.momin_m.  kith_and_kin 
7 days ago
[2002.06870] Consistency of the PLFit estimator for power-law data
"We prove the consistency of the Power-Law Fit PLFit method proposed by Clauset et al.(2009) to estimate the power-law exponent in data coming from a distribution function with regularly-varying tail. In the complex systems community, PLFit has emerged as the method of choice to estimate the power-law exponent. Yet, its mathematical properties are still poorly understood.
"The difficulty in PLFit is that it is a minimum-distance estimator. It first chooses a threshold that minimizes the Kolmogorov-Smirnov distance between the data points larger than the threshold and the Pareto tail, and then applies the Hill estimator to this restricted data. Since the number of order statistics used is random, the general theory of consistency of power-law exponents from extreme value theory does not apply. Our proof consists in first showing that the Hill estimator is consistent for general intermediate sequences for the number of order statistics used, even when that number is random. Here, we call a sequence intermediate when it grows to infinity, while remaining much smaller than the sample size. The second, and most involved, step is to prove that the optimizer in PLFit is with high probability an intermediate sequence, unless the distribution has a Pareto tail above a certain value. For the latter special case, we give a separate proof."
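--- For reference, the core of the PLFit recipe fits in a few lines: scan candidate thresholds, fit the exponent by the continuous MLE (Hill estimator) on the tail above each, and keep the threshold minimizing the Kolmogorov-Smirnov distance. A minimal sketch (crude one-sided KS, continuous data; not the reference implementation):

```python
import math
import random

def plfit(data):
    # Sketch of the PLFit recipe: for each candidate threshold xmin,
    # fit alpha by the continuous MLE (Hill estimator) on the tail,
    # then keep the threshold minimizing the KS distance to the fit.
    xs = sorted(data)
    best = (float("inf"), None, None)  # (KS distance, xmin, alpha)
    for i, xmin in enumerate(xs[:-1]):
        tail = xs[i:]
        n = len(tail)
        alpha = 1.0 + n / sum(math.log(x / xmin) for x in tail)
        # crude KS distance: empirical tail CDF vs fitted Pareto CDF
        ks = max(abs((k + 1) / n - (1.0 - (x / xmin) ** (1.0 - alpha)))
                 for k, x in enumerate(tail))
        if ks < best[0]:
            best = (ks, xmin, alpha)
    return best[1], best[2]

random.seed(0)
# pure Pareto sample with density exponent 2.5 (inverse-CDF transform)
sample = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(500)]
xmin, alpha = plfit(sample)
print(round(xmin, 2), round(alpha, 2))
```

The random number of order statistics the paper worries about is exactly the size of the KS-minimizing tail chosen in the loop.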
heavy_tails  statistics  self-centered  we_told_you_so 
8 days ago
Pinboard on Twitter: "A fun thought experiment is to imagine if natural uranium had a different isotope ratio, so that atomic weapons did not require enrichment, and whichever Hapsburg prince controlled the Bohemian mines that were its only known source f
"A fun thought experiment is to imagine if natural uranium had a different isotope ratio, so that atomic weapons did not require enrichment, and whichever Hapsburg prince controlled the Bohemian mines that were its only known source for centuries had been able to nuke his rivals."

--- Who can we get to write this? Ada Palmer? Walter Jon Williams? Zombie Karel Capek?
10 days ago
High-dimensional Time Series Clustering via Cross-Predictability
"The key to time series clustering is how to characterize the similarity between any two time series. In this paper, we explore a new similarity metric called “cross-predictability”: the degree to which a future value in each time series is predicted by past values of the others. However, it is challenging to estimate such cross-predictability among time series in the high-dimensional regime, where the number of time series is much larger than the length of each time series. We address this challenge with a sparsity assumption: only time series in the same cluster have significant cross-predictability with each other. We demonstrate that this approach is computationally attractive, and provide a theoretical proof that the proposed algorithm will identify the correct clustering structure with high probability under certain conditions. To the best of our knowledge, this is the first practical high-dimensional time series clustering algorithm with a provable guarantee. We evaluate with experiments on both synthetic data and real-world data, and results indicate that our method can achieve more than 80% clustering accuracy on real-world data, which is 20% higher than the state-of-art baselines."

--- But, but, but... Schreiber (1997)! I repeat, (1997)!
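--- The metric itself is simple enough to sketch: regress x_t on y_{t-1} and take the R^2. A crude scalar toy version (the paper works with sparse vector autoregressions across many series, not this pairwise reduction):

```python
def cross_predictability(x, y):
    # R^2 of predicting x_t from y_{t-1} by simple linear regression:
    # a crude scalar stand-in for the paper's similarity metric.
    xs, ys = x[1:], y[:-1]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((b - my) ** 2 for b in ys)
    syy = sum((a - mx) ** 2 for a in xs)
    return (sxy * sxy) / (sxx * syy)  # squared correlation = R^2

y = [1, 0, 2, 0, 3, 0, 4, 0]
x = [9, 1, 0, 2, 0, 3, 0, 4]  # x lags y by one step (first entry arbitrary)
print(round(cross_predictability(x, y), 2))
```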
to:NB  time_series  clustering  statistics  via:vaguery  to_be_shot_after_a_fair_trial 
10 days ago
The Transmission Dynamics of Human Immunodeficiency Virus (HIV) [and Discussion] (May and Anderson, 1988)
"The paper first reviews data on HIV infections and AIDS disease among homosexual men, heterosexuals, intravenous (IV) drug abusers and children born to infected mothers, in both developed and developing countries. We survey such information as is currently available about the distribution of incubation times that elapse between HIV infection and the appearance of AIDS, about the fraction of those infected with HIV who eventually go on to develop AIDS, about time-dependent patterns of infectiousness and about distributions of rates of acquiring new sexual or needle-sharing partners. With this information, models for the transmission dynamics of HIV are developed, beginning with deliberately oversimplified models and progressing - on the basis of the understanding thus gained - to more complex ones. Where possible, estimates of the model's parameters are derived from the epidemiological data, and predictions are compared with observed trends. We also combine these epidemiological models with demographic considerations to assess the effects that heterosexually-transmitted HIV/AIDS may eventually have on rates of population growth, on age profiles and on associated economic and social indicators, in African and other countries. The degree to which sexual or other habits must change to bring the `basic reproductive rate', R0, of HIV infections below unity is discussed. We conclude by outlining some research needs, both in the refinement and development of models and in the collection of epidemiological data."

--- This is (apparently) the first paper which considered degree heterogeneity as a factor in determining the epidemic threshold in an SIR model (section 4.1), while admitting that the uncorrelated degree assumption is inaccurate (*)

*: "By assuming that partners are chosen randomly (apart from the activity levels characterized by the weighting factor $i$), we may be overestimating the contacts of less active individuals with those in more active categories, and thus overestimating the spread of infection among such less active sub-groups. Conversely, the transmission probability $\beta$ may be higher for longer-lasting partnerships (despite the data in figure 4), so that use of a constant $\beta$ may tend to underestimate the spread of infection among less active people. The net effect of these countervailing refinements is hard to guess." [pp. 583--584]
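--- The key heterogeneity result here is that the effective rate of acquiring new partners is c = m + sigma^2/m (mean plus variance-to-mean ratio), so R0 = beta * c * D grows with the *variance* of activity rates, not just the mean. A two-line check:

```python
def effective_contact_rate(rates):
    # May & Anderson's effective rate of acquiring new partners:
    # c = mean + variance/mean, so heterogeneity in activity rates
    # raises R0 above what the mean alone would predict.
    m = sum(rates) / len(rates)
    var = sum((r - m) ** 2 for r in rates) / len(rates)
    return m + var / m

# homogeneous vs. heterogeneous population with the same mean rate
print(effective_contact_rate([2, 2, 2, 2]))          # -> 2.0 (just the mean)
print(effective_contact_rate([0.5, 0.5, 0.5, 6.5]))  # -> 5.375
```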
in_NB  epidemics_on_networks  epidemic_models  epidemiology  aids  may.robert_m.  have_read 
11 days ago
[cond-mat/0205439] Epidemic threshold in structured scale-free networks
"We analyze the spreading of viruses in scale-free networks with high clustering and degree correlations, as found in the Internet graph. For the Susceptible-Infected-Susceptible model of epidemics the prevalence undergoes a phase transition at a finite threshold of the transmission probability. Comparing with the absence of a finite threshold in networks with purely random wiring, our result suggests that high clustering and degree correlations protect scale-free networks against the spreading of viruses. We introduce and verify a quantitative description of the epidemic threshold based on the connectivity of the neighborhoods of the hubs."

--- Initially, I found the focus on the average degree of a node's neighbors, their <k^nn>, very puzzling --- as a measure of how many secondary infections you could produce, this would seem to involve a lot of over-counting when the network is clustered. But looking at their figure 2 clarifies: <k^nn|k>, the average degree of neighbors conditional on ego's degree, is a _decreasing_ function of ego's degree in their model. (If ego's degree is 1 or 2, it looks like the average degree of ego's neighbors is in the 100s [!], while as ego's degree goes to infinity, <k^nn> tends to a constant _smaller_ than the average degree.) So this is a hub-and-spoke system where each hub has a huge number of ties to very low-degree nodes, but there are enough non-hubs tied to multiple hubs, or hub-hub ties, to keep things connected. And then it makes sense that the crucial step in an epidemic is what happens once a hub is infected.

ETA: In fact, Moreno and Vazquez (cond-mat/0210362) observe that the model generates, basically, a linear chain of stars (lovely phrase!), and that's where all this weird behavior comes from.

(Despite the last tag, I think this model is so anti-social that it's not worth mentioning in the paper with DA and HF, but maybe if I ever write that review...)
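--- Estimating <k^nn|k> from an edge list is straightforward; on a toy two-hub "chain of stars" the curve is sharply decreasing, as in their figure 2. A sketch:

```python
from collections import defaultdict

def knn_by_degree(edges):
    # Average degree of neighbors, conditional on ego's degree: <k^nn|k>.
    # A decreasing curve signals disassortative, hub-and-spoke structure.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {u: len(nb) for u, nb in adj.items()}
    acc = defaultdict(list)
    for u, nb in adj.items():
        acc[deg[u]].append(sum(deg[w] for w in nb) / deg[u])
    return {k: sum(vals) / len(vals) for k, vals in sorted(acc.items())}

# toy "chain of stars": two hubs joined by an edge, each with 4 leaves
edges = [("h1", "h2")]
edges += [("h1", f"a{i}") for i in range(4)]
edges += [("h2", f"b{i}") for i in range(4)]
print(knn_by_degree(edges))  # -> {1: 5.0, 5: 1.8}
```

Leaves (degree 1) sit next to a degree-5 hub, while the hubs' neighbors average well below the mean degree: exactly the decreasing <k^nn|k> pattern described above.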
in_NB  have_read  epidemics_on_networks  networks  re:do-institutions-evolve 
11 days ago
Influential node ranking via randomized spanning trees - ScienceDirect
"Networks portraying a diversity of interactions among individuals serve as the substrates(media) of information dissemination. One of the most important problems is to identify the influential nodes for the understanding and controlling of information diffusion and disease spreading. However, most existing works on identification of efficient nodes for influence minimization focused on centrality measures. In this work, we capitalize on the structural properties of a random spanning forest to identify the influential nodes. Specifically, the node importance is simply ranked by the aggregated degree of a node in the spanning forest, which reveals both local and global connection patterns. Our analysis on real networks indicates that manipulating the nodes with high aggregated degrees in the random spanning forest shows better performance in controlling spreading processes, compared to previously used importance criteria, including degree centrality, betweenness centrality, and random walk based indices, leading to less influenced population. We further show the characteristics of the proposed measure and the comparison with benchmarks."

--- Degree in a random (depth-first) spanning tree is a cute centrality measure, but it's got to be strongly related to eigenvector centrality (which they only mention in the last sentence). Last tag is because "what is this a Monte Carlo estimate of?" might make a good project...
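--- A quick way to play with the idea (a hypothetical randomized-traversal variant, not the paper's exact forest construction): average each node's degree over many random spanning trees.

```python
import random
from collections import defaultdict

def spanning_tree_degree_centrality(adj, trials=200, seed=0):
    # Rank nodes by their average degree over `trials` random spanning
    # trees, each grown by a randomized stack-based traversal (a sketch,
    # not the paper's exact random-forest construction).
    rng = random.Random(seed)
    nodes = list(adj)
    score = defaultdict(float)
    for _ in range(trials):
        root = rng.choice(nodes)
        seen, stack = {root}, [root]
        deg = defaultdict(int)
        while stack:
            u = stack.pop()
            nbrs = [w for w in adj[u] if w not in seen]
            rng.shuffle(nbrs)
            for w in nbrs:
                seen.add(w)
                deg[u] += 1  # tree edge u--w
                deg[w] += 1
                stack.append(w)
        for u, d in deg.items():
            score[u] += d / trials
    return dict(score)

# star (hub 0) plus a cycle 0-1-5-4-0: the hub should rank first
adj = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0], 3: [0], 4: [0, 5], 5: [1, 4]}
scores = spanning_tree_degree_centrality(adj)
print(max(scores, key=scores.get))
```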
in_NB  have_read  epidemics_on_networks  network_data_analysis  to_teach:baby-nets 
11 days ago
[cond-mat/0007048] Resilience of the Internet to random breakdowns
"A common property of many large networks, including the Internet, is that the connectivity of the various nodes follows a scale-free power-law distribution, P(k)=ck^-a. We study the stability of such networks with respect to crashes, such as random removal of sites. Our approach, based on percolation theory, leads to a general condition for the critical fraction of nodes, p_c, that need to be removed before the network disintegrates. We show that for a<=3 the transition never takes place, unless the network is finite. In the special case of the Internet (a=2.5), we find that it is impressively robust, where p_c is approximately 0.99."
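--- Their criterion: the giant component survives random removal of a fraction p < p_c = 1 - 1/(kappa - 1), with kappa = <k^2>/<k> computed on the original degree distribution. Heavy tails blow up <k^2> and hence p_c. A sketch:

```python
def critical_removal_fraction(degrees):
    # Molloy-Reed / Cohen et al. criterion: the giant component survives
    # random node removal up to p_c = 1 - 1/(kappa - 1), where
    # kappa = <k^2>/<k> of the original degree distribution.
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    kappa = k2 / k1
    return 1.0 - 1.0 / (kappa - 1.0)

# heavy-tailed degree sequence: a few hubs push kappa (and p_c) way up
hubs = [100] * 10
leaves = [2] * 990
print(round(critical_removal_fraction(hubs + leaves), 3))
```

For a <= 3 the relevant second moment <k^2> diverges as the network grows, so kappa (and with it p_c) runs off to 1: the "transition never takes place" in the abstract.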
in_NB  networks  have_read  re:do-institutions-evolve 
11 days ago
Immunization and epidemic dynamics in complex networks | SpringerLink
"We study the behavior of epidemic spreading in networks, and, in particular, scale free networks. We use the Susceptible-Infected-Removed (SIR) epidemiological model. We give simulation results for the dynamics of epidemic spreading. By mapping the model into a static bond-percolation model we derive analytical results for the total number of infected individuals. We study this model with various immunization strategies, including random, targeted and acquaintance immunization."
in_NB  have_read  epidemics_on_networks  re:do-institutions-evolve  re:do_not_adjust_your_receiver 
11 days ago
Unification of theoretical approaches for epidemic spreading on complex networks - IOPscience
"Models of epidemic spreading on complex networks have attracted great attention among researchers in physics, mathematics, and epidemiology due to their success in predicting and controlling scenarios of epidemic spreading in real-world scenarios. To understand the interplay between epidemic spreading and the topology of a contact network, several outstanding theoretical approaches have been developed. An accurate theoretical approach describing the spreading dynamics must take both the network topology and dynamical correlations into consideration at the expense of increasing the complexity of the equations. In this short survey we unify the most widely used theoretical approaches for epidemic spreading on complex networks in terms of increasing complexity, including the mean-field, the heterogeneous mean-field, the quench mean-field, dynamical message-passing, link percolation, and pairwise approximation. We build connections among these approaches to provide new insights into developing an accurate theoretical approach to spreading dynamics on complex networks."
to:NB  epidemics_on_networks  stanley.h._eugene  re:do-institutions-evolve 
11 days ago
Spreading dynamics in complex networks - IOPscience
"Searching for influential spreaders in complex networks is an issue of great significance for applications across various domains, ranging from epidemic control, innovation diffusion, viral marketing, and social movement to idea propagation. In this paper, we first display some of the most important theoretical models that describe spreading processes, and then discuss the problem of locating both the individual and multiple influential spreaders respectively. Recent approaches in these two topics are presented. For the identification of privileged single spreaders, we summarize several widely used centralities, such as degree, betweenness centrality, PageRank, k-shell, etc. We investigate the empirical diffusion data in a large scale online social community—LiveJournal. With this extensive dataset, we find that various measures can convey very distinct information of nodes. Of all the users in the LiveJournal social network, only a small fraction of them are involved in spreading. For the spreading processes in LiveJournal, while degree can locate nodes participating in information diffusion with higher probability, k-shell is more effective in finding nodes with a large influence. Our results should provide useful information for designing efficient spreading strategies in reality."

--- Eh, the measure of "influence" is just the size of the reachable set. (They don't actually track the dynamics of anything.)
in_NB  networks  social_influence  have_read  re:do-institutions-evolve 
11 days ago
Rumor propagation with heterogeneous transmission in social networks - IOPscience
"Rumor models consider that information transmission occurs with the same probability between each pair of nodes. However, this assumption is not observed in social networks, which contain influential spreaders. To overcome this limitation, we assume that central individuals have a higher capacity to convince their neighbors than peripheral subjects. From extensive numerical simulations we find that spreading is improved in scale-free networks when the transmission probability is proportional to the PageRank, degree, and betweenness centrality. In addition, the results suggest that spreading can be controlled by adjusting the transmission probabilities of the most central nodes. Our results provide a conceptual framework for understanding the interplay between rumor propagation and heterogeneous transmission in social networks."

--- Preferentially suppressing the infectiousness of central nodes is very effective, whether we measure centrality by betweenness, degree or pagerank (and in particular pagerank looks a bit more effective than degree, but not by much).
in_NB  epidemics_on_networks  have_read  re:do-institutions-evolve 
11 days ago
Stochastic Rumours (Daley and Kendall, 1965)
"The superficial similarity between rumours and epidemics breaks down on closer scrutiny; a feature peculiar to the rumour-spreading situation leads to striking qualitative differences in the behaviour of the two phenomena whether one uses a stochastic model or the associated deterministic model. A preliminary account is given here of a new procedure, “the principle of the diffusion of arbitrary constants”, which can be used to study the variance of the fluctuations of the sample trajectory in the stochastic model about the unique trajectory in the associated deterministic approximation. Numerical evidence (based on Monte Carlo and other calculations) is given to illustrate the effectiveness of the “principle” in the present application."

--- The difference in the model is that they assume people stop spreading the rumor on encountering someone who's already heard it (i.e., they add reactions I+I -> 2R, I+R -> 2R). This makes it really hard for the rumor to ever reach _everyone_. I am not sure that this makes sense for all rumors, but some version of "why say something everyone knows?" is sensible for a lot of cultural transmission.
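--- The stochastic model is easy to simulate, and the deterministic limit famously leaves a fraction theta of the population never hearing the rumor, where theta solves ln(theta) = 2(theta - 1), i.e. theta is about 0.203. A sketch (Maki-Thompson-style event selection, but with the Daley-Kendall stifling rules described above):

```python
import random

def daley_kendall(n=1000, seed=1):
    # X ignorants, Y spreaders, Z stiflers.  Meetings initiated by a
    # random spreader: X+Y -> 2Y, Y+Y -> 2Z, Y+Z -> initiator stifles.
    rng = random.Random(seed)
    x, y, z = n - 1, 1, 0
    while y > 0:
        r = rng.randrange(n - 1)  # the initiator's random partner
        if r < x:                 # partner is ignorant: converted
            x -= 1; y += 1
        elif r < x + y - 1:       # partner is another spreader: both stifle
            y -= 2; z += 2
        else:                     # partner is a stifler: initiator stifles
            y -= 1; z += 1
    return x / n  # fraction who never hear the rumor

runs = [daley_kendall(seed=s) for s in range(5)]
print(round(sum(runs) / len(runs), 2))
```

The average should hover near 0.2: roughly a fifth of the population never hears the rumor, in contrast to an SIR epidemic with comparable early growth.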
in_NB  epidemiology_of_representations  epidemic_models  have_read  stochastic_processes 
11 days ago
A comparative analysis of approaches to network-dismantling | Scientific Reports
"Estimating, understanding, and improving the robustness of networks has many application areas such as bioinformatics, transportation, or computational linguistics. Accordingly, with the rise of network science for modeling complex systems, many methods for robustness estimation and network dismantling have been developed and applied to real-world problems. The state-of-the-art in this field is quite fuzzy, as results are published in various domain-specific venues and using different datasets. In this study, we report, to the best of our knowledge, on the analysis of the largest benchmark regarding network dismantling. We reimplemented and compared 13 competitors on 12 types of random networks, including ER, BA, and WS, with different network generation parameters. We find that network metrics, proposed more than 20 years ago, are often non-dominating competitors, while many recently proposed techniques perform well only on specific network types. Besides the solution quality, we also investigate the execution time. Moreover, we analyze the similarity of competitors, as induced by their node rankings. We compare and validate our results on real-world networks. Our study is aimed to be a reference for selecting a network dismantling method for a given network, considering accuracy requirements and run time constraints."
in_NB  networks  have_read  re:do-institutions-evolve 
11 days ago
Phys. Rev. E 65, 056109 (2002) - Attack vulnerability of complex networks
"We study the response of complex networks subject to attacks on vertices and edges. Several existing complex network models as well as real-world networks of scientific collaborations and Internet traffic are numerically investigated, and the network performance is quantitatively measured by the average inverse geodesic length and the size of the largest connected subgraph. For each case of attacks on vertices and edges, four different attacking strategies are used: removals by the descending order of the degree and the betweenness centrality, calculated for either the initial network or the current network during the removal procedure. It is found that the removals by the recalculated degrees and betweenness centralities are often more harmful than the attack strategies based on the initial network, suggesting that the network structure changes as important vertices or edges are removed. Furthermore, the correlation between the betweenness centrality and the degree in complex networks is studied."

--- And by "have read", I mean "have memories of this paper that are themselves old enough to vote..."
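--- The "recalculated" strategy amounts to a simple greedy loop: repeatedly delete the currently highest-degree survivor and watch the largest connected subgraph shrink. A sketch:

```python
from collections import deque

def giant_component_size(adj, removed):
    # BFS over the surviving vertices; return the largest component size.
    seen = set(removed)
    best = 0
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        q, comp = deque([s]), 0
        while q:
            u = q.popleft()
            comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        best = max(best, comp)
    return best

def recalculated_degree_attack(adj, k):
    # Greedily delete the currently highest-degree surviving vertex,
    # recomputing degrees after every removal (the "recalculated"
    # strategy the paper finds more harmful than initial-network ranks).
    removed = set()
    for _ in range(k):
        deg = {u: sum(w not in removed for w in adj[u])
               for u in adj if u not in removed}
        removed.add(max(deg, key=deg.get))
    return giant_component_size(adj, removed)

# two stars joined at the hubs: A (3 leaves) -- B (2 leaves)
adj = {"A": ["B", "a1", "a2", "a3"], "B": ["A", "b1", "b2"],
       "a1": ["A"], "a2": ["A"], "a3": ["A"], "b1": ["B"], "b2": ["B"]}
print(recalculated_degree_attack(adj, 1), recalculated_degree_attack(adj, 2))  # -> 3 1
```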
in_NB  have_read  epidemics_on_networks  re:do-institutions-evolve 
11 days ago
Phys. Rev. E 84, 061911 (2011) - Suppressing epidemics with a limited amount of immunization units
"The way diseases spread through schools, epidemics through countries, and viruses through the internet is crucial in determining their risk. Although each of these threats has its own characteristics, its underlying network determines the spreading. To restrain the spreading, a widely used approach is the fragmentation of these networks through immunization, so that epidemics cannot spread. Here we develop an immunization approach based on optimizing the susceptible size, which outperforms the best known strategy based on immunizing the highest-betweenness links or nodes. We find that the network's vulnerability can be significantly reduced, demonstrating this on three different real networks: the global flight network, a school friendship network, and the internet. In all cases, we find that not only is the average infection probability significantly suppressed, but also for the most relevant case of a small and limited number of immunization units the infection probability can be reduced by up to 55% ."

--- The improvements look small (but non-spurious) for the real-world networks, they get the biggest improvement for Erdos-Renyi. This suggests to me that high betweenness centrality is pretty good after all...
in_NB  have_read  epidemics_on_networks  re:do-institutions-evolve 
11 days ago
Phys. Rev. E 87, 022813 (2013) - Vaccination intervention on epidemic dynamics in networks
"Vaccination is an important measure available for preventing or reducing the spread of infectious diseases. In this paper, an epidemic model including susceptible, infected, and imperfectly vaccinated compartments is studied on Watts-Strogatz small-world, Barabási-Albert scale-free, and random scale-free networks. The epidemic threshold and prevalence are analyzed. For small-world networks, the effective vaccination intervention is suggested and its influence on the threshold and prevalence is analyzed. For scale-free networks, the threshold is found to be strongly dependent both on the effective vaccination rate and on the connectivity distribution. Moreover, so long as vaccination is effective, it can linearly decrease the epidemic prevalence in small-world networks, whereas for scale-free networks it acts exponentially. These results can help in adopting pragmatic treatment upon diseases in structured populations."

Ungated: http://arxiv.org/abs/1302.5979

--- "Vaccination" modeled as transitioning randomly from a high susceptibility state (ordinary S) to a low-susceptibility state (V) and back again.
in_NB  epidemics_on_networks  have_skimmed  re:do-institutions-evolve 
11 days ago
[1012.1974] Epidemic prediction and control in clustered populations
"There has been much recent interest in modelling epidemics on networks, particularly in the presence of substantial clustering. Here, we develop pairwise methods to answer questions that are often addressed using epidemic models, in particular: on the basis of potential observations early in an outbreak, what can be predicted about the epidemic outcomes and the levels of intervention necessary to control the epidemic? We find that while some results are independent of the level of clustering (early growth predicts the level of `leaky' vaccine needed for control and peak time, while the basic reproductive ratio predicts the random vaccination threshold) the relationship between other quantities is very sensitive to clustering."

--- Unfortunately assumes a constant (!) degree distribution, so as not to confound effects of clustering with effects of degree heterogeneity. I get why, but not useful for my purposes.
in_NB  epidemics_on_networks  have_skimmed 
11 days ago
The Curious Case of America's Suicide Crisis
More willingness on the part of coroners/medical examiners to record deaths as suicides? (Hard to explain the steady rise that way, though, and I have trouble imagining they used to record _suffocations_ as accidents to be kind to families.)
suicide  demography  our_decrepit_institutions  puzzles  via:gabriel_rossman  sociology 
11 days ago
University Press of Colorado - The Kiss of Death: Contagion, Contamination, and Folklore
"Disease is a social issue, not just a medical issue. Using examples of specific legends and rumors, The Kiss of Death explores the beliefs and practices that permeate notions of contagion and contamination. Author Andrea Kitta offers new insight into the nature of vernacular conceptions of health and sickness and how medical and scientific institutions can use cultural literacy to better meet their communities’ needs.
"Using ethnographic, media, and narrative analysis, this book explores the vernacular explanatory models used in decisions concerning contagion to better understand the real fears, risks, concerns, and doubts of the public. Kitta explores immigration and patient zero, zombies and vampires, Slender Man, HPV, and the kiss of death legend, as well as systematic racism, homophobia, and misogyny in North American culture, to examine the nature of contagion and contamination.
"Conversations about health and risk cannot take place without considering positionality and intersectionality. In The Kiss of Death, Kitta isolates areas that require better communication and greater cultural sensitivity in the handling of infectious disease, public health, and other health-related disciplines and industries."

(Found while double-checking bibliographic details on I. Kiss et al.'s monograph about epidemic models on networks...)
in_NB  books:noted  plagues_and_peoples  mythology  folklore  contagion 
11 days ago
Anne Thériault: May I Suggest an Alternative to Valentine's Day?
"No commercialism. No heartbreak. No expensive restaurants where you’ll probably embarrass yourself. Just men running naked around a hill while women watch, as the gods and goddesses intended."
funny:geeky  practices_relating_to_the_transmission_of_genetic_information 
12 days ago
Measuring Culture | Columbia University Press
"Social scientists seek to develop systematic ways to understand how people make meaning and how the meanings they make shape them and the world in which they live. But how do we measure such processes? Measuring Culture is an essential point of entry for both those new to the field and those who are deeply immersed in the measurement of meaning. Written collectively by a team of leading qualitative and quantitative sociologists of culture, the book considers three common subjects of measurement—people, objects, and relationships—and then discusses how to pivot effectively between subjects and methods. Measuring Culture takes the reader on a tour of the state of the art in measuring meaning, from discussions of neuroscience to computational social science. It provides both the definitive introduction to the sociological literature on culture as well as a critical set of case studies for methods courses across the social sciences."
to:NB  books:noted  social_science_methodology  social_measurement  to_teach:statistics_of_inequality_and_discrimination 
12 days ago
People Are Less Gullible Than You Think – Reason.com
"We aren't gullible: By default we veer on the side of being resistant to new ideas. In the absence of the right cues, we reject messages that don't fit with our preconceived views or pre-existing plans. To persuade us otherwise takes long-established, carefully maintained trust, clearly demonstrated expertise, and sound arguments. Science, the media, and other institutions that spread accurate but often counterintuitive messages face an uphill battle, as they must transmit these messages and keep them credible along great chains of trust and argumentation. Quasi-miraculously, these chains connect us to the latest scientific discoveries and to events on the other side of the planet. We can only hope for new means of extending and strengthening these ever-fragile links."
have_read  mercier.hugo  cognition  collective_cognition  persuasion  via:? 
14 days ago
Thinking clearly about causal inferences of politically motivated reasoning: why paradigmatic study designs often undermine causal inference - ScienceDirect
"A common inference in behavioral science is that people’s motivation to reach a politically congenial conclusion causally affects their reasoning—known as politically motivated reasoning. Often these inferences are made on the basis of data from randomized experiments that use one of two paradigmatic designs: Outcome Switching, in which identical methods are described as reaching politically congenial versus uncongenial conclusions; or Party Cues, in which identical information is described as being endorsed by politically congenial versus uncongenial sources. Here we argue that these designs often undermine causal inferences of politically motivated reasoning because treatment assignment violates the excludability assumption. Specifically, assignment to treatment alters variables alongside political motivation that affect reasoning outcomes, rendering the designs confounded. We conclude that distinguishing politically motivated reasoning from these confounds is important both for scientific understanding and for developing effective interventions; and we highlight those designs better placed to causally identify politically motivated reasoning."
to:NB  political_science  cognition  heuristics  causal_inference  experimental_psychology  to_read  via:? 
14 days ago
After Carbon Democracy | Dissent Magazine
"Capitalism is at the heart of the climate challenge."
No, no, no.
(1) Look at the environmental record of the USSR, or of pre-Deng China. Soviet Earth would be facing ~ as big a climate crisis as Neoliberal Earth (only with Comrade Mann in the role of Sakharov at best).
(2) Maintaining our _current_-sized economies _without current technologies_ would get us cooked, so it's not _economic growth_ that's the problem.

Purdy knows better.
climate_change  environmentalism  progressive_forces  have_read  honestly_disappointed 
14 days ago
PsyArXiv Preprints | Collective Problem-Solving of Groups Across Tasks of Varying Complexity
"As organizations gravitate to group-based structures, the problem of improving performance through judicious selection of group members has preoccupied scientists and managers alike. However, it remains poorly understood under what conditions groups outperform comparable individuals, which individual attributes best predict group performance, or how task complexity mediates these relationships. Here we describe a novel two-phase experiment in which individuals were evaluated on a series of tasks of varying complexity; then randomly assigned to solve similar tasks either in groups of different compositions or as individuals. We describe two main sets of findings. First, while groups are more efficient than individuals and comparable “nominal group” when the task is complex, this relationship is reversed when the task is simple. Second, we find that average skill level dominates all other factors combined, including social perceptiveness, skill diversity, and diversity of cognitive style. Our findings illustrate the utility of a “solution-oriented” approach to identifying principles of collective performance."
to:NB  problem_solving  experimental_psychology  experimental_sociology  collective_cognition  watts.duncan  re:democratic_cognition  to_read 
14 days ago
[1706.01418] Learning Whenever Learning is Possible: Universal Learning under General Stochastic Processes
"This work initiates a general study of learning and generalization without the i.i.d. assumption, starting from first principles. While the standard approach to statistical learning theory is based on assumptions chosen largely for their convenience (e.g., i.i.d. or stationary ergodic), in this work we are interested in developing a theory of learning based only on the most fundamental and natural assumptions implicit in the requirements of the learning problem itself. We specifically study universally consistent function learning, where the objective is to obtain low long-run average loss for any target function, when the data follow a given stochastic process. We are then interested in the question of whether there exist learning rules guaranteed to be universally consistent given only the assumption that universally consistent learning is possible for the given data process. The reasoning that motivates this criterion emanates from a kind of optimist's decision theory, and so we refer to such learning rules as being optimistically universal. We study this question in three natural learning settings: inductive, self-adaptive, and online. Remarkably, as our strongest positive result, we find that optimistically universal learning rules do indeed exist in the self-adaptive learning setting. Establishing this fact requires us to develop new approaches to the design of learning algorithms. Along the way, we also identify concise characterizations of the family of processes under which universally consistent learning is possible in the inductive and self-adaptive settings. We additionally pose a number of enticing open problems, particularly for the online learning setting."
to:NB  learning_theory  hanneke.steve  now_there_is_a_name_i_havent_heard_in_a_long_time  via:rvenkat 
14 days ago
Income Inequality: Economic Disparities and the Middle Class in Affluent Countries | Edited by Janet C. Gornick and Markus Jäntti
"This state-of-the-art volume presents comparative, empirical research on a topic that has long preoccupied scholars, politicians, and everyday citizens: economic inequality. While income and wealth inequality across all populations is the primary focus, the contributions to this book pay special attention to the middle class, a segment often not addressed in inequality literature.
"Written by leading scholars in the field of economic inequality, all 17 chapters draw on microdata from the databases of LIS, an esteemed cross-national data center based in Luxembourg. Using LIS data to structure a comparative approach, the contributors paint a complex portrait of inequality across affluent countries at the beginning of the 21st century. The volume also trail-blazes new research into inequality in countries newly entering the LIS databases, including Japan, Iceland, India, and South Africa."
to:NB  books:noted  inequality  economics  sociology  to_teach:statistics_of_inequality_and_discrimination 
16 days ago
Education and Intergenerational Social Mobility in Europe and the United States | Edited by Richard Breen and Walter Müller
"This volume examines the role of education in shaping rates and patterns of intergenerational social mobility among men and women during the twentieth century. Focusing on the relationship between a person's social class and the social class of his or her parents, each chapter looks at a different country—the United States, Sweden, Germany, France, the Netherlands, Italy, Spain, and Switzerland. Contributors examine change in absolute and relative mobility and in education across birth cohorts born between the first decade of the twentieth century and the early 1970s. They find a striking similarity in trends across all countries, and in particular a contrast between the fortunes of people born before the 1950s, those who enjoyed increasing rates of upward mobility and a decline in the strength of the link between class origins and destinations, and later generations who experienced more downward mobility and little change in how origins and destinations are linked. This volume uncovers the factors that drove these shifts, revealing education as significant in promoting social openness. It will be an invaluable source for anyone who wants to understand the evolution of mobility and inequality in the contemporary world."
to:NB  books:noted  inequality  transmission_of_inequality  education  economics  sociology  to_teach:statistics_of_inequality_and_discrimination 
16 days ago
Should We Trust Algorithms? · Harvard Data Science Review
"There is increasing use of algorithms in the health care and criminal justice systems, and corresponding increased concern with their ethical use. But perhaps a more basic issue is whether we should believe what we hear about them and what the algorithm tells us. It is illuminating to distinguish between the trustworthiness of claims made about an algorithm, and those made by an algorithm, which reveals the potential contribution of statistical science to both evaluation and ‘intelligent transparency.’ In particular, a four-phase evaluation structure is proposed, parallel to that adopted for pharmaceuticals."
to:NB  algorithmic_fairness  statistics  data_mining  spiegelhalter.david  to_teach:data-mining  to_teach:statistics_of_inequality_and_discrimination 
16 days ago
Uses and Abuses of Ideology in Political Psychology - Kalmoe - - Political Psychology - Wiley Online Library
"Ideology is a central construct in political psychology. Even so, the field's strong claims about an ideological public rarely engage evidence of enormous individual differences: a minority with real ideological coherence and weak to nonexistent political belief organization for everyone else. Here, I bridge disciplinary gaps by showing the limits of mass political ideology with several popular measures and components—self‐identification, core political values (egalitarian and traditionalism's resistance to change), and policy indices—in representative U.S. surveys across four decades (Ns ~ 13 k–37 k), plus panel data testing stability. Results show polar, coherent, stable, and potent ideological orientations only among the most knowledgeable 20–30% of citizens. That heterogeneity means full‐sample tests overstate ideology for most people but understate it for knowledgeable citizens. Whether through top‐down opinion leadership or bottom‐up ideological reasoning, organized political belief systems require political attention and understanding to form. Finally, I show that convenience samples make trouble for ideology generalizations. I conclude by proposing analytic best practices to help avoid overclaiming ideology in the public. Taken together, what first looks like strong and broad ideology is actually ideological innocence for most and meaningful ideology for a few."
to:NB  ideology  surveys  us_politics  political_science  social_measurement  public_opinion  re:democratic_cognition 
16 days ago
There Really Was A Liberal Media Bubble | FiveThirtyEight
This is pretty good, but really makes me wish that someone other than Surowiecki would write accessibly about collective cognition. (His account makes it a mystery how communication could ever lead to _better_ ideas.)
why_oh_why_cant_we_have_a_better_press_corps  silver.nate  social_life_of_the_mind 
16 days ago
The Fall of Rome by W. H. Auden - Poems | Academy of American Poets
"The piers are pummelled by the waves;
In a lonely field the rain
Lashes an abandoned train;
Outlaws fill the mountain caves.

"Fantastic grow the evening gowns;
Agents of the Fisc pursue
Absconding tax-defaulters through
The sewers of provincial towns.

"Private rites of magic send
The temple prostitutes to sleep;
All the literati keep
An imaginary friend.

"Cerebrotonic Cato may
Extol the Ancient Disciplines,
But the muscle-bound Marines
Mutiny for food and pay.

"Caesar's double-bed is warm
As an unimportant clerk
On a pink official form.

"Unendowed with wealth or pity,
Little birds with scarlet legs,
Sitting on their speckled eggs,
Eye each flu-infected city.

"Altogether elsewhere, vast
Herds of reindeer move across
Miles and miles of golden moss,
Silently and very fast."
poetry  auden.w.h.  our_decrepit_institutions  they_were_never_wrong_the_old_masters  via:henry_farrell 
19 days ago
Planning and Anarchy | South Atlantic Quarterly | Duke University Press
"Debates about planning on both the left and the right tend to misconstrue as problems of calculation what are, in fact, problems of control. The new powers of computing and communication developed in the twenty-first century do not, therefore, render the central planning of the twentieth century newly feasible. Rather, they merely make it possible to see as properly political problems that were once thought to be entirely technical. Restoring missing historical context to the socialist calculation debate, “Planning and Anarchy” discloses the blind spots in contemporary discussions of planning and offers an alternate vision of emancipation and planning, no longer dependent upon the tools of coercion inherited from capitalism."
to:NB  to_read  to_be_shot_after_a_fair_trial  re:in_soviet_union_optimization_problem_solves_you 
19 days ago
[1212.5608] A Theological Argument for an Everett Multiverse
"Science looks for the simplest hypotheses to explain observations. Starting with the simple assumption that _the actual world is the best possible world_, I sketch an _Optimal Argument for the Existence of God_, that the sufferings in our universe would not be consistent with its being alone the best possible world, but the total world could be the best possible if it includes an omnipotent, omniscient, omnibenevolent God who experiences great value in creating and knowing a universe with great mathematical elegance, even though such a universe has suffering.
"God seems loathe to violate elegant laws of physics that He has chosen to use in His creation, such as Maxwell's equations for electromagnetism or Einstein's equations of general relativity for gravity within their classical domains of applicability, even if their violation could greatly reduce human suffering (e.g., from falls). If indeed God is similarly loathe to violate quantum unitarity (though such violations by judicious collapses of the wavefunction could greatly reduce human suffering by always choosing only favorable outcomes), the resulting unitary evolution would lead to an Everett multiverse of `many worlds', meaning many different quasiclassical histories beyond the quasiclassical history that each of us can observe over his or her lifetime. This is a theological argument for one reason why God might prefer to create a multiverse much broader than what one normally thinks of for a history of the universe."
physics  quantum_mechanics  theology  utter_stupidity  psychoceramica  jesus_loves_you_but_he_loves_unitary_time_evolution_more  via:mejn  via_is_indirect 
22 days ago
Do Police Brutality Stories Reduce 911 Calls? Reassessing an Important Criminological Finding - Michael Zoorob,
"This comment reassesses the prominent claim from Desmond, Papachristos, and Kirk (2016) (DPK) that 911 calls plummeted—and homicides surged—because of a police brutality story in Milwaukee (the Jude story). The results in DPK depend on a substantial outlier 47 weeks after the Jude story, the final week of data. Identical analyses without the outlier final week show that the Jude story had no statistically significant effect on either total 911 calls or violent crime 911 calls. Modeling choices that do not extrapolate from data many weeks after the Jude story—including an event study and “regression discontinuity in time”—also find no evidence that calls declined, a consistent result across predominantly black neighborhoods, predominantly white neighborhoods, and citywide. Finally, plotting the raw data demonstrates stable 911 calls in the weeks around the Jude story. Overall, the existing empirical evidence does not support the theory that publishing brutality stories decreases crime reporting and increases murders."
to:NB  causal_inference  to_read  statistics  police  crime  via:kjhealy  via:rvenkat 
23 days ago
[1902.10288] Clustering, factor discovery and optimal transport
"The clustering problem, and more generally, latent factor discovery --or latent space inference-- is formulated in terms of the Wasserstein barycenter problem from optimal transport. The objective proposed is the maximization of the variability attributable to class, further characterized as the minimization of the variance of the Wasserstein barycenter. Existing theory, which constrains the transport maps to rigid translations, is extended to affine transformations. The resulting non-parametric clustering algorithms include k-means as a special case and exhibit more robust performance. A continuous version of these algorithms discovers continuous latent variables and generalizes principal curves. The strength of these algorithms is demonstrated by tests on both artificial and real-world data sets."
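--- The k-means special case is easy to see in code. A minimal sketch (my construction, not the paper's algorithm): when each class's transport map is restricted to a rigid translation onto the class mean, minimizing the variance around the class barycenters is exactly the Lloyd's-algorithm objective.

```python
def kmeans(points, k, iters=20):
    """Lloyd's k-means: the special case of barycenter-style clustering in
    which each class's transport map is a rigid translation to the class
    mean, so minimizing within-class variance maximizes the variability
    attributable to the class labels."""
    centers = list(points[:k])  # naive deterministic initialization
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center in squared Euclidean distance
        labels = [min(range(k),
                      key=lambda j: sum((p - c) ** 2
                                        for p, c in zip(pt, centers[j])))
                  for pt in points]
        # update step: move each center to the mean of its class
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(coord) / len(members)
                                   for coord in zip(*members))
    return labels, centers

pts = [(0.0, 0.0), (5.0, 5.0), (0.1, -0.1), (5.1, 4.9)]
labels, centers = kmeans(pts, 2)
```

The paper's contribution is relaxing the translation restriction to affine maps (and beyond); the above is only the baseline it generalizes.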
to:NB  clustering  factor_analysis  inference_to_latent_objects  statistics 
27 days ago
The Engineering Tools that Shaped the Rational Expectations Revolution by Marcel J. Boumans :: SSRN
"The rational expectations revolution was not only based on the introduction of Muth’s idea of rational expectations to macroeconomics; the introduction of Muth’s hypothesis cannot explain the more drastic change of the mathematical toolbox and concepts, research strategies, vocabulary, and questions since the 1980s. The main claim is that the shift from “Keynesian economics” to “new classical economics” is based on a shift from a control engineering approach to an information engineering methodology. The paper even shows that the “revolution” was more radical. The change of engineering tools has changed macroeconomics more deeply, not only its methodology but also its epistemology and ontology."
to:NB  economics  history_of_economics  optimization 
27 days ago
[1912.03800] Sequential Estimation of Network Cascades
"We consider the problem of locating the source of a network cascade, given a noisy time-series of network data. We assume that at time zero, the cascade starts with one unknown vertex and spreads deterministically at each time step. The goal is to find a sequential estimation procedure for the source that outputs an estimate for the cascade source as fast as possible, subject to a bound on the estimation error. For general graphs that satisfy a symmetry property, we show that matrix sequential probability ratio tests (MSPRTs) are first-order asymptotically optimal up to a constant factor as the estimation error tends to zero. We apply our results to lattices and regular trees, and show that MSPRTs are asymptotically optimal for regular trees. We support our theoretical results with simulations."
to:NB  network_data_analysis  contagion  statistics 
27 days ago
[1605.04565] Hierarchical Models for Independence Structures of Networks
"We introduce a new family of network models, called hierarchical network models, that allow us to represent in an explicit manner the stochastic dependence among the dyads (random ties) of the network. In particular, each member of this family can be associated with a graphical model defining conditional independence clauses among the dyads of the network, called the dependency graph. Every network model with dyadic independence assumption can be generalized to construct members of this new family. Using this new framework, we generalize the Erdös-Rényi and beta-models to create hierarchical Erdös-Rényi and beta-models. We describe various methods for parameter estimation as well as simulation studies for models with sparse dependency graphs."
to:NB  statistics  network_data_analysis  rinaldo.alessandro  kith_and_kin 
27 days ago
[1911.12198] Strong structure recovery for partially observed discrete Markov random fields on graphs
"We propose a penalized maximum likelihood criterion to estimate the graph of conditional dependencies in a discrete Markov random field, that can be partially observed. We prove the almost sure convergence of the estimator in the case of a finite or countable infinite set of variables. In the finite case, the underlying graph can be recovered with probability one, while in the countable infinite case we can recover any finite subgraph with probability one, by allowing the candidate neighborhoods to grow with the sample size n. Our method requires minimal assumptions on the probability distribution and contrary to other approaches in the literature, the usual positivity condition is not needed."
to:NB  random_fields  markov_models  statistics 
27 days ago
[2001.03039] Minimax Optimal Conditional Independence Testing
"We consider the problem of conditional independence testing of X and Y given Z where X,Y and Z are three real random variables and Z is continuous. We focus on two main cases -- when X and Y are both discrete, and when X and Y are both continuous. In view of recent results on conditional independence testing (Shah and Peters 2018), one cannot hope to design non-trivial tests, which control the type I error for all absolutely continuous conditionally independent distributions, while still ensuring power against interesting alternatives. Consequently, we identify various, natural smoothness assumptions on the conditional distributions of X,Y|Z=z as z varies in the support of Z, and study the hardness of conditional independence testing under these smoothness assumptions. We derive matching lower and upper bounds on the critical radius of separation between the null and alternative hypotheses in the total variation metric. The tests we consider are easily implementable and rely on binning the support of the continuous variable Z. To complement these results, we provide a new proof of the hardness result of Shah and Peters and show that in the absence of smoothness assumptions conditional independence testing remains difficult even when X,Y are discrete variables of finite (and not scaling with the sample-size) support."
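--- A toy version of the binning idea (my sketch, not the paper's test; the bin count, permutation calibration, and data below are all illustrative): slice the support of the continuous Z into equal-count bins, compute a chi-square-type independence statistic for X vs. Y within each bin, pool across bins, and calibrate by permuting Y within bins.

```python
import random

def binned_ci_stat(x, y, z, n_bins):
    """Chi-square-type statistic for X independent of Y given Z, pooled
    over equal-count bins of the continuous conditioning variable Z."""
    order = sorted(range(len(z)), key=lambda i: z[i])
    size = len(z) // n_bins
    stat = 0.0
    for b in range(n_bins):
        idx = order[b * size:] if b == n_bins - 1 else order[b * size:(b + 1) * size]
        xs = [x[i] for i in idx]
        ys = [y[i] for i in idx]
        n = len(idx)
        for xv in set(xs):
            for yv in set(ys):
                obs = sum(1 for a, c in zip(xs, ys) if a == xv and c == yv)
                exp = xs.count(xv) * ys.count(yv) / n
                stat += (obs - exp) ** 2 / exp
    return stat

def permutation_pvalue(x, y, z, n_bins=5, n_perm=99, seed=0):
    """Calibrate by shuffling Y *within* Z-bins, which preserves the
    Y|Z distribution while breaking any X-Y link given Z."""
    rng = random.Random(seed)
    observed = binned_ci_stat(x, y, z, n_bins)
    order = sorted(range(len(z)), key=lambda i: z[i])
    size = len(z) // n_bins
    exceed = 0
    for _ in range(n_perm):
        yp = list(y)
        for b in range(n_bins):
            idx = order[b * size:] if b == n_bins - 1 else order[b * size:(b + 1) * size]
            vals = [yp[i] for i in idx]
            rng.shuffle(vals)
            for i, v in zip(idx, vals):
                yp[i] = v
        if binned_ci_stat(x, yp, z, n_bins) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

rng = random.Random(1)
z = [rng.random() for _ in range(300)]
x = [int(rng.random() < zi) for zi in z]     # X depends on Z
y_ci = [int(rng.random() < zi) for zi in z]  # Y depends on Z, CI of X given Z
y_dep = list(x)                              # Y tracks X: conditional dependence
p_ci = permutation_pvalue(x, y_ci, z)
p_dep = permutation_pvalue(x, y_dep, z)
```

The paper's point is about how the bin width must shrink with sample size under smoothness assumptions to get minimax-optimal separation rates; the sketch only shows the binning mechanics.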
to:NB  kith_and_kin  dependence_measures  hypothesis_testing  statistics  wasserman.larry  balakrishnan.siva  neykov.matey 
27 days ago
PsyArXiv Preprints | Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them
"In this paper we define questionable measurement practices (QMPs) as undisclosed decisions researchers make that leave questions about the measurements in a study unanswered. This makes it impossible to evaluate a wide range of potential validity threats to the conclusions of a study. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies."
to:NB  psychology  psychometrics  measurement  social_measurement  to_teach:statistics_of_inequality_and_discrimination 
27 days ago
Decision Makers · Chris Hayes
16 years later, the bellicose isolationists are in the saddle and ride mankind.
us_politics  democracy  hayes.chris  re:democratic_cognition 
29 days ago
[2001.06974] Investigation of Patient-sharing Networks Using a Bayesian Network Model Selection Approach for Congruence Class Models
"A Bayesian approach to conduct network model selection is presented for a general class of network models referred to as the congruence class models (CCMs). CCMs form a broad class that includes as special cases several common network models, such as the Erdős-Rényi-Gilbert model, stochastic block model and many exponential random graph models. Due to the range of models able to be specified as a CCM, investigators are better able to select a model consistent with generative mechanisms associated with the observed network compared to current approaches. In addition, the approach allows for incorporation of prior information. We utilize the proposed Bayesian network model selection approach for CCMs to investigate several mechanisms that may be responsible for the structure of patient-sharing networks, which are associated with the cost and quality of medical care. We found evidence in support of heterogeneity in sociality but not selective mixing by provider type nor degree."

--- Assumes a finite set of sufficient statistics (which defines the "congruence class", i.e., all networks with the same value of the statistics), but not an exponential form. Of course from Lauritzen's general results we know that sufficiency implies either an exponential form or a generalization thereof...
to:NB  exponential_family_random_graphs  network_data_analysis  statistics  model_selection 
4 weeks ago
Reconstructing science networks from the past | Journal of Historical Network Research
"Reconstructing scientific networks from the past can be a difficult process. In this paper, we argue that eponyms are a promising way to explore historic relationships between natural scientists using taxonomy. Our empirical case is the emerging community of malacologists in the 19th century. Along the lines of pivotal concepts of social network analysis we interpret eponyms as immaterial goods that resemble the properties of regular social contacts. Utilising Exponential Random Graph Models reveals that the social exchange underlying eponyms follows similar rules as other social relationships such as friendships or collaborations. It is generally characterized by network endogenous structures and homophily. Interestingly, the productivity of authors seems to be well recognised among contemporary researchers and increases the probability of a tie within the network significantly. In addition, we observe an epistemological divide in the malacological research community. Thus even in the 19th century, at a time when science was just emerging as a differentiated social system, epistemological distinctions have been a defining concept for scientific contacts."
to:NB  exponential_family_random_graphs  sociology_of_science  history_of_science  social_networks  to_read  to_teach:baby-nets 
4 weeks ago
Generalized Network Psychometrics: Combining Network and Latent Variable Models | SpringerLink
"We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as parts of the more general framework of structural equation modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework latent network modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance–covariance structure of indicators is modeled as a network. We term this generalization residual network modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset."
to:NB  factor_analysis  graphical_models  borsboom.denny  psychometrics  inference_to_latent_objects  statistics  to_read  re:g_paper  re:major_depression_qu'est-ce_que_c'est 
4 weeks ago
Hollywood’s Next Great Studio Head Will Be a Computer
Evidence that data-mining social media is actually better at prediction than 1930s-vintage audience research is conspicuously absent from this.
Also, it misses the equilibrium point: suppose data-analytics firm X can improve predictions about how popular a film will be, and this would be worth $Y to a studio. A risk-neutral studio will pay up to $Y-\epsilon for this information, and so be no better off. (And, of course, predictions are _also_ an experience good.)
movies  marketing  data_mining  have_read  to_be_shot_after_a_fair_trial 
4 weeks ago
Do Coercive Reeducation Technologies Actually Work? – BLARB
--- The "technology" aspect here is, if you'll forgive the expression, a red
herring. It's at most just about _finding_ people; the re-education is, as described, Good Old Fashioned Communism. Once you explained that a VPN was a way of corresponding with people in the imperialist camp, there is literally nothing here that F. E. Dzerzhinsky didn't understand (or do; cf. "decossackization").
xinjiang  china:prc  surveillance 
4 weeks ago
Reasoning From Unfamiliar Premises: A Study With Unschooled Adults - Maria Dias, Antonio Roazzi, Paul L. Harris, 2005
"A long tradition of research initiated by Luria in the 1930s has established that unschooled adults perform poorly on reasoning tasks. Particularly when the premises are unfamiliar, they adopt an inappropriate empirical bias. However, recent findings show that young children with little or no schooling reason competently if prompted to think of the unfamiliar premises as pertaining to a distant planet. We tested two groups of adults: illiterate, unschooled adults and adults with limited schooling. Both groups received problems that included either a premise with unknown content or a premise contradicting their everyday experience. When given a minimal prompt, both groups manifested the customary empirical bias. By contrast, when explicitly prompted to think of the unfamiliar premises as pertaining to a distant planet, they reasoned accurately and appropriately justified their conclusions in terms of the supplied premises."
to:NB  psychology  cognitive_science  luria.a.r. 
4 weeks ago
