models-and-modes   130


The Only Woman to Win the Nobel Prize in Economics Also Debunked the Orthodoxy - Evonomics
I mention Lloyd’s essay to illustrate how ridiculous yet persistent the misconceptions about the “tragedy” dynamic truly are. Commons scholar Lewis Hyde dryly notes, “Just as Hardin proposes a herdsman whose reason is unable to encompass the common good, so Lloyd supposes persons who have no way to speak with each other or make joint decisions. Both writers inject laissez-faire individualism into an old agrarian village and then gravely announce that the commons is dead. From the point of view of such a village, Lloyd’s assumptions are as crazy as asking us to ‘suppose a man to have a purse to which his left and right hand may freely resort, each unaware of the other’.”

This absurdity, unfortunately, is the basis for a large literature of “prisoner’s dilemma” experiments that purport to show how “rational individuals” behave when confronted with “social dilemmas,” such as how to allocate a limited resource. Should the “prisoner” cooperate with other potential claimants and share the limited rewards? Or should they defect, grabbing as much for themselves as possible?
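
commentary: a minimal sketch (mine, not the article's; the payoff numbers are arbitrary illustrative assumptions) of the one-shot payoff logic these experiments build on. Defection dominates even though mutual cooperation beats mutual defection, and the "tragedy" only bites if the players genuinely cannot talk or bind themselves, which is Hyde's point:

    # (my_move, their_move) -> my_payoff; values assumed for illustration
    PAYOFF = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def best_response(their_move):
        # a "rational individual" maximizes own payoff, whatever the other does
        return max("CD", key=lambda me: PAYOFF[(me, their_move)])

    assert best_response("C") == "D" and best_response("D") == "D"
    # ...yet PAYOFF[("C", "C")] > PAYOFF[("D", "D")]: the dilemma in miniature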
economics  ideology  public-policy  models-and-modes  commons  to-write-about  theory-and-practice-sitting-in-a-tree  libertarianism  assumptions 
4 weeks ago by Vaguery
Props in Network Theory | Azimuth
We start with circuits made solely of ideal perfectly conductive wires. Then we consider circuits with passive linear components like resistors, capacitors and inductors. Finally we turn on the power and consider circuits that also have voltage and current sources.

And here’s the cool part: each kind of circuit corresponds to a prop that pure mathematicians would eventually invent on their own! So, what’s good for engineers is often mathematically natural too.

commentary: while abstract, it might be worth trying to understand this stuff
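
commentary, continued: a toy sketch of the compositional algebra the prop organizes, restricted to one-port passive linear components (this is my own illustration, not Baez's formalism): impedances as functions of the complex frequency s, combined in series and parallel.

    def resistor(R):  return lambda s: R
    def inductor(L):  return lambda s: L * s
    def capacitor(C): return lambda s: 1 / (C * s)

    def series(Z1, Z2):   return lambda s: Z1(s) + Z2(s)
    def parallel(Z1, Z2): return lambda s: 1 / (1 / Z1(s) + 1 / Z2(s))

    # impedance of an R-(L parallel C) circuit evaluated at s = 2j:
    Z = series(resistor(100), parallel(inductor(0.5), capacitor(1e-3)))
    print(Z(2j))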
network-theory  abstraction  rather-interesting  models-and-modes  circles-and-arrows  bond-diagrams  to-write-about  to-understand  functional-programming  category-theory  via:Vaguery 
7 weeks ago by WMTrenfield
The meaning of model equivalence: Network models, latent variables, and the theoretical space in between | Psych Networks
Recently, an important set of equivalent representations of the Ising model was published by Joost Kruis and Gunter Maris in Scientific Reports. The paper constructs elegant representations of the Ising model probability distribution in terms of a network model (which consists of direct relations between observables), a latent variable model (which consists of relations between a latent variable and observables, in which the latent variable acts as a common cause), and a common effect model (which also consists of relations between a latent variable and observables, but here the latent variable acts as a common effect). The latter equivalence is a novel contribution to the literature and a quite surprising finding, because it means that a formative model can be statistically equivalent to a reflective model, which one may not immediately expect (do note that this equivalence need not maintain dimensionality, so a model with a single common effect may translate into a higher-dimensional latent variable model).

However, the equivalence between ordinary (reflective) latent variable models and network models has been with us for a long time, and I therefore was rather surprised at some people’s reaction to the paper and the blog post that accompanies it. Namely, it appears that some think (a) that the fact that network structures can mimic reflective latent variables and vice versa is a recent discovery, and (b) that this somehow spells trouble for the network approach itself (because, well, what’s the difference?). The first of these claims is sufficiently wrong to be worth the trouble of refuting, if only to set the historical record straight; the second is sufficiently interesting to investigate a little more deeply. Hence the following notes.
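
commentary: the object all three representations parameterize is the same Ising distribution over binary variables; in the standard notation of this literature (thresholds τ, pairwise interactions ω),

    p(\mathbf{x}) = \frac{1}{Z} \exp\Big( \sum_i \tau_i x_i + \sum_{i<j} \omega_{ij} x_i x_j \Big),
    \qquad x_i \in \{-1, +1\},

where Z normalizes over all 2^n spin configurations. The equivalences trade the direct interactions ω for latent common causes or common effects without changing p at all.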
dynamical-systems  models  models-and-modes  representation  philosophy-of-science  (in-practice)  to-write-about  via:several 
11 weeks ago by Vaguery
Church vs Curry Types - LispCast
My ambitious hope is that this perspective will quiet a lot of the fighting as people recognize that they are just perpetuating a rift in the field of mathematics that happened a long time ago. The perspectives are irreconcilable now, but that could change. A paper called Church and Curry: Combining Intrinsic and Extrinsic Typing builds a language with both kinds of types. And Gradual Typing and Blame Calculus are investigating the intersection of static and dynamic typing. Let’s stop fighting, make some cool tools and use them well.
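
commentary: a minimal sketch (mine, not from the post) of the two stances, using a toy expression language of ints, bools, and addition:

    def infer(term):
        # Curry-style (extrinsic): terms exist untyped; typing is a
        # judgment we attempt afterwards, and it may simply fail.
        if isinstance(term, bool): return "Bool"
        if isinstance(term, int):  return "Int"
        if isinstance(term, tuple) and term[0] == "add":
            if infer(term[1]) == "Int" and infer(term[2]) == "Int":
                return "Int"
        return None  # untypable, but still a perfectly good term

    def check(term, annotation):
        # Church-style (intrinsic): the annotation is part of the term;
        # a term that fails its annotation is not well-formed at all.
        if infer(term) != annotation:
            raise TypeError("not a well-formed term")
        return annotation

    print(infer(("add", 1, True)))   # None: Curry shrugs, the term survives
    check(("add", 1, 2), "Int")      # Church: meaningful only with its type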
type-theory  computer-science  models-and-modes  dichotomy-or-not?  to-write-about 
march 2018 by Vaguery
Opinion | Corporate America Is Suppressing Wages for Many Workers - The New York Times
For a long time, economists believed that labor-market monopsony rarely existed, at least outside old-fashioned company towns where a single factory employs most of the residents. But in recent decades, several compelling studies have revealed that monopsony is omnipresent. Professionals like doctors and nurses, workers in factories and meat processing plants, and sandwich makers and other low-skill workers earn far less — thousands of dollars less — than they would if employers did not dominate labor markets.

The studies show that common features of the labor market give enormous bargaining advantages to employers. Because most people sink roots in their communities, they are reluctant to quit their job and move to a job that is far away. Because workplaces differ in terms of their location and conditions, people have trouble comparing them, which means that one cannot easily “comparison shop” for jobs. And thanks to a wave of consolidation, industries are increasingly dominated by a small number of huge companies, which means that workers have fewer choices among employers in their area.

When employers exercise monopsonistic power, wages are suppressed, jobs are left unfilled, and economic growth suffers. Unions used to offset employer monopsony power, but unions now represent only 7 percent of private sector workers, down from a peak of 35 percent in the 1950s. Combating the practices that employers use to monopsonize the labor market can lead to higher wages, more jobs and faster economic growth.
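
commentary: the wage-suppression mechanism fits in a few lines of textbook algebra (a minimal numeric sketch; the supply curve and the productivity number are arbitrary assumptions of mine, not figures from the op-ed):

    MRP = 30.0                   # each worker adds $30/hour of revenue
    supply = lambda w: 10.0 * w  # upward-sloping: higher wage draws more workers

    def profit(w):
        # to hire more, the monopsonist must raise the wage for everyone,
        # so it optimally stops short of the competitive wage w = MRP
        return (MRP - w) * supply(w)

    best_w = max((w / 100 for w in range(3001)), key=profit)
    print(best_w)  # 15.0, half the competitive wage of 30.0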
worklife  economics  models-and-modes  public-policy  power-relations  to-write-about  capitalism 
march 2018 by Vaguery
[1802.02627] Going Deeper in Spiking Neural Networks: VGG and Residual Architectures
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state of the art. Finally, we present analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain.
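
commentary: a minimal leaky integrate-and-fire neuron, the primitive SNNs are built from (this illustrates only the spiking domain, not the paper's VGG/ResNet conversion technique; all constants are assumed):

    v, v_thresh, v_reset, leak = 0.0, 1.0, 0.0, 0.95
    inputs = [0.3, 0.4, 0.1, 0.5, 0.6, 0.0, 0.2] * 3
    spikes = []

    for t, current in enumerate(inputs):
        v = leak * v + current   # integrate input with a leak
        if v >= v_thresh:        # fire on crossing the threshold...
            spikes.append(t)
            v = v_reset          # ...then reset; computation is event-driven
    print(spikes)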
via:?  to-write-about  to-read  neural-networks  representation  models-and-modes  machine-learning  simulation 
february 2018 by Vaguery
[1703.10651] Reliable Decision Support using Counterfactual Models
Making a good decision involves considering the likely outcomes under each possible action. For example, would drug A or drug B lead to a better outcome for this patient? Ideally, we answer these questions using an experiment, but this is not always possible (e.g., it may be unethical). As an alternative, we can use non-experimental data to learn models that make counterfactual predictions of what we would observe had we run an experiment. To learn such models for decision-making problems, we propose the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context, and propose the counterfactual GP (CGP), a counterfactual model of continuous-time trajectories (time series) under sequences of actions taken in continuous time. We develop our model within the potential outcomes framework of Neyman and Rubin. The counterfactual GP is trained using a joint maximum likelihood objective that adjusts for dependencies between observed actions and outcomes in the training data. We report two sets of experimental results. First, we show that the CGP's predictions are reliable; they are stable to changes in certain characteristics of the training data that are not relevant to the decision-making problem. Predictive models trained using classical supervised learning objectives, however, are not stable to such perturbations. In the second experiment, we use data from a real intensive care unit (ICU) and qualitatively demonstrate how the CGP's ability to answer "What if?" questions offers medical decision-makers a powerful new tool for planning treatment.
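
commentary: a minimal sketch of the shape of a "what if?" query with an off-the-shelf GP. This is emphatically not the paper's CGP: it does nothing about confounding between observed actions and outcomes, which is exactly what the CGP's adjusted objective addresses. The data is synthetic and scikit-learn is assumed.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    t = rng.uniform(0, 10, 80)                  # observation times
    a = rng.integers(0, 2, 80)                  # action taken (0 = drug B, 1 = drug A)
    y = np.sin(t) + 1.5 * a + rng.normal(0, 0.1, 80)

    gp = GaussianProcessRegressor().fit(np.c_[t, a], y)
    t_grid = np.linspace(0, 10, 5)
    under_A = gp.predict(np.c_[t_grid, np.ones(5)])
    under_B = gp.predict(np.c_[t_grid, np.zeros(5)])  # the counterfactual query
    print(under_A - under_B)                          # roughly 1.5 throughout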
machine-learning  models-and-modes  rather-interesting  to-write-about  consider:symbolic-regression 
january 2018 by Vaguery
[1710.03453] The Sparse Multivariate Method of Simulated Quantiles
In this paper, the method of simulated quantiles (MSQ) of Dominicy and Veredas (2013) and Dominicy et al. (2013) is extended to a general multivariate framework (MMSQ) and to provide a sparse estimator of the scale matrix (sparse-MMSQ). The MSQ, like alternative likelihood-free procedures, is based on the minimisation of the distance between appropriate statistics evaluated on the true and synthetic data simulated from the postulated model. Those statistics are functions of the quantiles, providing an effective way to deal with distributions that do not admit moments of any order, like the α-Stable or the Tukey lambda distribution. The lack of a natural ordering represents the major challenge for the extension of the method to the multivariate framework. Here, we rely on the notion of projectional quantile recently introduced by Hallin et al. (2010) and Kong and Mizera (2012). We establish consistency and asymptotic normality of the proposed estimator. The smoothly clipped absolute deviation (SCAD) ℓ1-penalty of Fan and Li (2001) is then introduced into the MMSQ objective function in order to achieve sparse estimation of the scale matrix, which is chiefly responsible for the curse-of-dimensionality problem. We extend the asymptotic theory and show that the sparse-MMSQ estimator enjoys the oracle properties under mild regularity conditions. The method is illustrated and its effectiveness tested using several synthetic datasets simulated from the Elliptical Stable distribution (ESD), for which alternative methods are known to perform poorly. The method is then applied to build a new network-based systemic risk measurement framework. The proposed methodology to build the network relies on a new systemic risk measure and on a parametric test of statistical dominance.
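
commentary: the core move shrinks to a few lines in 1-D (a minimal sketch under common random numbers; the paper's actual contribution, the multivariate extension via projectional quantiles plus SCAD sparsity, appears nowhere here):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    observed = 2.0 * rng.standard_cauchy(2000)   # true scale 2; no moments exist
    probs = [0.25, 0.50, 0.75]
    target = np.quantile(observed, probs)

    base = rng.standard_cauchy(2000)             # one synthetic draw, rescaled
    def distance(scale):
        return np.sum((np.quantile(scale * base, probs) - target) ** 2)

    est = minimize_scalar(distance, bounds=(0.1, 10.0), method="bounded").x
    print(est)                                   # close to 2, matched without moments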
statistics  reinventing-the-wheel  how-is-this-not-constrained-symbolic-regression?  algorithms  models-and-modes  to-understand  inference 
november 2017 by Vaguery
[1703.04977] What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under this framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, while also giving new state-of-the-art results on segmentation and depth regression benchmarks.
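
commentary: the "learned attenuation" regression loss the abstract mentions has (up to constants) this shape, with the network predicting a log-variance s = log σ² alongside its output f; a numpy paraphrase, not the paper's full framework:

    import numpy as np

    def attenuated_loss(y, f, s):
        # high predicted variance discounts a residual but pays a log-variance
        # penalty, so the model cannot explain everything away as noise
        return np.mean(0.5 * np.exp(-s) * (y - f) ** 2 + 0.5 * s)

    y = np.array([1.0, 2.0, 3.0])
    f = np.array([1.1, 1.9, 0.0])                             # last prediction is bad
    print(attenuated_loss(y, f, np.array([0.0, 0.0, 0.0])))   # plain weighted MSE
    print(attenuated_loss(y, f, np.array([0.0, 0.0, 2.0])))   # outlier attenuated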
computer-vision  machine-learning  models-and-modes  uncertainty  deep-learning  rather-interesting  define-your-terms  representation  nudge-targets  to-wrt 
may 2017 by Vaguery
[1312.7604] Probabilistic Archetypal Analysis
Archetypal analysis represents a set of observations as convex combinations of pure patterns, or archetypes. The original geometric formulation of finding archetypes by approximating the convex hull of the observations assumes them to be real-valued. This, unfortunately, is not compatible with many practical situations. In this paper we revisit archetypal analysis from basic principles, and propose a probabilistic framework that accommodates other observation types such as integers, binary values, and probability vectors. We corroborate the proposed methodology with convincing real-world applications on finding archetypal winter tourists based on binary survey data, archetypal disaster-affected countries based on disaster count data, and document archetypes based on term-frequency data. We also present an appropriate visualization tool to better summarize archetypal analysis solutions.
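
commentary: the core representation in two lines (a minimal sketch with the archetypes assumed known; the actual analysis must find them, and the paper's point is doing so probabilistically for non-real-valued data):

    import numpy as np

    archetypes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # pure patterns
    weights = np.array([0.2, 0.5, 0.3])        # convex: nonnegative, sums to 1
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)

    observation = weights @ archetypes         # every point sits inside the hull
    print(observation)                         # [5. 3.]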
machine-learning  dimension-reduction  rather-interesting  models-and-modes  to-understand  to-write-about 
february 2017 by Vaguery
The Archdruid Report: Perched on the Wheel of Time
In the final chapters of his second volume, for example, Spengler noted that civilizations in the stage ours was about to reach always end up racked by conflicts that pit established hierarchies against upstart demagogues who rally the disaffected and transform them into a power base. Looking at the trends visible in his own time, he sketched out the most likely form those conflicts would take in the Winter phase of our civilization. Modern representative democracy, he pointed out, has no effective defenses against corruption by wealth, and so could be expected to evolve into corporate-bureaucratic plutocracies that benefit the affluent at the expense of everyone else. Those left out in the cold by these transformations, in turn, end up backing what Spengler called Caesarism—the rise of charismatic demagogues who challenge and eventually overturn the corporate-bureaucratic order.

These demagogues needn’t come from within the excluded classes, by the way. Julius Caesar, the obvious example, came from an old upper-class Roman family and parlayed his family connections into a successful political career. Watchers of the current political scene may be interested to know that Caesar during his lifetime wasn’t the imposing figure he became in retrospect; he had a high, shrill voice, his morals were remarkably flexible even by Roman standards—the scurrilous gossip of his time called him “every man’s wife and every woman’s husband”—and he spent much of his career piling up huge debts and then wriggling out from under them. Yet he became the political standard-bearer for the plebeian classes, and his assassination by a conspiracy of rich Senators launched the era of civil wars that ended the rule of the old elite once and for all.
history  political-economy  philosophy  models-and-modes  argumentation 
february 2017 by Vaguery
[1612.02483] High Dimensional Consistent Digital Segments
We consider the problem of digitalizing Euclidean line segments from ℝ^d to ℤ^d. Christ et al. (DCG, 2012) showed how to construct a set of consistent digital segments (CDS) for d=2: a collection of segments connecting any two points in ℤ^2 that satisfies the natural extension of the Euclidean axioms to ℤ^d. In this paper we study the construction of CDSs in higher dimensions.
We show that any total order can be used to create a set of consistent digital rays (CDR) in ℤ^d (a set of rays emanating from a fixed point p that satisfies the extension of the Euclidean axioms). We fully characterize for which total orders the construction holds and study their Hausdorff distance, which in particular positively answers the question posed by Christ et al.
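
commentary: a minimal sketch of the naive digitization (round the real segment), which in general fails the consistency axioms the paper studies: subsegments of a rounded segment need not themselves be rounded segments.

    def naive_digital_segment(p, q):
        (x0, y0), (x1, y1) = p, q
        n = max(abs(x1 - x0), abs(y1 - y0))
        return [(round(x0 + (x1 - x0) * t / n), round(y0 + (y1 - y0) * t / n))
                for t in range(n + 1)]

    print(naive_digital_segment((0, 0), (5, 2)))
    # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]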
approximation  computational-geometry  performance-measure  rather-interesting  mathematics  consistency  models-and-modes  constructive-geometry  nudge-targets  consider:representation  consider:looking-to-see 
january 2017 by Vaguery
[1508.05837] Hydroassets Portfolio Management for Intraday Electricity Trading in a Discrete Time Stochastic Optimization Perspective
Hydro storage system optimization is becoming one of the most challenging tasks in Energy Finance. Following the Blomvall and Lindberg (2002) interior point model, we set up a stochastic multiperiod optimization procedure by means of a "bushy" recombining tree that provides fast computational results. Inequality constraints are packed into the objective function by the logarithmic barrier approach, and the utility function is approximated by its second-order Taylor polynomial. The optimal solution for the original problem is obtained as a diagonal sequence, where the first diagonal dimension is the parameter controlling the logarithmic penalty and the second is the parameter for the Newton step in the construction of the approximated solution. Optimal intraday electricity trading and water values for hydroassets as shadow prices are computed. The algorithm is implemented in Mathematica.
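
commentary: the logarithmic-barrier move in miniature (a sketch of the general technique the abstract names, not the paper's multiperiod tree model): fold an inequality constraint x ≤ 1 into the objective and let the penalty parameter shrink.

    import numpy as np
    from scipy.optimize import minimize_scalar

    objective = lambda x: (x - 2.0) ** 2          # unconstrained optimum at x = 2
    for mu in [1.0, 0.1, 0.01, 0.001]:
        barrier = lambda x, mu=mu: objective(x) - mu * np.log(1.0 - x)
        x_star = minimize_scalar(barrier, bounds=(-5.0, 1.0 - 1e-9),
                                 method="bounded").x
        print(mu, round(x_star, 4))               # marches toward x = 1 as mu -> 0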
portfolio-theory  operations-research  financial-engineering  time-series  prediction  models-and-modes  nudge-targets  consider:performance-measures  consider:metaheuristics 
january 2017 by Vaguery
[1604.04647] Sheaf and duality methods for analyzing multi-model systems
There is an interplay between models, specified by variables and equations, and their connections to one another. This dichotomy should be reflected in the abstract setting as well. Without referring to the models directly -- only that a model consists of spaces and maps between them -- the most readily apparent feature of a multi-model system is its topology. We propose that this topology should be modeled first, and then the spaces and maps of the individual models be specified in accordance with the topology. Axiomatically, this construction leads to sheaves. Sheaf theory provides a toolbox for constructing predictive models described by systems of equations. Sheaves are mathematical objects that manage the combination of bits of local information into a consistent whole. The power of this approach is that complex models can be assembled from smaller, easier-to-construct models. The models discussed in this chapter span the study of continuous dynamical systems, partial differential equations, probabilistic graphical models, and discrete approximations of these models.
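
commentary: the sheaf-theoretic glue in its smallest form (a toy of mine, with dictionaries standing in for local sections): local assignments combine into a global one exactly when they agree on overlaps.

    def glue(sections):
        glued = {}
        for local in sections:
            for variable, value in local.items():
                if variable in glued and glued[variable] != value:
                    return None        # disagreement on an overlap: no gluing
                glued[variable] = value
        return glued

    model_A = {"x": 1.0, "y": 2.0}     # two sub-models sharing the variable y
    model_B = {"y": 2.0, "z": 3.0}
    print(glue([model_A, model_B]))    # {'x': 1.0, 'y': 2.0, 'z': 3.0}
    print(glue([model_A, {"y": 9.9}])) # None: inconsistent on the overlap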
category-theory  to-understand  models-and-modes  rather-interesting  no-really-I-think-I-need-to-understand-this-thread 
january 2017 by Vaguery
Genotypic complexity of Fisher's geometric model | bioRxiv
Fisher's geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign-epistatically, which is found to decrease algebraically with increasing phenotypic dimension n, and varies non-monotonically with the distance from the phenotypic optimum. We then derive asymptotic expressions for the mean number of fitness maxima in genotypic landscapes composed of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the genotypic landscape. The dependence of the complexity on the parameters of the model is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. The complexity generally decreases with increasing phenotypic dimension, but a non-monotonic dependence on n is found in certain regimes. Our results inform the interpretation of experiments where the parameters of Fisher's model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can (or cannot) be described by this model.
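
commentary: the looking-to-see version of the paper's setup (a minimal Monte Carlo sketch; the fitness function, starting phenotype, and mutation scale are assumptions of mine): draw two random mutations in n phenotypic dimensions and check whether one's fitness effect flips sign in the other's presence.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 3
    z0 = np.array([2.0, 0.0, 0.0])             # phenotype away from the optimum
    fitness = lambda z: -np.dot(z, z)           # optimum at the origin

    def sign_epistatic():
        a, b = rng.normal(0.0, 1.0, (2, n))
        effect_alone = fitness(z0 + a) - fitness(z0)
        effect_with_b = fitness(z0 + a + b) - fitness(z0 + b)
        return effect_alone * effect_with_b < 0

    # empirical frequency of sign epistasis between random mutation pairs:
    print(np.mean([sign_epistatic() for _ in range(20000)]))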
population-biology  theoretical-biology  theory-and-practice-sitting-in-a-tree  fitness-landscapes  models-and-modes  to-write-about  nudge-targets  consider:rediscovery  consider:robustness  consider:multiobjective-versions 
january 2017 by Vaguery
[1612.02540] City traffic forecasting using taxi GPS data: A coarse-grained cellular automata model
City traffic is a dynamic system of enormous complexity. Modeling and predicting city traffic flow remains a challenging task, and the main difficulties are how to specify supply and demand and how to parameterize the model. In this paper we attempt to solve these problems with the help of a large amount of floating-car data. We propose a coarse-grained cellular automata model that simulates vehicles moving on uniform grids whose cells are much larger than those of microscopic cellular automata models. The car-car interaction of the microscopic model is replaced by coupling between vehicles and coarse-grained state variables in our model. To parameterize the model, flux-occupancy relations are fitted from the historical data at every grid cell; these serve as coarse-grained fundamental diagrams coupling occupancy and speed. To evaluate the model, we feed it the historical travel demands and trajectories obtained from the floating-car data and use the model to predict road speed one hour into the future. Numerical results show that our model can capture the traffic flow pattern of the entire city and make reasonable predictions. The current work can be considered a prototype for a model-based forecasting system for city traffic.
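
commentary: a coarse-grained cell update in miniature (my toy on a ring road, not the paper's calibrated model; an assumed Greenshields speed-occupancy relation stands in for their fitted fundamental diagrams):

    import numpy as np

    rho = np.array([0.1, 0.1, 0.8, 0.1, 0.1, 0.1])  # occupancy per grid cell
    v_max, rho_max, dt = 1.0, 1.0, 0.5

    for step in range(20):
        v = v_max * (1.0 - rho / rho_max)     # fundamental diagram: speed(occupancy)
        flux = np.minimum(rho * v * dt, rho)  # cannot move more than is present
        rho = rho - flux + np.roll(flux, 1)   # advect occupancy one cell downstream
    print(np.round(rho, 3))                   # the initial jam spreads and decays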
agent-based  cellular-automata  rather-interesting  traffic  models-and-modes  to-write-about  representation 
december 2016 by Vaguery
On Computational Explanations / Synthese (in press), DOI 10.1007/s11229-016-1101-5
Computational explanations focus on information processing required in specific cognitive capacities, such as perception, reasoning or decision-making. These explanations specify the nature of the information-processing task, what information needs to be represented, and why it should be operated on in a particular manner. In this article, the focus is on three questions concerning the nature of computational explanations: (1) what type of explanations they are, (2) in what sense computational explanations are explanatory, and (3) to what extent they involve a special, “independent” or “autonomous” level of explanation. In this paper, we defend the view that computational explanations are genuine explanations, which track non-causal/formal dependencies. Specifically, we argue that they do not provide mere sketches for explanation, in contrast to what, for example, Piccinini and Craver (Synthese 183(3):283–311, 2011) suggest. This view of computational explanations implies some degree of “autonomy” for the computational level. However, as we will demonstrate, that does not make this view “computationally chauvinistic” in the way that Piccinini (Synthese 153:343–353, 2006b) or Kaplan (Synthese 183(3):339–373, 2011) have charged it to be.
via:cshalizi  philosophy-of-science  explanation  models-and-modes  to-read  to-write-about 
december 2016 by Vaguery
[1608.05226] A tale of a Principal and many many Agents
In this paper, we investigate a moral hazard problem in finite time with lump-sum and continuous payments, involving infinitely many Agents with mean-field-type interactions, hired by one Principal. By reinterpreting the mean-field game faced by each Agent in terms of a mean-field FBSDE, we are able to rewrite the Principal's problem as a control problem for McKean-Vlasov SDEs. We review two general approaches to tackle it: the first one, introduced recently in [2, 66, 67, 68, 69], using dynamic programming and Hamilton-Jacobi-Bellman equations; the second, based on the stochastic Pontryagin maximum principle, which follows [16]. We solve the problem completely and explicitly in special cases, going beyond the usual linear-quadratic framework. We finally show in our examples that the optimal contract in the N-player model converges to the mean-field optimal contract when the number of agents goes to +∞.
probability-theory  options  optimization  models-and-modes  rather-interesting  agent-based  nudge-targets  consider:looking-to-see  to-write-about 
august 2016 by Vaguery
[1607.06274] Topological Data Analysis with Bregman Divergences
Given a finite set in a metric space, the topological analysis generalizes hierarchical clustering using a 1-parameter family of homology groups to quantify connectivity in all dimensions. The connectivity is compactly described by the persistence diagram. One limitation of the current framework is the reliance on metric distances, whereas in many practical applications objects are compared by non-metric dissimilarity measures. Examples are the Kullback-Leibler divergence, which is commonly used for comparing text and images, and the Itakura-Saito divergence, popular for speech and sound. These are two members of the broad family of dissimilarities called Bregman divergences.
We show that the framework of topological data analysis can be extended to general Bregman divergences, widening the scope of possible applications. In particular, we prove that appropriately generalized Čech and Delaunay (alpha) complexes capture the correct homotopy type, namely that of the corresponding union of Bregman balls. Consequently, their filtrations give the correct persistence diagram, namely the one generated by the uniformly growing Bregman balls. Moreover, we show that unlike the metric setting, the filtration of Vietoris-Rips complexes may fail to approximate the persistence diagram. We propose algorithms to compute the thus generalized Čech, Vietoris-Rips and Delaunay complexes and experimentally test their efficiency. Lastly, we explain their surprisingly good performance by making a connection with discrete Morse theory.
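
commentary: the family's definition in one line, D_F(x, y) = F(x) − F(y) − ⟨∇F(y), x − y⟩, checked against KL (a minimal sketch; taking F to be negative Shannon entropy recovers KL divergence on probability vectors):

    import numpy as np

    def bregman(F, gradF, x, y):
        return F(x) - F(y) - np.dot(gradF(y), x - y)

    F = lambda p: np.sum(p * np.log(p))       # negative entropy
    gradF = lambda p: np.log(p) + 1.0

    p = np.array([0.2, 0.3, 0.5])
    q = np.array([0.4, 0.4, 0.2])
    print(bregman(F, gradF, p, q))            # about 0.2332
    print(np.sum(p * np.log(p / q)))          # identical: KL(p || q)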
data-analysis  topology  metrics  to-understand  algorithms  representation  statistics  probability-theory  models-and-modes 
august 2016 by Vaguery

