Tail Call Optimization in Ruby
Tail call optimization is an optimization where tail recursive functions are transformed into loops by the compiler. A tail recursive function is one where the final statement is a call to the same method. In this post, we will look at what tail recursive functions look like, how tail call optimization helps them, and how to enable TCO in Ruby.
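
The linked post covers Ruby specifically; as a language-agnostic sketch of what the optimization buys you, here is a tail-recursive sum alongside the loop a TCO-capable compiler effectively turns it into (Python is used purely for illustration, and the function names are made up for the example):
```python
# Illustrative only: the transformation TCO performs, shown by hand.
# `sum_to` is tail recursive -- the recursive call is the last thing the
# function does, so its stack frame carries no pending work.
def sum_to(n, acc=0):
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)      # tail call: nothing left to do afterwards

# What a tail-call-optimizing compiler effectively produces: the same logic
# as a loop that reuses a single frame instead of growing the call stack.
def sum_to_loop(n, acc=0):
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

assert sum_to(500) == sum_to_loop(500) == 125250
```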
ruby  software-development  huh  recursion  interpreter  rather-interesting 
3 days ago
Stuck on one idea of truth or beauty? Rhizomes can help | Aeon Ideas
The wrong lesson of the Sokal controversy, however, is that people in the humanities or social sciences must always use familiar language. In Deleuze and Guattari’s work, technical terms and neologisms almost always have precise etymologies and convey clear images. At its best, their philosophical language helps us to perceive the elusive factors of reality that affect what we can more easily see and measure.

As a general rule, scholars should speak as clearly and as simply as they can, without compromising the integrity of their ideas and enquiries. Few people complain when scientists or mathematicians use a technical vocabulary, and that courtesy should be extended to scholars in the humanities and the social sciences. Many common-sense terms today – such as paradigm shifts and electoral realignments – started off as jargon, and we could anticipate and welcome other scholarly terms making that transition. Rhizomes.
two-or-more-cultures  academic-culture  jargon  language  the-right-tools-for-the-job  to-write-about 
4 days ago
The Teaching Class – Guernica
When Andrew Scott, a composition instructor in Indianapolis, explained adjuncting to some of his students, he wound up being called into his supervisor’s office for a scolding. A group of his students at the private university where he was adjuncting (he also had a full-time position at Ball State) had arrived early for class, and were talking in the hallway. When one student mentioned a history teacher who seemed eager to get the students to like her, and whose class didn’t have a lot of work, Scott explained how her work situation was involved: “I knew the instructor was an adjunct, and that she taught at several places to cobble together a living. I told the students that she was an adjunct, and that the class was easy because she was afraid of losing her job.” Adjuncts are often evaluated solely based on student evaluations. As Rebecca Schuman put it in her Slate article “Confessions of a Grade Inflator,” “popularity is the only thing keeping them employed.”

Scott had this conversation with his students outside of class, because the students had brought it up, and because he considered it “a teachable moment.” But it still got him into trouble, probably because of this comparison: “I said that the university pays the janitor who scrapes the gum off their desks more per year than me and most of the people who teach their first-year classes. My private university students couldn’t believe that, but it was true. Even a low estimate shows how that’s true. Ten bucks per hour for forty hours a week equals an annual salary of $20,800.” One year Scott taught seven courses at that college, and made under $15,000 for that work.
contingent-labor  academic-culture  worklife  class  corporatism  to-write-about 
4 days ago
Descartes was wrong: ‘a person is a person through other persons’ | Aeon Ideas
The emerging fields of embodied and enactive cognition have started to take dialogic models of the self more seriously. But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’
cognition  philosophy  Cartesianism  context  social-construction-of-self  to-write-about 
4 days ago
[1706.04119] From MEGATON to RASCAL: Surfing the Parameter Space of Evolutionary Algorithms
The practice of evolutionary algorithms involves a mundane yet inescapable phase, namely, finding parameters that work well. How big should the population be? How many generations should the algorithm run? What is the (tournament selection) tournament size? What probabilities should one assign to crossover and mutation? All these nagging questions need good answers if one is to embrace success. Through an extensive series of experiments over multiple evolutionary algorithm implementations and problems we show that parameter space tends to be rife with viable parameters. We aver that this renders the life of the practitioner that much easier, and cap off our study with an advisory digest for the weary.
hey-I-know-this-guy  machine-learning  meta-optimization  evolutionary-algorithms  metaheuristics  hyperheuristics  to-write-about 
5 days ago
How to Build an Impossible Polyhedron
Using stiff paper and transparent tape, Craig Kaplan assembles a beautiful roundish shape that looks like a Buckminster Fuller creation or a fancy new kind of soccer ball. It consists of four regular dodecagons (12-sided polygons with all angles and sides the same) and 12 decagons (10-sided), with 28 little gaps in the shape of equilateral triangles. There’s just one problem. This figure should be impossible. That set of polygons won’t meet at the vertices. The shape can’t close up.
via:my  markup  mathematics  approximation  rather-interesting  number-theory  open-questions  to-write-about  theory-and-practice-sitting-in-a-tree  philosophy-of-science 
8 days ago
The Pitfalls of "Objectivity" | Just Visiting
In this context, “objectivity” is not a value, but a pose, and one that’s usually sussed out by students as phony. They easily recognize it as a confidence game because it’s a game they’d previously been trying to practice, and during that practice they knew it was a pose.
When we encounter claims of “objectivity,” I think of one of two things: either the person claiming objectivity is kidding themselves, or they’re trying to put one over on me.
If we want students well-armed for the world, they need not objectivity, but a "critical sensibility" that has been tested and remains adaptable to new situations and demands. Students shouldn't feel like they have to have all the answers. Rather, they should have the skills, knowledge, and experiences that allow them to tackle new and different questions.
humanities  objectivity  academic-culture  pedagogy  to-write-about  criticism 
8 days ago
[1704.00347] Self-organization of charged particles in circular geometry
The basic principles of self-organization of one-component charged particles, confined in disk and circular parabolic potentials, are proposed. A system of equations is derived, that allows us to determine equilibrium configurations for an arbitrary, but finite, number of charged particles that are distributed over several rings. Our approach reduces significantly the computational effort in minimizing the energy of equilibrium configurations and demonstrates a remarkable agreement with the values provided by molecular dynamics calculations. With the increase of particle number n>180 we find a steady formation of a centered hexagonal lattice that smoothly transforms to valence circular rings in the ground state configurations for both potentials.
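
Not the authors' method, but the setup is simple enough to poke at numerically: identical charges in a circular parabolic trap, total energy = pairwise 1/r repulsion plus harmonic confinement, handed to an off-the-shelf local optimizer (dimensionless units and the 1/2 trap prefactor are my assumptions):
```python
# Minimal sketch (not the paper's approach): relax N identical charges in a
# circular parabolic trap by minimizing Coulomb repulsion + confinement energy.
import numpy as np
from scipy.optimize import minimize

def energy(flat_xy, n):
    pts = flat_xy.reshape(n, 2)
    trap = 0.5 * np.sum(pts ** 2)                  # parabolic confinement
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)
    return trap + np.sum(1.0 / dist[iu])           # + pairwise 1/r repulsion

def relax(n, seed=0):
    rng = np.random.default_rng(seed)
    res = minimize(energy, rng.normal(size=2 * n), args=(n,), method="L-BFGS-B")
    return res.x.reshape(n, 2), res.fun

pts, e = relax(12)
print(f"E = {e:.4f}, radii: {np.round(np.sort(np.linalg.norm(pts, axis=1)), 3)}")
```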
self-organization  physics  experiment  simulation  simplicity  rather-interesting  to-write-about  pattern-formation  stamp-collecting 
8 days ago
[1705.08429] The chemical bond as an emergent phenomenon
We first argue that the covalent bond and the various closed-shell interactions can be thought of as symmetry broken versions of one and the same interaction, viz., the multi-center bond. We use specially chosen molecular units to show that the symmetry breaking is controlled by density and electronegativity variation. We show that the bond order changes with bond deformation but in a step-like fashion, regions of near constancy separated by electronic localization transitions. These will often cause displacive transitions as well so that the bond strength, order, and length are established self-consistently. We further argue on the inherent relation of the covalent, closed-shell, and multi-center interactions with ionic and metallic bonding. All of these interactions can be viewed as distinct sectors on a phase diagram with density and electronegativity variation as control variables; the ionic and covalent/secondary sectors are associated with on-site and bond-order charge density wave respectively, the metallic sector with an electronic fluid. While displaying a contiguity at low densities, the metallic and ionic interactions represent distinct phases separated by discontinuous transitions at sufficiently high densities. Multi-center interactions emerge as a hybrid of the metallic and ionic bond that results from spatial coexistence of delocalized and localized electrons. In the present description, the issue of the stability of a compound is that of mutual miscibility of electronic fluids with distinct degrees of electron localization; supra-atomic ordering in complex inorganic compounds comes about naturally. The notions of electronic localization advanced hereby suggest a high throughput, automated procedure for screening candidate compounds and structures with regard to stability, without the need for computationally costly geometric optimization.
chemistry  rather-interesting  theoretical-chemistry  emergence  experiment  to-write-about  philosophy-of-science  ontology 
8 days ago
Earning My Turns: A (computational) linguistic farce in three acts
The empiricist invaders were in their way heirs to Shannon, Turing, Kullback, I.J. Good who had been plying an effective if secretive trade at IDA and later at IBM and Bell Labs looking at speech recognition and translation as cryptanalysis problems (The history of the road from Bletchley Park to HMMs to IBM Model 2 is still buried in the murk of not fully declassified materials, but it would be awesome to write — I just found this about the early steps that could be a lot of fun). They convinced funders, especially at DARPA, that the rationalist empire was hollow and that statistical metrics on (supposedly) realistic tasks were needed to drive computational language work to practical success, as had been happening in speech recognition (although by the light of today, that speech recognition progress was less impressive than it seemed then). It did not hurt the campaign that many of the invaders were closer to the DoD in their backgrounds, careers, and outlooks than egghead computational linguists (another story that could be expanded, but might make some uncomfortable). Anyway, I was there in meetings where the empiricist invaders allied with funders increasingly laid down the new rules of the game. Like in the Norman invasion of England, a whole new vocabulary took over quickly with the new aristocracy.
natural-language-processing  history-of-science  system-of-professions  essay  have-read  the-objective-truth-oh-right  theory-and-practice-sitting-in-a-tree 
8 days ago
[1703.00565] Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ
Scattertext is an open source tool for visualizing linguistic variation between document categories in a language-independent way. The tool presents a scatterplot, where each axis corresponds to the rank-frequency a term occurs in a category of documents. Through a tie-breaking strategy, the tool is able to display thousands of visible term-representing points and find space to legibly label hundreds of them. Scattertext also lends itself to a query-based visualization of how the use of terms with similar embeddings differs between document categories, as well as a visualization for comparing the importance scores of bag-of-words features to univariate metrics.
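
The scatterplot coordinates are easy to reproduce for a toy corpus without the library: rank each term by frequency within each category and pair the two ranks. A rough sketch of just that computation (not Scattertext's actual API):
```python
# Rough sketch of the plot coordinates described above: each term gets
# (rank-frequency in category A, rank-frequency in category B).
from collections import Counter

docs = {
    "A": ["the cat sat on the mat", "the cat purred"],
    "B": ["the dog sat on the log", "the dog barked"],
}

def rank_frequencies(texts):
    counts = Counter(w for t in texts for w in t.split())
    ranked = sorted(counts, key=counts.get, reverse=True)
    return {w: r for r, w in enumerate(ranked, start=1)}

ranks_a, ranks_b = rank_frequencies(docs["A"]), rank_frequencies(docs["B"])
vocab = sorted(set(ranks_a) | set(ranks_b))
worst = len(vocab) + 1                          # unseen terms pushed to the end
for w in vocab:
    print(f"{w:>7}: ({ranks_a.get(w, worst)}, {ranks_b.get(w, worst)})")
```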
natural-language-processing  text-mining  feature-extraction  rather-interesting  programming  library  to-write-about 
8 days ago
ISTE | Learning happens in a zigzag – and that’s OK
Abumrad has said that “Radiolab” was a product of his tinkering with an idea for a show that would include dialogue, sound effects, music and interviews. Educators might call it inductive learning, and the idea of tinkering certainly connects with today’s makerspaces.

But what Abumrad knows for sure is that tinkering can help learners externalize their thinking “by fumbling around to find the other piece of your idea.”

He recalls tinkering around to solve the problem of defining a radio show without using a theme song, but instead by using sound. He spent hours creating 25 versions of layered voices and glitchy edits that would play at the top of “Radiolab.” It became a signature of the show.

“I feel a lot of my own development has been like that. The material somehow teaches you something and you keep tinkering until it feels good,” Abumrad explained.
play  ludics  pedagogy  to-write-about 
9 days ago
Winning the Battle for Riddler Nation; An Agent-Based Modelling Approach to the Solution | Curtis Miller's Personal Website
Oliver Roeder runs a column on FiveThirtyEight called “The Riddler,” where he proposes logical and mathematical puzzles for readers to solve. On February 3rd of this year, he posted in Riddler Classic the problem, “Can You Rule Riddler Nation?” Here is the description:
mathematical-recreations  puzzles  learning-by-doing  to-write-about 
9 days ago
On leaving precarious adjuncting – autosociologist
I’m not sure what my role is, going forward. I don’t think there’s a happy ending, though my fingers are crossed that there’s a new chapter ahead.  I mean, first things first I just need to find a way not to become homeless.  I will scramble for a job first and foremost.  I just pray that I get to work somewhere where I get to impact the common good – someplace where I will never EVER forget the humans that hold me up, and I them.  But that’s the thing: as a sociologist I know that not ONE of those humans is really to be blamed. The Dean, the Chair, me….  We’re all just players in the bureaucracy of academia, and society more broadly.  So I can’t escape this dehumanization, and I’ll either work or be fired, and the Dean and the Chair, they chose to work, and me, one day I’ll choose to work, and someone will be fired at my expense…  We’re powerless to the neoliberal institution and our own survival/greed. You are too, so let’s all look away.
academic-culture  Precariat  corporatism 
14 days ago
Does aligning phenotypic and genotypic modularity improve the evolution of neural networks? | Evolving AI Lab
Many argue that to evolve artificial intelligence that rivals that of natural animals, we need to evolve neural networks that are structurally organized in that they exhibit modularity, regularity, and hierarchy. It was recently shown that a cost for network connections, which encourages the evolution of modularity, can be combined with an indirect encoding, which encourages the evolution of regularity, to evolve networks that are both modular and regular. However, the bias towards regularity from indirect encodings may prevent evolution from independently optimizing different modules to perform different functions, unless modularity in the phenotype is aligned with modularity in the genotype. We test this hypothesis on two multi-modal problems—a pattern recognition task and a robotics task—that each require different phenotypic modules. In general, we find that performance is improved only when genotypic and phenotypic modularity are encouraged simultaneously, though the role of alignment remains unclear. In addition, intuitive manual decompositions fail to provide the performance benefits of automatic methods on the more challenging robotics problem, emphasizing the importance of automatic, rather than manual, decomposition methods. These results suggest encouraging modularity in both the genotype and phenotype as an important step towards solving large-scale multi-modal problems, but also indicate that more research is required before we can evolve structurally organized networks to solve tasks that require multiple, different neural modules.
hey-I-know-this-guy  evolutionary-algorithms  performance-measure  metrics  to-write-about  to-do 
14 days ago
[1602.02830] Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.
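
A minimal numpy sketch of the core trick (keep real-valued shadow weights, binarize with sign() for the forward pass, apply gradients to the real-valued copy, straight-through style). This is a simplification for illustration, not the paper's Torch7/Theano implementation:
```python
# Core idea only (not the paper's code): real-valued "shadow" weights are
# binarized to +/-1 for the forward pass; gradients update the real-valued copy.
import numpy as np

rng = np.random.default_rng(0)
W_real = rng.normal(scale=0.1, size=(4, 3))    # real-valued weights kept for training
x = rng.normal(size=(1, 4))

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)         # deterministic sign binarization

W_bin = binarize(W_real)                       # what runs at inference time
y = x @ W_bin                                  # multiplications reduce to sign flips

# Backward sketch: treat binarize() as identity ("straight-through"), so the
# gradient w.r.t. W_bin is applied to W_real.
grad_y = np.ones_like(y)                       # stand-in upstream gradient
W_real -= 0.1 * (x.T @ grad_y)                 # SGD step on the real-valued weights
print(binarize(W_real))
```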
neural-networks  out-of-the-box  machine-learning  representation  rather-interesting  to-write-about  to-do 
14 days ago
[1705.00924] Split Packing: Algorithms for Packing Circles with Optimal Worst-Case Density
In the circle packing problem for triangular containers, one asks whether a given set of circles can be packed into a given triangle. Packing problems like this have been shown to be NP-hard. In this paper, we present a new sufficient condition for packing circles into any right or obtuse triangle using only the circles' combined area: It is possible to pack any circle instance whose combined area does not exceed the area of the triangle's incircle. This area condition is tight, in the sense that for any larger area, there are instances which cannot be packed.
A similar result for square containers has been established earlier this year, using the versatile, divide-and-conquer-based Split Packing algorithm. In this paper, we present a generalized, weighted version of this approach, allowing us to construct packings of circles into asymmetric triangles. It seems crucial to the success of these results that Split Packing does not depend on an orthogonal subdivision structure. Beside realizing all packings below the critical density bound, our algorithm can also be used as a constant-factor approximation algorithm when looking for the smallest non-acute triangle of a given side ratio in which a given set of circles can be packed.
An interactive visualization of the Split Packing approach and other related material can be found at this https URL
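
The sufficient condition in the first paragraph is easy to check numerically: the incircle of a triangle with area A and semiperimeter s has radius r = A/s, so the test is whether the circles' combined area is at most πr². A quick check of the condition only (it does not construct the packing):
```python
# Check the area condition from the abstract for a right or obtuse triangle:
# the circle instance qualifies if its combined area does not exceed the area
# of the triangle's incircle.  This only tests the condition; Split Packing
# itself is what actually builds the packing.
import math

def incircle_area(a, b, c):
    s = (a + b + c) / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return math.pi * (area / s) ** 2                     # pi * r^2, with r = area / s

def packable_by_area(circle_radii, a, b, c):
    return sum(math.pi * r * r for r in circle_radii) <= incircle_area(a, b, c)

# Right isosceles triangle with legs 1: the incircle covers about 53.9% of it.
print(incircle_area(1, 1, math.sqrt(2)) / 0.5)
print(packable_by_area([0.2, 0.1, 0.1], 1, 1, math.sqrt(2)))   # True
```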
packing  computational-complexity  computational-geometry  experiment  rather-interesting  to-write-about 
14 days ago
[1701.00994] Dynamics on expanding spaces: modeling the emergence of novelties
Novelties are part of our daily lives. We constantly adopt new technologies, conceive new ideas, meet new people, experiment with new situations. Occasionally, we as individuals, in a complicated cognitive and sometimes fortuitous process, come up with something that is not only new to us, but to our entire society so that what is a personal novelty can turn into an innovation at a global level. Innovations occur throughout social, biological and technological systems and, though we perceive them as a very natural ingredient of our human experience, little is known about the processes determining their emergence. Still the statistical occurrence of innovations shows striking regularities that represent a starting point to get a deeper insight in the whole phenomenology. This paper represents a small step in that direction, focusing on reviewing the scientific attempts to effectively model the emergence of the new and its regularities, with an emphasis on more recent contributions: from the plain Simon's model tracing back to the 1950s, to the newest model of Polya's urn with triggering of one novelty by another. What seems to be key in the successful modelling schemes proposed so far is the idea of looking at evolution as a path in a complex space, physical, conceptual, biological, technological, whose structure and topology get continuously reshaped and expanded by the occurrence of the new. Mathematically it is very interesting to look at the consequences of the interplay between the "actual" and the "possible" and this is the aim of this short review.
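
A heavily hedged sketch of the kind of "urn with triggering" dynamics the review discusses, in one common formulation: drawing a colour reinforces it with rho extra copies, and drawing a colour for the first time injects nu+1 never-before-seen colours (the adjacent possible expanding). The parameter names and exact update rule here are my assumptions; see the review for the precise models:
```python
# Hedged sketch of an urn-with-triggering process (one common formulation;
# the update rule and parameters here are illustrative, not the review's
# exact models).  Drawing a colour reinforces it; drawing it for the first
# time triggers nu+1 brand-new colours entering the urn.
import random

def urn_with_triggering(steps=20000, rho=4, nu=3, seed=1):
    random.seed(seed)
    urn = list(range(nu + 1))              # initial colours
    next_colour = nu + 1
    seen = set()
    for _ in range(steps):
        ball = random.choice(urn)
        urn.extend([ball] * rho)           # reinforcement
        if ball not in seen:               # a novelty expands the adjacent possible
            seen.add(ball)
            urn.extend(range(next_colour, next_colour + nu + 1))
            next_colour += nu + 1
    return len(seen)

# Heaps'-law-like behaviour: distinct colours grow sublinearly with draws.
print(urn_with_triggering())
```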
innovation  Kauffmania  adjacent-possible  exploration  agent-based  power-laws  rather-interesting  to-write-about  to-do 
14 days ago
Dynamics Robustness of Cascading Systems | bioRxiv
A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored despite transient dynamics being crucial for cellular fates and are reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus is robust to perturbations against certain parameters. Then analyzing the linearized model, we elucidated the criteria of how such dynamics robustness emerges in signaling networks. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: The phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: The kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream information is attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a general basis for how biological systems function dynamically.
robustness  systems-biology  self-organization  complexology  nonlinear-dynamics  emergent-design  to-write-about  nudge-targets  consider:feature-discovery 
14 days ago
[1512.05259] Dynamical criticality: overview and open questions
Systems that exhibit complex behaviours are often found in a particular dynamical condition, poised between order and disorder. This observation is at the core of the so-called criticality hypothesis, which states that systems in a dynamical regime between order and disorder attain the highest level of computational capabilities and achieve an optimal trade-off between robustness and flexibility. Recent results in cellular and evolutionary biology, neuroscience and computer science have revitalised the interest in the criticality hypothesis, emphasising its role as a viable candidate general law in adaptive complex systems. In this paper we provide an overview of the works on dynamical criticality that are -to the best of our knowledge- particularly relevant for the criticality hypothesis. We review the main contributions concerning dynamics and information processing at the edge of chaos, and we illustrate the main achievements in the study of critical dynamics in biological systems. Finally, we discuss open questions and propose an agenda for future work.
edge-of-chaos  complexology  boolean-networks  Kauffmania  self-organization  to-write-about  to-do 
14 days ago
[1503.04069] LSTM: A Search Space Odyssey
Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search and their importance was assessed using the powerful fANOVA framework. In total, we summarize the results of 5400 experimental runs (about 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.
oh-Schmidhuber  machine-learning  neural-networks  representation 
14 days ago
Ending the Curse of Remedial Math - The New York Times
The CUNY Start model is different. Full-time students are exclusively in Start classes for 25 hours a week — substantially more than the usual course load — for one semester. The focus is on thinking, not memorization. “Math isn’t just memorization,” Ms. Fells told me. “I teach them how to investigate problems — how to think. The first sentence on the first day is a question. We start by making a connection to real life and slowly build a foundation of knowledge for more abstract algebraic problems. I never say you are right or wrong. The answers come from them.”
learning-by-doing  pedagogy  public-policy  to-write-about 
14 days ago
[1706.00531] PixelGAN Autoencoders
In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code. We show that different priors result in different decompositions of information between the latent code and the autoregressive decoder. For example, by imposing a Gaussian distribution as the prior, we can achieve a global vs. local decomposition, or by imposing a categorical distribution as the prior, we can disentangle the style and content information of images in an unsupervised fashion. We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets.
deep-learning  neural-networks  autoencoder  machine-learning  self-organization  unsupervised-learning  to-write-about  to-understand 
14 days ago
It's time to reboot the relationship between expertise and democracy | Aeon Essays
Young Rebecca, however, did what a sensible person would: she started looking through databases of old newspapers. She found the signs, as the Daily Beast later reported, ‘collecting a handful of examples, then dozens, then more. She went to as many newspaper databases as she could. Then she thought, somebody had to have done this before, right?’ As it turned out, neither Jensen nor anyone else had apparently bothered to do this basic fact-checking. Miss Fried has now entered high school with a published piece in the Journal of Social History, and she is not alone in overturning the status quo.
expertise  academic-culture  another-way  collaboration  social-networks  collective-intelligence  to-write-about 
14 days ago
Magic graph - Wikipedia
A magic graph is a graph whose edges are labelled by positive integers, so that the sum over the edges incident with any vertex is the same, independent of the choice of vertex; or it is a graph that has such a labelling. If the integers are the first q positive integers, where q is the number of edges, the graph and the labelling are called supermagic.
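
The definition translates directly into a checker: sum the incident edge labels at each vertex and test that all vertex sums agree (and, for supermagic, that the labels are exactly 1..q). A quick sketch, with K3,3 labelled by the Lo Shu magic square as a supermagic example:
```python
# Direct check of the definition: a labelling is magic if every vertex sees
# the same sum of incident edge labels; supermagic if the labels are 1..q.
from collections import defaultdict

def vertex_sums(labelled_edges):
    sums = defaultdict(int)
    for (u, v), label in labelled_edges.items():
        sums[u] += label
        sums[v] += label
    return sums

def is_magic(labelled_edges):
    return len(set(vertex_sums(labelled_edges).values())) == 1

def is_supermagic(labelled_edges):
    q = len(labelled_edges)
    return is_magic(labelled_edges) and sorted(labelled_edges.values()) == list(range(1, q + 1))

triangle = {("A", "B"): 5, ("B", "C"): 5, ("A", "C"): 5}
print(is_magic(triangle), is_supermagic(triangle))          # True False

# K3,3 labelled by a 3x3 magic square (one side = rows, the other = columns):
lo_shu = [[2, 7, 6], [9, 5, 1], [4, 3, 8]]
k33 = {(f"r{i}", f"c{j}"): lo_shu[i][j] for i in range(3) for j in range(3)}
print(is_magic(k33), is_supermagic(k33))                    # True True
```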
magic-squares  to-write-about  mathematical-recreations 
14 days ago
[math/0608131] Many sets have more sums than differences
Since addition is commutative but subtraction is not, the sumset S+S of a finite set S is predisposed to be smaller than the difference set S-S. In this paper, however, we show that each of the three possibilities (|S+S|>|S-S|, |S+S|=|S-S|, |S+S|<|S-S|) occur for a positive proportion of the subsets of {0, 1, ..., n-1}. We also show that the difference |S+S| - |S-S| can take any integer value, and we show that the expected number of omitted differences is asymptotically 6 while the expected number of missing sums is asymptotically 10. Other data and conjectures on the distribution of these quantities are also given.
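
The three cases are easy to tally by brute force over subsets of {0, ..., n−1}; a small looking-to-see sketch (exhaustive, so only small n, and no substitute for the paper's asymptotics):
```python
# Brute-force tally of |S+S| vs |S-S| over all non-empty subsets of
# {0, ..., n-1}.  With n = 15 this takes a few seconds and already finds a
# handful of sum-dominant sets among a large majority of difference-dominant
# and balanced ones.
from itertools import combinations

def classify(n):
    counts = {">": 0, "=": 0, "<": 0}
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            sums = {a + b for a in S for b in S}
            diffs = {a - b for a in S for b in S}
            if len(sums) > len(diffs):
                counts[">"] += 1
            elif len(sums) == len(diffs):
                counts["="] += 1
            else:
                counts["<"] += 1
    return counts

print(classify(15))
```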
number-theory  rather-interesting  to-write-about  experiment  nudge-targets  consider:feature-discovery 
14 days ago
[math/0608148] Sets with more sums than differences
Let A be a finite subset of the integers or, more generally, of any abelian group, written additively. The set A has "more sums than differences" if |A+A|>|A-A|. A set with this property is called an MSTD set. This paper gives explicit constructions of families of MSTD sets of integers.
number-theory  enumeration  combinatorics  rather-interesting  to-write-about  nudge-targets  consider:feature-discovery 
14 days ago
[1108.4500] Generalized More Sums Than Differences Sets
A More Sums Than Differences (MSTD, or sum-dominant) set is a finite set A⊂ℤ such that |A+A|>|A−A|. Though it was believed that the percentage of subsets of {0,...,n} that are sum-dominant tends to zero, in 2006 Martin and O'Bryant [MO] proved a positive percentage are sum-dominant. We generalize their result to the many different ways of taking sums and differences of a set. We prove that |ϵ_1 A+...+ϵ_k A| > |δ_1 A+...+δ_k A| a positive percent of the time for all nontrivial choices of ϵ_j, δ_j ∈ {−1,1}. Previous approaches proved the existence of infinitely many such sets given the existence of one; however, no method existed to construct such a set. We develop a new, explicit construction for one such set, and then extend to a positive percentage of sets.
We extend these results further, finding sets that exhibit different behavior as more sums/differences are taken. For example, with notation as above, we prove that for any m, |ϵ_1 A+...+ϵ_k A| − |δ_1 A+...+δ_k A| = m a positive percentage of the time. We find the limiting behavior of kA=A+...+A for an arbitrary set A as k→∞ and an upper bound of k for such behavior to settle down. Finally, we say A is k-generational sum-dominant if A, A+A, ..., kA are all sum-dominant. Numerical searches were unable to find even a 2-generational set (heuristics indicate the probability is at most 10^(−9), and almost surely significantly less). We prove the surprising result that for any k a positive percentage of sets are k-generational, and no set can be k-generational for all k.
number-theory  rather-interesting  to-write-about  enumeration  nudge-targets  consider:feature-discovery 
14 days ago
[1705.10831] The Self-Organization of Dragon Kings
Surprisingly common outliers of a distribution tail, known as Dragon Kings, are seen in many complex systems. It has been argued that the general conditions for Dragon Kings in self-organized systems are high system coupling and low heterogeneity. In this Letter, we introduce a novel mechanism of Dragon Kings by discussing two closely-related stylized models of cascading failures. Although the first variant (based on simple contagion spreading and inoculation) exhibits well-studied self-organized criticality, the second one (based on both simple and complex contagion spreading) creates self-organized Dragon Kings in the failure size distribution. Next, we begin to understand the mechanistic origin of these Dragon Kings by mapping the probability of an initial cascade to a generalized birthday problem, which helps demonstrate that the Dragon King cascade is due to initial failures whose size exceeds a threshold that is infinitesimal compared to the size of the network. We use this finding to predict the onset of Dragon Kings with high accuracy using only logistic regression. Finally, we devise a simple control strategy that can decrease the frequency of Dragon Kings by orders of magnitude. We conclude with remarks on the applicability of both models to natural and engineered systems.
self-organization  complex-systems  power-laws  rather-interesting  simulation  to-write-about 
14 days ago
Silicon Valley Still Doesn't Care About Work-Life Balance | WIRED
Silicon Valley’s emphasis on work-life balance may be evolving, but its priesthood still values a very particular kind of grit. This ideological tension came to blows earlier this week in a marathon Twitter fight that started on Memorial Day, with anecdotal evidence and closing arguments still trickling in days later.

The dialogue began innocently enough when Blake Robbins, a tech investor who has worked or interned for companies like Google, Nest, and SpaceX, deployed a flurry of tweets about his philosophy on work-life balance. “When I first got into tech. I thought it was ‘cool’ to work on the weekends or holidays. I quickly realized that’s a recipe for disaster,” Robbins wrote. “Not hanging with friends and family because you’re working isn’t ‘cool.’ Burning out isn’t ‘cool.’ I promise you…your competition isn’t beating you because they are working more hours than you. It’s because they are working smarter.”

But the mood quickly turned. “Totally false,” venture capitalist Keith Rabois tweeted back at Robbins. “Read a bio of Elon [Musk]. Or about Amazon. Or about the first 4 years of FB. Or PayPal. Or Bill Bellichick [sic]. It is pure arrogance to believe you can outsmart other talented people.”
worklife  assholes-in-charge  corporatism  propserity-gospel  social-psychology  entrepreneurship-as-pathology 
21 days ago
Mob Programming
There are many reasons why we like mob programming. First of all, getting the best of the team is better than getting the most out of the team (as per this great animation). We focus on getting the most important thing done rather than keeping everyone maximally busy.

Because we discuss our work all day, communication is improved within the team and there is less need for meetings. Our code quality is improved because we get the best ideas from the whole team, and we continually review our work. This leads naturally to collective code ownership.

Because everyone is on the same page, we avoid delays caused by team members being unavailable (whether due to meetings, sickness, vacation, or other). We aren't slowed down by merge conflicts from other members of our same team. And we generally don't get stuck on a problem because there are lots of eyes and minds working on the problem. This all leads to more consistency in both speed of delivery and how we write our code.

Mobbing allows us to learn from each other all the time. New or junior developers are able to come up to speed quickly because there is always someone there to help them. If there is a need to research or experiment, an individual can disengage from the mob for a while to learn.

The flexibility for someone to step away from the mob also provides slack time to deal with other concerns (like email), or just to take a break. Social coding can be exhausting, especially when you are new to it and haven't developed the associated skills. This can especially help more introverted team members.

If you have a team you enjoy being with, mob programming can also be a lot of fun!
mob-programming  software-development-is-not-programming  software-development  teams 
22 days ago
Surprise Search | Autonomous Computational Game Designers
Surprise search is a novel algorithm that takes inspiration from the notion of surprise for unconventional discovery in computational creativity.

The algorithm mimics the self-surprise cognitive process of creativity and equips computational creators with the ability to search for outcomes that deviate from the algorithm’s expected behavior or process.
generative-art  generative-models  novelty-search  machine-learning  algorithms  to-write-about  to-do 
22 days ago
Recurrent generative auto-encoders and novelty search | ARAYA Inc.
This post summarizes a bunch of connected tricks and methods I explored with the help of my co-authors. Following the previous post, about the stability properties of GANs, the overall aim was to improve our ability to train generative models stably and accurately, but we went through a lot of variations and experiments with different methods on the way. I’ll try to explain why I think these things worked, but we’re still exploring it ourselves as well.

The basic problem is that generative neural network models seem to either be stable but fail to properly capture higher-order correlations in the data distribution (which manifests as blurriness in the image domain), or they are very unstable to train due to having to learn both the distribution and the loss function at the same time, leading to issues like non-stationarity and positive feedbacks. The way GANs capture higher order correlations is to say ‘if there’s any distinguishable statistic from real examples, the discriminator will exploit that’. That is, they try to make things individually indistinguishable from real examples, rather than in the aggregate. The cost of that is the instability arising from not having a joint loss function – the discriminator can make a move that disproportionately harms the generator, and vice versa.
novelty-search  generative-models  machine-learning  algorithms  rather-interesting  to-write-about  nudge-targets 
22 days ago
Musical Novelty Search – samim – Medium
While Novelty Search is useful for musical discovery, it is clearly only part of a bigger puzzle. In the past years, researchers have come up with many improvements, extensions and new ideas going beyond pure novelty search. Here is a selection of related research, worth checking out:
novelty-search  machine-learning  algorithms  generative-models  to-write-about 
22 days ago
The zombie robot argument lurches on: There is no evidence that automation leads to joblessness or inequality | Economic Policy Institute
What is remarkable about this media narrative is that there is a strong desire to believe it despite so little evidence to support these claims. There clearly are serious problems in the labor market that have suppressed job and wage growth for far too long; but these problems have their roots in intentional policy decisions regarding globalization, collective bargaining, labor standards, and unemployment levels, not technology.

This report highlights the paucity of the evidence behind the alleged robot apocalypse, particularly as mischaracterized in the media coverage of the 2017 Acemoglu and Restrepo (A&R) report. Yes, automation has led to job displacements in particular occupations and industries in the past, but there is no basis for claiming that automation has led—or will lead—to increased joblessness, unemployment, or wage stagnation overall. We argue that the current excessive media attention to robots and automation destroying the jobs of the past and leaving us jobless in the future is a distraction from the main issues that need to be addressed: the poor wage growth and inequality caused by policies that have shifted economic power away from low- and moderate-wage workers. It is also the case that, as Atkinson and Wu (2017) argue, our productivity growth is too low, not too high.

Rather than debating possible problems that are more than a decade away, policymakers need to focus on addressing the decades-long crisis of wage stagnation by creating good jobs and supporting wage growth. And as it turns out, policies to expand good jobs and increase wages are the same measures needed to ensure that workers potentially displaced by automation have good jobs to transition to. For these workers, the education and training touted as solutions in the mainstream robot narrative will be inadequate, just as they were not adequate to help displaced manufacturing workers over the last few decades.
public-policy  looking-to-see  economics  automation 
22 days ago
Abstract simplicial complex - Wikipedia
In mathematics, an abstract simplicial complex is a purely combinatorial description of the geometric notion of a simplicial complex, consisting of a family of non-empty finite sets closed under the operation of taking non-empty subsets.[1] In the context of matroids and greedoids, abstract simplicial complexes are also called independence systems.[2]
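
The definition is directly checkable: every non-empty subset of every face must itself be a face. A small sketch that verifies downward-closure and builds the complex generated by a set of maximal faces:
```python
# Abstract simplicial complex sketch: check downward-closure straight from the
# definition, and generate the complex spanned by given maximal faces.
from itertools import chain, combinations

def nonempty_subsets(face):
    face = tuple(face)
    return {frozenset(c) for r in range(1, len(face) + 1)
            for c in combinations(face, r)}

def is_complex(faces):
    faces = {frozenset(f) for f in faces}
    return all(nonempty_subsets(f) <= faces for f in faces)

def generated_by(maximal_faces):
    return set(chain.from_iterable(nonempty_subsets(f) for f in maximal_faces))

K = generated_by([{"a", "b", "c"}, {"c", "d"}])   # a filled triangle plus an edge
print(is_complex(K), len(K))                      # True 9
```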
topology  data-analysis  data-structures  nudge  consider:adding-as-primitive  hypergraphs  to-write-about 
22 days ago
Tutorial · appliedtopology/javaplex Wiki
Javaplex is a Java software package for computing the persistent homology of filtered simplicial complexes (or more generally, filtered chain complexes), with special emphasis on applications arising in topological data analysis (Tausz et al. 2014). The main author is Andrew Tausz. Javaplex is a re-write of the JPlex package, which was written by Harlan Sexton and Mikael Vejdemo-Johansson. The main motivation for the development of Javaplex was the need for a flexible platform that supported new directions of research in topological data analysis and computational persistent homology. The website for Javaplex is http://appliedtopology.github.io/javaplex/, the documentation overview is at https://github.com/appliedtopology/javaplex/wiki/Overview, and the javadoc tree for the library is at http://appliedtopology.github.io/javaplex/doc/.
topology  data-analysis  software  tutorial  feature-extraction 
22 days ago
Topological data analysis - Wikipedia
In applied mathematics, topological data analysis (TDA) is an approach to the analysis of datasets using techniques from topology. Extraction of information from datasets that are high-dimensional, incomplete and noisy is generally challenging. TDA provides a general framework to analyze such data in a manner that is insensitive to the particular metric chosen and provides dimensionality reduction and robustness to noise. Beyond this, it inherits functoriality, a fundamental concept of modern mathematics, from its topological nature, which allows it to adapt to new mathematical tools.

The initial motivation is to study the shape of data. TDA has combined algebraic topology and other tools from pure mathematics to allow mathematically rigorous study of "shape". The main tool is persistent homology, an adaptation of homology to point cloud data. Persistent homology has been applied to many types of data across many fields. Moreover, its mathematical foundation is also of theoretical importance. The unique features of TDA make it a promising bridge between topology and geometry.
data-analysis  reference  to-write-about  topology  pattern-discovery 
22 days ago
T3 Keynote - YouTube
Robin DeRosa kicks off our conference by helping us look at how working “open” expands the kinds of domains that we might engage in as instructors. Through her personal experiences, she will help us conceptualize “open” as a holistic way to think about the teaching and learning cycle, while offering examples of tools and techniques that can help faculty design courses and programs that expand access to higher education, empower learners, and connect students to the world outside the college.
publishing  disintermediation-in-action  academic-culture  pedagogy  textbooks  to-watch 
22 days ago
There are bots. Look around.
So we’re at a point in which our marketplace of ideas bears striking resemblance to the financial markets in the early days of HFT: deliberate manipulation, unanticipated feedback loops, and malicious algorithms are poisoning the ecosystem, introducing fragility and destroying confidence. But unlike in finance, it’s no one’s job to be looking at this. It’s no one’s job to regulate this.
social-networks  social-engineering  propaganda  bots  politics  truthiness 
22 days ago
BIRDS-Lab
Professor Revzen and his team at the Biologically Inspired Robotics and Dynamical Systems (BIRDS) Lab are working on discovering, modeling, and reproducing the strategies animals use when interacting with physical objects. This work consists of collaboration with biomechanists to analyze experimental data, developing new mathematical tools for modeling and estimation of model parameters, and construction of robots which employ the new principles.
hey-I-know-this-guy  robotics  engineering-design  engineering  nudge-targets  to-write-about  local 
4 weeks ago
[1703.01350] Approximate Convex Hulls
We investigate the PPI algorithm as a means for computing approximate convex hulls. We explain how the algorithm computes the curvature of points and prove consistency and convergence. We also extend the algorithm to compute approximate convex hulls described in terms of hyperplanes.
computational-geometry  approximation  algorithms  rather-interesting  to-write-about 
4 weeks ago
[1504.01426] Juggling card sequences
Juggling patterns can be described by a sequence of cards which keep track of the relative order of the balls at each step. This interpretation has many algebraic and combinatorial properties, with connections to Stirling numbers, Dyck paths, Narayana numbers, boson normal ordering, arc-labeled digraphs, and more. Some of these connections are investigated with a particular focus on enumerating juggling patterns satisfying certain ordering constraints, including where the number of crossings is fixed.
combinatorics  representation  enumeration  rather-interesting  to-write-about  mathematical-recreations 
4 weeks ago
[1501.04067] Partition and sum is fast
We consider the following "partition and sum" operation on a natural number: Treating the number as a long string of digits, insert several plus signs in between some of the digits and carry out the indicated sum. This results in a smaller number and repeated application can always reduce the number to a single digit. We show that surprisingly few iterations of this operation are needed to get down to a single digit.
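
The operation is fun to play with. One extreme (and always admissible) choice of partition puts a plus sign between every pair of digits, i.e. takes the digit sum; the sketch below counts how many of those steps reach a single digit. The paper's result concerns the operation in general, where the plus signs may go anywhere:
```python
# One admissible choice of partition: a plus sign between every pair of
# digits (the digit sum).  Count the steps needed to reach a single digit.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def steps_to_single_digit(n):
    steps = 0
    while n >= 10:
        n = digit_sum(n)
        steps += 1
    return n, steps

print(steps_to_single_digit(987654321987654321))   # an 18-digit number collapses in 2 steps
```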
number-theory  algorithms  rather-interesting  rather-odd  to-write-about  mathematical-recreations 
4 weeks ago
[1702.05808v2] Enumerating multiplex juggling patterns
Mathematics has been used in the exploration and enumeration of juggling patterns. In the case when we catch and throw one ball at a time the number of possible juggling patterns is well-known. When we are allowed to catch and throw any number of balls at a given time (known as multiplex juggling) the enumeration is more difficult and has only been established in a few special cases. We give a method of using cards related to "embeddings" of ordered partitions to enumerate the count of multiplex juggling sequences, determine these counts for small cases, and establish some combinatorial properties of the set of cards needed.
mathematical-recreations  combinatorics  enumeration  representation  to-write-about  rather-interesting  consider:generalizations 
4 weeks ago
[1412.8533] The mathematics of the flip and horseshoe shuffles
We consider new types of perfect shuffles wherein a deck is split in half, one half of the deck is "reversed", and then the cards are interlaced. Flip shuffles are when the reversal comes from flipping the half over so that we also need to account for face-up/face-down configurations while horseshoe shuffles are when the order of the cards is reversed but all cards still face the same direction. We show that these shuffles are closely related to faro shuffling and determine the order of the associated shuffling groups.
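
The horseshoe case is concrete enough to simulate: cut the deck in half, reverse one half, interlace, and see how many repeats restore the original order. Which half gets reversed and which leads the interlacing are my assumptions here; the paper pins down the exact conventions:
```python
# Sketch of a horseshoe shuffle on an even-sized deck: cut in half, reverse
# the second half, interlace.  (Which half is reversed / leads is an
# assumption; the paper fixes the conventions.)  `order` counts how many
# shuffles restore the starting order.
def horseshoe(deck):
    n = len(deck) // 2
    top, bottom = deck[:n], deck[n:][::-1]
    out = []
    for a, b in zip(top, bottom):
        out += [a, b]
    return out

def order(shuffle, size):
    start = list(range(size))
    deck, k = shuffle(start), 1
    while deck != start:
        deck, k = shuffle(deck), k + 1
    return k

for size in (8, 12, 16, 24, 52):
    print(size, order(horseshoe, size))
```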
mathematical-recreations  rather-interesting  to-write-about  nudge-targets  consider:looking-to-see  consider:novelty-search 
4 weeks ago
[1410.3564v1] Randomized Triangle Algorithms for Convex Hull Membership
We present randomized versions of the triangle algorithm introduced in [kal14]. The triangle algorithm tests membership of a distinguished point p ∈ ℝ^m in the convex hull of a given set S of n points in ℝ^m. Given any iterate p′ ∈ conv(S), it searches for a pivot, a point v ∈ S so that d(p′,v) ≥ d(p,v). It replaces p′ with the point on the line segment p′v closest to p and repeats this process. If a pivot does not exist, p′ certifies that p ∉ conv(S). Here we propose two random variations of the triangle algorithm that allow relaxed steps so as to take more effective steps possible in subsequent iterations. One is inspired by the chaos game known to result in the Sierpinski triangle. The incentive is that randomized iterates together with a property of the Sierpinski triangle would result in effective pivots. Bounds on their expected complexity coincide with those of the deterministic version derived in [kal14].
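
The deterministic step described here is compact enough to sketch directly: hunt for a pivot v ∈ S with d(p′, v) ≥ d(p, v); if one exists, move p′ to the point on segment p′v nearest p; if none exists, p′ witnesses that p is outside the hull. The greedy pivot choice and the ε stopping rule below are my own simplifications, and the paper's randomized variants are not shown:
```python
# Sketch of the deterministic triangle algorithm described in the abstract
# (not the paper's randomized variants).  Pivot selection and the eps
# stopping tolerance are my own simplifications.
import numpy as np

def triangle_algorithm(S, p, eps=1e-6, max_iter=10_000):
    S = np.asarray(S, dtype=float)
    p = np.asarray(p, dtype=float)
    pp = S[0].copy()                                   # initial iterate p' in conv(S)
    for _ in range(max_iter):
        if np.linalg.norm(pp - p) <= eps:
            return True, pp                            # p is (approximately) in conv(S)
        d_p = np.linalg.norm(S - p, axis=1)
        d_pp = np.linalg.norm(S - pp, axis=1)
        pivots = np.where(d_pp >= d_p)[0]              # v in S with d(p', v) >= d(p, v)
        if len(pivots) == 0:
            return False, pp                           # p' certifies p is outside conv(S)
        # Greedy choice: try every pivot, keep the update landing closest to p.
        best = None
        for i in pivots:
            seg = S[i] - pp
            t = np.clip(np.dot(p - pp, seg) / np.dot(seg, seg), 0.0, 1.0)
            cand = pp + t * seg                        # nearest point to p on segment p'v
            if best is None or np.linalg.norm(cand - p) < np.linalg.norm(best - p):
                best = cand
        pp = best
    return np.linalg.norm(pp - p) <= eps, pp

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(triangle_algorithm(square, (0.25, 0.25))[0])     # inside the square: True
print(triangle_algorithm(square, (1.5, 0.5))[0])       # outside: False
```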
computational-geometry  algorithms  convex-hulls  rather-interesting  nudge-targets  consider:representation  consider:looking-to-see 
4 weeks ago
SMALL | Mathematics & Statistics
The SMALL Undergraduate Research Project is a nine-week residential summer program in which undergraduates investigate open research problems in mathematics. One of the largest programs of its kind in the country, SMALL is supported in part by a National Science Foundation grant for Research Experiences for Undergraduates and by the Science Center of Williams College. Around 500 students have participated in the project since its inception in 1988.

Students work in small groups directed by individual faculty members. Many participants have published papers and presented talks at research conferences based on work done in SMALL. Some have gone on to complete PhD’s in Mathematics. During off hours, students enjoy the many attractions of summer in the Berkshires: hiking, biking, plays, concerts, etc. Weekly lunches, teas, and casual sporting events bring SMALL students together with faculty and other students spending the summer doing research at Williams College.
open-questions  mathematics  rather-interesting  academic-culture  collaboration 
4 weeks ago
About | Code Ocean
Code Ocean is a cloud-based computational reproducibility platform that provides researchers and developers an easy way to share, discover and run code published in academic journals and conferences.
More and more of today's research includes software code, statistical analysis and algorithms that are not included in traditional publishing. But they are often essential to reproducing the research results and reusing them in a new product or research. This creates a major roadblock for researchers, one that inspired the first steps of Code Ocean as part of the 2014 Runway Startup Postdoc Program at the Jacobs Technion Cornell Institute. Today, the company employs more than 10 people and officially launched the product in February 2017.
For the first time, researchers, engineers, developers and scientists can upload code and data in 10 programming languages and link working code in a computational environment with the associated article for free. We assign a Digital Object Identifier (DOI) to the algorithm, providing correct attribution and a connection to the published research.
software-development-is-not-programming  open-access  open-source  academic-culture  reproducibility 
4 weeks ago
Impact of Social Sciences – Addicted to the brand: The hypocrisy of a publishing academic
If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the three principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”
There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.
academic-culture  publishing  disintermediation-in-action 
4 weeks ago
Education Technology as 'The New Normal'
This “new normal” does not simply argue that governmental regulations impede innovation. It posits government itself as an obstacle to change. It embraces libertarianism; it embraces “free markets.” It embraces a neoliberalism that calls for shrinking budgets for public services, including education – a shifting of dollars to private industry.
Education needs to change, we have long been told. It is outmoded. Inefficient. And this “new normal” – in an economic sense much more than a pedagogical one – has meant schools have been tasked to “do more with less” and specifically to do more with new technologies which promise greater efficiency, carrying with them the values of business and markets rather than the values of democracy or democratic education.
These new technologies, oriented towards consumers and consumption, privilege an ideology of individualism. In education technology, as in advertising, this is labeled “personalization.” The flaw of traditional education systems, we are told, is that they focus too much on the group, the class, the collective. So we see education being reframed as a technologically-enhanced series of choices – consumer choices. Technologies monitor and extract data in order to maximize “engagement” and entertainment.
I fear that new normal, what it might really mean for teaching, for learning, for scholarship.
neoliberalism  individualism  pedagogy  cultural-norms  corporatism  the-bad-future  essay  to-write 
4 weeks ago
Magic Carpets
Abstract: A set-theoretic structure, the magic carpet, is defined and some of its combinatorial properties explored. The magic carpet is a generalization and abstraction of labeled diagrams such as magic squares and magic graphs, in which certain configurations of points on the diagram add to the same value. Some basic definitions and theorems are presented as well as computer-generated enumerations of small non-isomorphic magic carpets of various kinds.
combinatorics  to-read  to-understand  to-write-about  magic-squares  mathematical-recreations  generalization  nudge-targets 
4 weeks ago
[1609.00206] Optimal point sets determining few distinct triangles
We generalize work of Erdos and Fishburn to study the structure of finite point sets that determine few distinct triangles. Specifically, we ask for a given t, what is the maximum number of points that can be placed in the plane to determine exactly t distinct triangles? Denoting this quantity by F(t), we show that F(1)=4, F(2)=5, and F(t)<48(t+1) for all t. We also completely characterize the optimal configurations for t=1,2.
optimization  plane-geometry  nudge-targets  Erdös  to-write-about  benchmarks 
4 weeks ago
[1610.07836v1] Classification of crescent configurations
Let n points be in crescent configurations in ℝ^d if they lie in general position in ℝ^d and determine n−1 distinct distances, such that for every 1≤i≤n−1 there is a distance that occurs exactly i times. Since Erdős' conjecture in 1989 on the existence of N sufficiently large such that no crescent configurations exist on N or more points, he, Pomerance, and Palásti have given constructions for n up to 8 but nothing is yet known for n≥9. Most recently, Burt et al. had proven that a crescent configuration on n points exists in ℝ^(n−2) for n≥3. In this paper, we study the classification of these configurations on 4 and 5 points through graph isomorphism and rigidity. Our techniques, which can be generalized to higher dimensions, offer a new viewpoint on the problem through the lens of distance geometry and provide a systematic way to construct crescent configurations.
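
Setting the general-position requirement aside, the distance condition is easy to test: the n points must determine n−1 distinct distances with multiplicities exactly 1, 2, ..., n−1. A small checker, with a 4-point example (an equilateral triangle plus a fourth point on the perpendicular bisector of one side) that hits the 1, 2, 3 pattern:
```python
# Check the distance-multiplicity part of the crescent condition: n points
# should determine n-1 distinct distances occurring 1, 2, ..., n-1 times.
# General position is not checked in this sketch.
import math
from collections import Counter
from itertools import combinations

def distance_profile(points, ndigits=9):
    d2 = [round(sum((a - b) ** 2 for a, b in zip(p, q)), ndigits)
          for p, q in combinations(points, 2)]          # squared distances, bucketed
    return sorted(Counter(d2).values())

def has_crescent_distances(points):
    return distance_profile(points) == list(range(1, len(points)))

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2), (0.5, -0.5)]
print(distance_profile(pts), has_crescent_distances(pts))   # [1, 2, 3] True
```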
number-theory  to-write-about  nudge-targets 
4 weeks ago
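A small checker for the distance-multiplicity condition in the definition above (it deliberately does not verify the general-position requirement): n points should determine n−1 distinct distances with multiplicities exactly 1, 2, …, n−1. The isosceles triangle used as the example is just the trivial n = 3 case; names and coordinates are my own.

    from itertools import combinations
    from collections import Counter

    def has_crescent_distances(points):
        """Check the multiplicity condition: n points, n-1 distinct distances,
        some distance occurring exactly i times for every i = 1..n-1.
        General position is NOT checked here."""
        n = len(points)
        d2 = Counter((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                     for p, q in combinations(points, 2))
        return sorted(d2.values()) == list(range(1, n))

    # n = 3: one side of one length, two sides of another
    print(has_crescent_distances([(0, 0), (2, 0), (1, 2)]))   # True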
[1705.02584] On the structure of large sum-free sets of integers
A set of integers is called sum-free if it contains no triple of elements x,y,z with x+y=z. In this paper we provide a structural characterisation of sum-free subsets of {1,2,…,n} of density at least 2/5−c, where c is an absolute positive constant. As an application we derive a stability version of Hu's theorem [Proc. Amer. Math. Soc. 80 (1980), 711-712] about partitioning sets of integers into two sum-free subsets. We then use this result to show that the number of subsets of {1,2,…,n} which can be partitioned into two sum-free sets is Θ(2^{4n/5}), confirming a conjecture of Hancock, Staden and Treglown [arXiv:1701.04754].
number-theory  to-write-about  nudge-targets  consider:classification 
4 weeks ago
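The basic property in the entry above is easy to check by brute force. A minimal sketch (function name and the small n are mine), together with the two classic extremal examples in {1,…,n}: the odd numbers, and the top half of the interval.

    def is_sum_free(A):
        """True if no x, y, z in A (x and y not necessarily distinct) satisfy x + y = z."""
        S = set(A)
        return all((x + y) not in S for x in S for y in S)

    n = 30
    odds = set(range(1, n + 1, 2))
    top_half = set(range(n // 2 + 1, n + 1))
    print(is_sum_free(odds), is_sum_free(top_half))   # True True: the two classic families
    print(is_sum_free({1, 2, 3}))                     # False, since 1 + 2 = 3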
[1201.1317v1] On sets of integers which are both sum-free and product-free
We consider sets of positive integers containing no sum of two elements in the set and also no product of two elements. We show that the upper density of such a set is strictly smaller than 1/2 and that this is best possible. Further, we also find the maximal order for the density of such sets that are also periodic modulo some positive integer.
number-theory  to-write-about  nudge-targets 
4 weeks ago
[math/0609373] Lowest Terms Revisited
In the September 1994 issue of Math Horizons the following problem is given in the Problem Section (p. 33, Problem 5): Lowest Terms - what fraction has the smallest denominator in the interval (19/94, 17/76)? In this paper we develop a general algorithm for solving problems of this type.
mathematical-recreations  to-write-about  number-theory  nudge-targets 
4 weeks ago
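Not the paper's general algorithm, just a brute-force baseline for the stated instance: scan denominators in increasing order and return the first fraction strictly inside the interval. For (19/94, 17/76) this should find 2/9. Function name is mine.

    from fractions import Fraction

    def smallest_denominator_fraction(lo, hi):
        """Fraction with the smallest denominator strictly between lo and hi.
        Brute force over denominators; fine when the answer is small."""
        q = 1
        while True:
            # smallest integer p with p/q strictly greater than lo
            p = (lo.numerator * q) // lo.denominator + 1
            if Fraction(p, q) < hi:
                return Fraction(p, q)
            q += 1

    print(smallest_denominator_fraction(Fraction(19, 94), Fraction(17, 76)))  # 2/9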
[1303.0605v1] Explicit Constructions of Large Families of Generalized More Sums Than Differences Sets
A More Sums Than Differences (MSTD) set is a set of integers A contained in {0, ..., n-1} whose sumset A+A is larger than its difference set A-A. While it is known that as n tends to infinity a positive percentage of subsets of {0, ..., n-1} are MSTD sets, the methods to prove this are probabilistic and do not yield nice, explicit constructions. Recently Miller, Orosz and Scheinerman gave explicit constructions of a large family of MSTD sets; though their density is less than a positive percentage, their family's density among subsets of {0, ..., n-1} is at least C/n^4 for some C>0, significantly larger than the previous constructions, which were on the order of 1 / 2^{n/2}. We generalize their method and explicitly construct a large family of sets A with |A+A+A+A| > |(A+A)-(A+A)|. The additional sums and differences allow us greater freedom than in Miller, Orosz and Scheinerman, and we find that for any epsilon>0 the density of such sets is at least C / n^epsilon. In the course of constructing such sets we find that for any integer k there is an A such that |A+A+A+A| - |A+A-A-A| = k, and show that the minimum span of such a set is 30.
number-theory  open-questions  generators  algorithms  to-write-about 
4 weeks ago
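A looking-to-see sketch for the quantities in this cluster of MSTD entries: compute |A+A| and |A−A| for Conway's classic MSTD set, plus the generalized comparison |A+A+A+A| versus |(A+A)−(A+A)| mentioned in the abstract above. The helper names are mine; only the fact that Conway's set has more sums than differences is asserted.

    from itertools import product

    def sumset(A, B, sign=+1):
        """A + B if sign is +1, A - B if sign is -1."""
        return {a + sign * b for a in A for b in B}

    def k_fold_sums(A, k):
        """Set of all sums of k elements of A, with repetition."""
        return {sum(t) for t in product(A, repeat=k)}

    A = {0, 2, 3, 4, 7, 11, 12, 14}          # Conway's classic MSTD set
    AA = sumset(A, A, +1)                     # A + A
    AmA = sumset(A, A, -1)                    # A - A
    print(len(AA), len(AmA))                  # more sums than differences
    assert len(AA) > len(AmA)

    A4 = k_fold_sums(A, 4)                    # A + A + A + A
    AAmAA = sumset(AA, AA, -1)                # (A + A) - (A + A)
    print(len(A4), len(AAmAA))                # the generalized comparison from the entry above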
[1512.01727v3] Solving the Subset Sum Problem with Heap-Ordered Subset Trees
In the field of algorithmic analysis, one of the more well-known exercises is the subset sum problem. That is, given a set of integers, determine whether one or more integers in the set can sum to a target value. Aside from the brute-force approach of verifying all combinations of integers, several solutions have been found, ranging from clever uses of various data structures to computationally-efficient approximation solutions. In this paper, a unique approach is discussed which builds upon the existing min-heap solution for positive integers, introducing a tree-based data structure influenced by the binomial heap. Termed the subset tree, this data structure solves the subset sum problem for all integers in time O(N^3 k log k), where N is the length of the set and k is the index of the list of subsets that is being searched.
algorithms  number-theory  rather-interesting  to-write-about  hard-problems  nudge-targets 
4 weeks ago
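For comparison with the subset-tree approach above, here is the textbook pseudo-polynomial dynamic program for the decision version of subset sum, restricted to nonnegative integers. This is the standard baseline, not the paper's data structure.

    def subset_sum(nums, target):
        """Decide whether some subset of nums (nonnegative ints) sums to target.
        Classic O(len(nums) * target) dynamic program."""
        reachable = {0}
        for x in nums:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
    print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False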
[math/0608148v2] Sets with more sums than differences
Let A be a finite subset of the integers or, more generally, of any abelian group, written additively. The set A has "more sums than differences" if |A+A|>|A-A|. A set with this property is called an MSTD set. This paper gives explicit constructions of families of MSTD sets of integers.
number-theory  open-questions  to-write-about  nudge-targets  consider:classification 
4 weeks ago
[1608.03256v1] When Sets Can and Cannot Have MSTD Subsets
A finite set of integers A is a More Sums Than Differences (MSTD) set if |A+A|>|A−A|. While almost all subsets of {0,…,n} are not MSTD, interestingly a small positive percentage are. We explore sufficient conditions on infinite sets of positive integers such that there are either no MSTD subsets, at most finitely many MSTD subsets, or infinitely many MSTD subsets. In particular, we prove no subset of the Fibonacci numbers is an MSTD set, establish conditions such that solutions to a recurrence relation have only finitely many MSTD subsets, and show there are infinitely many MSTD subsets of the primes.
number-theory  proof  to-write-about  open-questions  nudge-targets 
4 weeks ago
[1702.04166v1] A set of 12 numbers is not determined by its set of 4-sums
We present two sets of 12 integers that have the same sets of 4-sums. The proof of the fact that a set of 12 numbers is uniquely determined by the set of its 4-sums published 50 years ago is wrong, and we demonstrate an incorrect calculation in it.
number-theory  open-questions  to-write-about  puzzles  nudge-targets 
4 weeks ago
[1201.6339] Epidemics on Interconnected Networks
Populations are seldom completely isolated from their environment. Individuals in a particular geographic or social region may be considered a distinct network due to strong local ties, but will also interact with individuals in other networks. We study the susceptible-infected-recovered (SIR) process on interconnected network systems, and find two distinct regimes. In strongly-coupled network systems, epidemics occur simultaneously across the entire system at a critical infection strength β_c, below which the disease does not spread. In contrast, in weakly-coupled network systems, a mixed phase exists below β_c of the coupled network system, where an epidemic occurs in one network but does not spread to the coupled network. We derive an expression for the network and disease parameters that allow this mixed phase and verify it numerically. Public health implications of communities comprising these two classes of network systems are also mentioned.
epidemiology  network-theory  agent-based  rather-interesting  to-write-about  simulation 
5 weeks ago
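A toy discrete-time SIR simulation on two coupled random networks, offered only as an illustration of the setup in the entry above: the paper's treatment is analytic, while here the two communities, the number of inter-links, and all parameters are arbitrary choices of mine.

    import random
    import networkx as nx

    def sir_step(G, state, beta, gamma):
        """One synchronous SIR update: each I infects S neighbors w.p. beta, then recovers w.p. gamma."""
        new_state = dict(state)
        for u in G:
            if state[u] == "I":
                for v in G[u]:
                    if state[v] == "S" and random.random() < beta:
                        new_state[v] = "I"
                if random.random() < gamma:
                    new_state[u] = "R"
        return new_state

    # Two communities ("networks") coupled by a handful of inter-links.
    G = nx.disjoint_union(nx.erdos_renyi_graph(200, 0.03), nx.erdos_renyi_graph(200, 0.03))
    inter_links = 5   # weak coupling; raise this to mimic the strongly-coupled regime
    for _ in range(inter_links):
        G.add_edge(random.randrange(200), random.randrange(200, 400))

    state = {u: "S" for u in G}
    state[0] = "I"                     # seed the epidemic in network A
    beta, gamma = 0.2, 0.1
    for _ in range(200):
        state = sir_step(G, state, beta, gamma)

    ever_infected_A = sum(state[u] != "S" for u in range(200))
    ever_infected_B = sum(state[u] != "S" for u in range(200, 400))
    print(ever_infected_A, ever_infected_B)   # did the outbreak cross into network B?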
[1212.0404] Explosive transitions to synchronization in networked phase oscillators
We introduce a condition for an ensemble of networked phase oscillators to feature an abrupt, first-order phase transition from an unsynchronized to a synchronized state. This condition is met in a very wide spectrum of situations, and for various oscillators' initial frequency distributions. We show that the occurrence of such transitions is always accompanied by the spontaneous emergence of frequency-degree correlations in random network architectures. We also discuss ways to relax the condition, and to further extend the possibility for the first-order transition to occur, and illustrate how to engineer magnetic-like states of synchronization. Our findings thus indicate how to search for abrupt transitions in real-world applications.
coupled-oscillators  nonlinear-dynamics  to-do  to-write-about  rather-interesting 
5 weeks ago
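A rough numerical sketch of the ingredient the abstract above highlights: Kuramoto phase oscillators whose natural frequencies equal their degrees (the frequency-degree correlation), integrated with forward Euler while the coupling K is swept and the order parameter r is recorded. The network, parameters, and integration scheme are all my own arbitrary choices, not the paper's.

    import numpy as np
    import networkx as nx

    def order_parameter(theta):
        return np.abs(np.exp(1j * theta).mean())

    def kuramoto_r(A, omega, K, dt=0.01, steps=4000):
        """Euler-integrate dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)."""
        rng = np.random.default_rng(0)
        theta = rng.uniform(0, 2 * np.pi, size=len(omega))
        for _ in range(steps):
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta = theta + dt * (omega + K * coupling)
        return order_parameter(theta)

    G = nx.barabasi_albert_graph(100, 3, seed=1)
    A = nx.to_numpy_array(G)
    omega = A.sum(axis=1)                 # natural frequency = degree: the key correlation
    for K in [0.0, 0.05, 0.1, 0.2, 0.4]:
        print(K, round(kuramoto_r(A, omega, K), 3))   # look for an abrupt jump in r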
[0711.1778] Modules identification by a Dynamical Clustering algorithm based on chaotic Rössler oscillators
A new dynamical clustering algorithm for the identification of modules in complex networks has been recently introduced [BILPR]. In this paper we present a modified version of this algorithm based on a system of chaotic Rössler oscillators and we test its sensitivity on real and computer-generated networks with a well-known modular structure.
community-detection  dynamical-systems  coupled-oscillators  feature-construction  rather-interesting  nonlinear-dynamics  to-write-about  nudge-targets  consider:variant-overlays 
5 weeks ago
[1611.02617] Color-avoiding percolation
Many real-world networks have groups of similar nodes which are vulnerable to the same failure or adversary. Nodes can be colored in such a way that colors encode the shared vulnerabilities. Using multiple paths to avoid these vulnerabilities can greatly improve network robustness. Color-avoiding percolation provides a theoretical framework for analyzing this scenario, focusing on the maximal set of nodes which can be connected via multiple color-avoiding paths. In this paper we extend the basic theory of color-avoiding percolation that was published in [Krause et al., Phys. Rev. X 6 (2016) 041022]. We explicitly account for the fact that the same particular link can be part of different paths avoiding different colors. This fact was previously accounted for with a heuristic approximation. We compare this approximation with a new, more exact theory and show that the new theory is substantially more accurate for many avoided colors. Further, we formulate our new theory with differentiated node functions, as senders/receivers or as transmitters. In both functions, nodes can be explicitly trusted or avoided. With only one avoided color we obtain standard percolation. Avoiding additional colors one by one, we can understand the critical behavior of color-avoiding percolation. For heterogeneous color frequencies, we find that the colors with the largest frequencies control the critical threshold and exponent. Colors of small frequencies have only a minor influence on color-avoiding connectivity, thus allowing for approximations.
security  graph-theory  network-theory  robustness  algorithms  nudge-targets  consider:higher-order-features 
5 weeks ago
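A naive checker for the basic notion in the entry above: two nodes are color-avoiding connected if, for every color, they can still reach each other after all other nodes of that color are removed (the endpoints act as sender/receiver and are exempt). This is just the brute-force definition, not the paper's percolation theory; graph, coloring, and names are made up.

    import networkx as nx

    def color_avoiding_connected(G, color, u, v):
        """True if u and v stay connected when, for each color in turn,
        every interior node of that color is removed (u and v are never removed)."""
        for c in set(color.values()):
            keep = [x for x in G if color[x] != c or x in (u, v)]
            H = G.subgraph(keep)
            if not (u in H and v in H and nx.has_path(H, u, v)):
                return False
        return True

    # Tiny example: two redundant paths whose interior nodes have different colors.
    G = nx.Graph([(0, 1), (1, 3), (0, 2), (2, 3)])
    color = {0: "a", 1: "b", 2: "c", 3: "a"}
    print(color_avoiding_connected(G, color, 0, 3))   # True: avoid "b" via 2, avoid "c" via 1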
The Omidyar Network and the (Neoliberal) Future of Education
So, while in the US we see neoliberalism pushing to dismantle public institutions and public funding for public institutions, in the Global South, these very forces are there touting the “power of markets” to make sure public institutions can never emerge or thrive in the first place. Investors like the Omidyar Network are poised to extract value from the very people they promise their technologies and businesses are there to help.
Conveniently, the Omidyar Network’s investment portfolio also includes journalistic and research organizations that are all too poised to promote and endorse the narratives that aggrandize these very technocratic, market-based solutions.
neoliberalism  education  corporatism  disintermediation-in-action  not-a-good-idea 
6 weeks ago
Who Really Makes Money Off of Bail Bonds? - The Atlantic
A new report from the nonprofit Color of Change and the American Civil Liberties Union (ACLU) sheds further light on the country’s bail system. The report finds that around 70 percent of those currently in jail have yet to be convicted of a crime. Not unrelated: Between 1990 and 2009, the share of people arrested who were required to post money bail grew from 37 percent to 61 percent, according to the report.
corporatism  police  politics  racism  legalism  public-policy  the-crises 
6 weeks ago
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
aliens  Fermi-paradox  rather-interesting  to-write-about  via:absfac 
6 weeks ago
Trumpism: It’s Coming From the Suburbs | The Nation
If you’re looking for Trump’s implacable support, Texas trailer parks and Kentucky cabins are the wrong places to find it. Fascism develops over hands of poker in furnished basements, over the grill by the backyard pool, over beers on the commuter-rail ride back from the ball game—and in police stations and squad cars. 

fascism  cultural-norms 
6 weeks ago
[1609.07061] Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
representation  neural-networks  deep-learning  rather-interesting  to-write-about  consider:simple-examples 
6 weeks ago
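A NumPy toy of the core trick described in the abstract above: quantize weights and activations to ±1 for the forward pass. At train time the real-valued latent weights would be kept and updated through a straight-through estimator, which this inference-only sketch omits; shapes and data are invented.

    import numpy as np

    def binarize(x):
        """Deterministic sign binarization to {-1, +1} (zero mapped to +1)."""
        return np.where(x >= 0, 1.0, -1.0)

    def binary_dense(x, W_real, b):
        """Forward pass of one layer with 1-bit weights and 1-bit input activations.
        The multiply-accumulate then involves only +/-1 values, which is what
        lets hardware replace it with XNOR + popcount."""
        return binarize(x) @ binarize(W_real) + b

    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 16))          # a batch of 4 "activations"
    W = rng.standard_normal((16, 8))          # real-valued latent weights
    b = np.zeros(8)
    h = np.maximum(binary_dense(x, W, b), 0)  # ReLU on the integer-valued pre-activations
    print(h.shape, h[0])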
[1705.00241] Dynamic interdependence and competition in multilayer networks
From critical infrastructure, to physiology and the human brain, complex systems rarely occur in isolation. Instead, the functioning of nodes in one system often promotes or suppresses the functioning of nodes in another. Despite advances in structural interdependence, modeling interdependence and other interactions between dynamic systems has proven elusive. Here we define a broadly applicable dynamic dependency link and develop a general framework for interdependent and competitive interactions between general dynamic systems. We apply our framework to studying interdependent and competitive synchronization in multi-layer oscillator networks and cooperative/competitive contagions in an epidemic model. Using a mean-field theory which we verify numerically, we find explosive transitions and rich behavior which is absent in percolation models including hysteresis, multi-stability and chaos. The framework presented here provides a powerful new way to model and understand many of the interacting complex systems which surround us.
dynamical-systems  coupled-oscillators  network-theory  rather-interesting  to-write-about  simulation  consider:simple-examples  nudge-targets  consider:control-theory  consider:feature-discovery 
6 weeks ago
[1702.08359] Dynamic Word Embeddings via Skip-Gram Filtering
We present a probabilistic language model for time-stamped text data which tracks the semantic evolution of individual words over time. The model represents words and contexts by latent trajectories in an embedding space. At each moment in time, the embedding vectors are inferred from a probabilistic version of word2vec [Mikolov, 2013]. These embedding vectors are connected in time through a latent diffusion process. We describe two scalable variational inference algorithms---skip-gram smoothing and skip-gram filtering---that allow us to train the model jointly over all times; thus learning on all data while simultaneously allowing word and context vectors to drift. Experimental results on three different corpora demonstrate that our dynamic model infers word embedding trajectories that are more interpretable and lead to higher predictive likelihoods than competing methods that are based on static models trained separately on time slices.
natural-language-processing  machine-learning  representation  to-understand  skip-grams  consider:looking-to-see  to-write-about 
6 weeks ago
[1703.01141] Dynamic State Warping
The ubiquity of sequences in many domains has prompted significant recent interest in sequence learning, for which a basic problem is how to measure the distance between sequences. Dynamic time warping (DTW) aligns two sequences by nonlinear local warping and returns a distance value. DTW shows superior ability in many applications, e.g. video, image, etc. However, in DTW, two points are paired essentially based on point-to-point Euclidean distance (ED) without considering the autocorrelation of sequences. Thus, points with different semantic meanings, e.g. peaks and valleys, may be matched provided their coordinate values are similar. As a result, DTW is sensitive to noise and poorly interpretable. This paper proposes an efficient and flexible sequence alignment algorithm, dynamic state warping (DSW). DSW converts each time point into a latent state, which endows point-wise autocorrelation information. Alignment is performed by using the state sequences. Thus DSW is able to yield alignment that is semantically more interpretable than that of DTW. Using one nearest neighbor classifier, DSW shows significant improvement on classification accuracy in comparison to ED (70/85 wins) and DTW (74/85 wins). We also empirically demonstrate that DSW is more robust and scales better to long sequences than ED and DTW.
time-series  time-warping  representation  to-understand  algorithms  machine-learning  feature-construction 
6 weeks ago
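For reference, the classic dynamic time warping distance that DSW is positioned against: a compact O(len(a) * len(b)) implementation with absolute difference as the point-to-point cost. This is the baseline from the entry above, not the paper's DSW.

    def dtw(a, b):
        """Classic dynamic time warping distance between two 1-D sequences,
        using |a_i - b_j| as the local cost."""
        INF = float("inf")
        n, m = len(a), len(b)
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i][j] = cost + min(D[i - 1][j],      # insertion
                                     D[i][j - 1],      # deletion
                                     D[i - 1][j - 1])  # match
        return D[n][m]

    print(dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))   # 0.0: same shape, different lengths
    print(dtw([0, 1, 2, 1, 0], [2, 2, 2, 2, 2]))      # larger: genuinely different shapes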
[1704.00899] Dynamic Rank Maximal Matchings
We consider the problem of matching applicants to posts where applicants have preferences over posts. Thus the input to our problem is a bipartite graph G = (A U P,E), where A denotes a set of applicants, P is a set of posts, and there are ranks on edges which denote the preferences of applicants over posts. A matching M in G is called rank-maximal if it matches the maximum number of applicants to their rank 1 posts, subject to this the maximum number of applicants to their rank 2 posts, and so on.
We consider this problem in a dynamic setting, where vertices and edges can be added and deleted at any point. Let n and m be the number of vertices and edges in an instance G, and r be the maximum rank used by any rank-maximal matching in G. We give a simple O(r(m+n))-time algorithm to update an existing rank-maximal matching under each of these changes. When r = o(n), this is faster than recomputing a rank-maximal matching completely using a known algorithm like that of Irving et al., which takes time O(min(r + n, r*sqrt(n)) * m).
operations-research  optimization  rostering  scheduling  dynamic-optimization  to-write-about  consider:representation  consider:multiobjective-optimization 
6 weeks ago
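Not the paper's dynamic algorithm; a static sketch of one standard way to compute a rank-maximal matching, by giving rank-i edges weight (n+1)^(r−i), so that one extra edge of a better rank outweighs any number of worse-rank edges, and then running an off-the-shelf maximum-weight matching. All names and the tiny instance are mine.

    import networkx as nx

    def rank_maximal_matching(applicants, posts, ranked_edges):
        """ranked_edges: dict (applicant, post) -> rank (1 = best).
        Static exponential-weight reduction to maximum-weight matching;
        the paper's contribution is maintaining such a matching under updates."""
        n = len(applicants)
        r = max(ranked_edges.values())
        G = nx.Graph()
        for (a, p), rank in ranked_edges.items():
            G.add_edge(("A", a), ("P", p), weight=(n + 1) ** (r - rank))
        M = nx.max_weight_matching(G, maxcardinality=False)
        return {(u[1], v[1]) if u[0] == "A" else (v[1], u[1]) for u, v in M}

    edges = {("a1", "p1"): 1, ("a2", "p1"): 1, ("a2", "p2"): 2}
    print(rank_maximal_matching(["a1", "a2"], ["p1", "p2"], edges))
    # {('a1', 'p1'), ('a2', 'p2')}: as many rank-1 edges as possible, then rank-2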
[1704.00565] Dynamic Planar Embeddings of Dynamic Graphs
We present an algorithm to support the dynamic embedding in the plane of a dynamic graph. An edge can be inserted across a face between two vertices on the face boundary (we call such a vertex pair linkable), and edges can be deleted. The planar embedding can also be changed locally by flipping components that are connected to the rest of the graph by at most two vertices.
Given vertices u,v, linkable(u,v) decides whether u and v are linkable in the current embedding, and if so, returns a list of suggestions for the placement of (u,v) in the embedding. For non-linkable vertices u,v, we define a new query, one-flip-linkable(u,v) providing a suggestion for a flip that will make them linkable if one exists. We support all updates and queries in O(log^2 n) time. Our time bounds match those of Italiano et al. for a static (flipless) embedding of a dynamic graph.
Our new algorithm is simpler, exploiting that the complement of a spanning tree of a connected plane graph is a spanning tree of the dual graph. The primal and dual trees are interpreted as having the same Euler tour, and a main idea of the new algorithm is an elegant interaction between top trees over the two trees via their common Euler tour.
graph-layout  dynamical-systems  constraint-satisfaction  multiobjective-optimization  dynamic-optimization  to-write-about  consider:simple-examples  rather-interesting 
6 weeks ago
[1610.07631] Dynamic Phases of Active Matter Systems with Quenched Disorder
Depinning and nonequilibrium transitions within sliding states in systems driven over quenched disorder arise across a wide spectrum of size scales, from atomic friction at the nanoscale and flux motion in type-II superconductors at the mesoscale to colloidal motion in disordered media at the microscale and plate tectonics at geological length scales. Here we show that active matter or self-propelled particles interacting with quenched disorder under an external drive represents a new class of system that can also exhibit pinning-depinning phenomena, plastic flow phases, and nonequilibrium sliding transitions that are correlated with distinct morphologies and velocity-force curve signatures. When interactions with the substrate are strong, a homogeneous pinned liquid phase forms that depins plastically into a uniform disordered phase and then dynamically transitions first into a moving stripe coexisting with a pinned liquid and then into a moving phase separated state at higher drives. We numerically map the resulting dynamical phase diagrams as a function of external drive, substrate interaction strength, and self-propulsion correlation length. These phases can be observed for active matter moving through random disorder. Our results indicate that intrinsically nonequilibrium systems can exhibit additional nonequilibrium transitions when subjected to an external drive.
active-matter  simulation  rather-interesting  to-write-about  physics!  emergent-design  phase-transitions 
6 weeks ago
[1610.09787] Edward: A library for probabilistic modeling, inference, and criticism
Probabilistic modeling is a powerful approach for analyzing empirical information. We describe Edward, a library for probabilistic modeling. Edward's design reflects an iterative process pioneered by George Box: build a model of a phenomenon, make inferences about the model given data, and criticize the model's fit to the data. Edward supports a broad class of probabilistic models, efficient algorithms for inference, and many techniques for model criticism. The library builds on top of TensorFlow to support distributed training and hardware such as GPUs. Edward enables the development of complex probabilistic models and their algorithms at a massive scale.
machine-learning  framework  rather-interesting  representation  probabilistic-languages 
6 weeks ago
[1703.08052] Dynamic Bernoulli Embeddings for Language Evolution
Word embeddings are a powerful approach for unsupervised analysis of language. Recently, Rudolph et al. (2016) developed exponential family embeddings, which cast word embeddings in a probabilistic framework. Here, we develop dynamic embeddings, building on exponential family embeddings to capture how the meanings of words change over time. We use dynamic embeddings to analyze three large collections of historical texts: the U.S. Senate speeches from 1858 to 2009, the history of computer science ACM abstracts from 1951 to 2014, and machine learning papers on the Arxiv from 2007 to 2015. We find dynamic embeddings provide better fits than classical embeddings and capture interesting patterns about how language changes.
natural-language-processing  representation  rather-interesting  to-write-about  nudge-targets  consider:representation 
6 weeks ago
[1611.05321] Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning
We propose a novel way of using out-of-domain textual data to enhance the performance of existing image captioning systems. We evaluate this learning approach on a newly designed model that uses - and improves upon - building blocks from state-of-the-art methods. This model starts from detecting visual concepts present in an image which are then fed to a reviewer-decoder architecture with an attention mechanism. Unlike previous approaches that encode visual concepts using word embeddings, we instead suggest using regional image features which capture more intrinsic information. The main benefit of this architecture is that it synthesizes meaningful thought vectors that capture salient image properties and then applies a soft attentive decoder to decode the thought vectors and generate image captions. We evaluate our model on both Microsoft COCO and Flickr30K datasets and demonstrate that this model combined with our bootstrap learning method can largely improve performance and help the model to generate more accurate and diverse captions.
data-fusion  image-processing  deep-learning  rather-interesting  to-write-about  nudge-targets  consider:feature-discovery 
6 weeks ago