Critique My Plan: API Key for Authentication | Hacker News
We hash passwords because passwords are valuable across sites; it's a big deal to compromise someone's password, even on a random low-value application. That's not true of API keys. If your database is compromised, the API keys don't matter anymore. Don't bother encrypting them.
programming  security  cryptography 
12 hours ago
Why Have We Soured on the ‘Devil’s Advocate’? - The New York Times
It’s often argued that this makes our conversations increasingly polarized, dogmatic, intolerant of complexity and logically sloppy. It’s less often pointed out that this might be because they aren’t really “conversations” in the first place.
internet 
6 days ago
Busy and distracted? Everybody has been, since at least 1710 | Aeon Essays
Some people think that our willpower is so weak because our brains have been damaged by digital noise. But blaming technology for the rise in inattention is misplaced. History shows that the disquiet is fuelled not by the next new thing but by the threat this thing – whatever it might be – poses to the moral authority of the day.

Recent decades have seen a dramatic reversal in the conceptualisation of inattention. Unlike in the 18th century when it was perceived as abnormal, today inattention is often presented as the normal state. The current era is frequently characterised as the Age of Distraction, and inattention is no longer depicted as a condition that afflicts a few. Nowadays, the erosion of humanity’s capacity for attention is portrayed as an existential problem, linked with the allegedly corrosive effects of digitally driven streams of information relentlessly flowing our way. ‘The net seizes our attention only to scatter it,’ contends Nicholas Carr in The Shallows: How the Internet is Changing the Way We Think, Read and Remember (2010). According to the US neuroscientist Daniel Levitin, the distractions of the modern world can literally damage our brains.

The sublimation of anxieties about moral authority through the fetish of technologically driven distraction has acquired pathological proportions in relation to children and young people. Yet as most sensible observers understand, children who are inattentive to their teachers are often obsessively attentive to the text messages that they receive. The constant lament about inattentive youth in the Anglo-American world could be interpreted as a symptom of problems related to the exercise of adult authority.
technology  history  internet 
6 days ago
The Tech Industry’s Gender-Discrimination Problem | The New Yorker
She told me that when she entered the industry, in the late nineteen-nineties, women were vastly outnumbered by men, but the atmosphere was not as aggressive or money-obsessed as it is today. She described many of the early investors and entrepreneurs as “dorks,” united by the fact that they “were all interested in technology.” The environment changed, she said, after the early venture-capital firms started investing in tech. “They happened to all be white guys who had graduated from the same handful of élite colleges,” she said. “And they tended to make investments in new firms started by people they knew, or by people who were like them.” This created a model of hiring and investing that some refer to as the “Gates, Bezos, Andreessen, or Google model,” which Melinda Gates recently characterized as, “white male nerds who’ve dropped out of Harvard or Stanford.” Little has improved over the years: two recent studies found that, in 2016, only seven per cent of the partners in venture-capital firms were women and just two per cent of venture-capital funding went to female founders.

Pao said that the change was reinforced by another event, in 2012: the initial public offering of Facebook, at well above a hundred billion dollars, which cemented Silicon Valley’s reputation as the place to make a quick fortune. Tech companies increasingly began competing with banks and hedge funds for the most ambitious college graduates. “Now you had the frat boys coming in, and that changed the culture,” Pao said. “It was just a different vibe. People were talking more about the cool things they had done than the products they were building.”

“All of the gains that have been made by the labor movement over the years are being slowly chipped away under this guise of ‘We’re in hip Northern California, and everything we do is so cool.’ ”

Earlier this year, the Department of Labor conducted an initial audit of Google’s pay practices, and found, according to court testimony in April, “systemic compensation disparities against women pretty much across the entire workforce,” showing, one official has said, six to seven standard deviations between pay for men and women in nearly every job category.
feminism  discrimination  sexism  tech  racism 
9 days ago
Donald Trump Is the First White President - The Atlantic
The foundation of Donald Trump’s presidency is the negation of Barack Obama’s legacy.
racism  politics  usa 
13 days ago
PyWren Web Scraping
I figured instead of scraping each page, waiting, and then going on to the next, I could parallelize it. This was the perfect type of job for pywren: it's embarrassingly parallel and network-constrained.

The best part is, this level of usage falls into AWS Lambda's free tier. I used roughly 73,000 GB-seconds of execution, well below the 400,000 GB-second free-tier threshold. I was able to speed up the execution of my job from 8 hours to 4 minutes, for free. It's like having a superpower.
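
For context, the pywren call pattern looks roughly like this (a minimal sketch; `scrape_page` and the URL list are hypothetical stand-ins for the post's actual job):

```python
import pywren

def scrape_page(url):
    # Runs inside a Lambda worker: fetch one page and return its body.
    import requests  # imported in the worker, where the function executes
    return requests.get(url, timeout=30).text

urls = ["https://example.com/page-1", "https://example.com/page-2"]  # placeholder list

pwex = pywren.default_executor()
futures = pwex.map(scrape_page, urls)   # roughly one Lambda invocation per URL
pages = [f.result() for f in futures]   # gather results back locally
```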
pywren  serverless 
16 days ago
Small Functions considered Harmful – Cindy Sridharan – Medium
The fabulous Rubyist Sandi Metz has a famous talk called All The Little Things, where she posits that “duplication is far cheaper than the wrong abstraction”, and thus to “prefer duplication over the wrong abstraction”.

The problem with “small functions” though, is that the quest for small functions ends up begetting even more small functions, all of which tend to be given extremely verbose names in the spirit of making code self documenting and eschewing comments.

Proponents of smaller functions also almost invariably tend to champion that fewer arguments be passed to the function.

The problem with fewer function arguments is that one runs the risk of not making dependencies explicit.
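
A tiny illustration of that risk (hypothetical names): the zero-argument version hides its coupling to module state, while the extra argument makes the dependency visible at every call site.

```python
EXCHANGE_RATE = 1.08  # module-level state

def to_usd(amount_eur):
    # Implicit dependency: nothing in the signature reveals that the
    # result depends on EXCHANGE_RATE, and tests must patch the module.
    return amount_eur * EXCHANGE_RATE

def to_usd_explicit(amount_eur, rate):
    # Explicit dependency: one more argument, but the coupling is
    # visible and the function is trivially testable in isolation.
    return amount_eur * rate
```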
programming 
16 days ago
Statistical Machine Learning
Clear, concise notes from 2011/12 course taught from Elements of Statistical Learning
statistics  machinelearning  convexoptimization  neuralnetworks 
17 days ago
Review of Probability Theory Arian Maleki and Tom Do Stanford University
Probability theory is the study of uncertainty. Through this class, we will be relying on concepts from probability theory for deriving machine learning algorithms. These notes attempt to cover the basics of probability theory at a level appropriate for CS 229. The mathematical theory of probability is very sophisticated, and delves into a branch of analysis known as measure theory. In these notes, we provide a basic treatment of probability that does not address these finer details.
probability 
17 days ago
ADD / XOR / ROL: Two small notes on the "malicious use of AI" report
The most fascinating bit about the above is how fantastically presciently wrong Hardy was when speaking about the lack of war-like applications for number theory or relativity - RSA and nuclear weapons respectively. In a similar vein - I was in a relationship in the past with a woman who was a social anthropologist, and who often mocked my field of expertise for being close to the military funding agencies (this was in the early 2000s). The first thing that SecDef Gates did when he took his position was hire a bunch of social anthropologists to help DoD unravel the tribal structure in Iraq.

The point of this digression is: It is impossible for any scientist to imagine future uses and abuses of his scientific work. You cannot choose to work on "safe" or "unsafe" science - the only choice you have is between relevant and irrelevant, and the militaries of this world *will* use whatever is relevant to maximize their warfare capabilities.
ethics  machinelearning 
17 days ago
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Digital security. The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing tradeoff between the scale and efficacy of attacks. This may expand the threat associated with labor-intensive cyberattacks (such as spear phishing). We also expect novel attacks that exploit human vulnerabilities (e.g. through the use of speech synthesis for impersonation), existing software vulnerabilities (e.g. through automated hacking), or the vulnerabilities of AI systems (e.g. through adversarial examples and data poisoning).

Physical security. The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyberphysical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones).

Political security. The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.
security  machinelearning 
17 days ago
The Effective Remote Developer
David Copeland talks about what one can do to be at their best as a remote team member, as well as what one needs from environment, team, and company. It's not about technical stuff—it's the human stuff. He also talks about how one can be present and effective when not physically there.
remote 
17 days ago
Programming Training Data: The New Interface Layer for ML · Stanford DAWN
Our system, Snorkel—which we report on in a new VLDB 2018 paper posted here—is one attempt to build a system around this new type of interaction with ML. In Snorkel, we use no hand-labeled training data, but instead ask users to write labeling functions (LFs), bits of black-box code which label subsets of unlabeled data.
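
The labeling-function idea, in rough sketch form. This assumes the decorator-style API of later open-source Snorkel releases (the task and function names here are hypothetical), not necessarily the interface described in the VLDB paper:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier

ABSTAIN, SPAM, HAM = -1, 1, 0

@labeling_function()
def lf_contains_link(x):
    # Weak heuristic: messages containing URLs are often spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Another noisy signal; LFs may conflict or abstain freely,
    # and a downstream generative model denoises their votes.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": ["check out http://spam.example", "thanks!"]})
applier = PandasLFApplier([lf_contains_link, lf_short_message])
label_matrix = applier.apply(df)  # rows = examples, columns = LF outputs
```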
machinelearning 
17 days ago
pixelmonkey » Software planning for skeptics
Realistic schedules are the key to creating good software. It forces you to do the best features first and allows you to make the right decisions about what to build. [Good schedules] make your product better, delight your customers, and — best of all — let you go home at five o’clock every day.

Rather than trying to control our product process and turn it into a factory floor for software features, we need to be ruthless about prioritization, ensure we have adequate capacity (through hiring), and let engineers focus maniacally on single projects while they carry them through delivery.
engineering  management 
17 days ago
How Bias Enters a Model
We’ve demonstrated that a model trained on correct labels, and with no direct access to a particular attribute, can be biased against members of a group who deserve the preferred label, merely because that group has a higher incidence of the non-preferred label in the training data. Furthermore, this bias can be hidden, because it’s only revealed by comparing false positive rates at a fixed threshold, not other performance metrics.
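
A minimal sketch of the comparison the post describes, on synthetic data (all names and parameters are hypothetical): hold the decision threshold fixed and compare false positive rates across groups.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Group "b" has a higher incidence of the non-preferred label (y = 1),
# and the model's input feature correlates with group membership even
# though the group attribute itself is never shown to the model.
group = rng.choice(["a", "b"], size=n)
y = rng.random(n) < np.where(group == "a", 0.2, 0.4)
score = 2.0 * y + rng.normal(0.0, 1.0, n) + 0.5 * (group == "b")

threshold = 1.0
for g in ("a", "b"):
    negatives = (group == g) & ~y                     # truly-negative cases
    fpr = (score[negatives] >= threshold).mean()      # pushed over the bar anyway
    print(f"group {g}: false positive rate = {fpr:.3f}")
```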
statistics  machinelearning  bias 
17 days ago
Deep Learning is Easy - Learn Something Harder
Learn classic things like the EM algorithm, variational inference, unsupervised learning with linear Gaussian systems: PCA, factor analysis, Kalman filtering, slow feature analysis. I can also recommend Aapo Hyvarinen's work on ICA, pseudolikelihood. You should try to read (and understand) this seminal deep belief network paper.
machinelearning  deeplearning  probabilisticprogramming 
17 days ago
A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography
The gap between the difficulty of factoring large numbers and multiplying large numbers is shrinking as the number (i.e. the key's bit length) gets larger. As the resources available to decrypt numbers increase, the size of the keys needs to grow even faster. This is not a sustainable situation for mobile and low-powered devices that have limited computational power. The gap between factoring and multiplying is not sustainable in the long term.

All this means is that RSA is not the ideal system for the future of cryptography. In an ideal Trapdoor Function, the easy way and the hard way get harder at the same rate with respect to the size of the numbers in question. We need a public key system based on a better Trapdoor.

The elliptic curve discrete logarithm is the hard problem underpinning elliptic curve cryptography. Despite almost three decades of research, mathematicians still haven't found an algorithm to solve this problem that improves upon the naive approach. In other words, unlike with factoring, based on currently understood mathematics there doesn't appear to be a shortcut that is narrowing the gap in a Trapdoor Function based around this problem. This means that for numbers of the same size, solving elliptic curve discrete logarithms is significantly harder than factoring.
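
To make the asymmetry concrete, here is a toy sketch in Python (a tiny curve with made-up parameters, nothing like real key sizes): computing Q = kP takes O(log k) curve operations via double-and-add, while recovering k from P and Q is the discrete-log problem.

```python
# Toy curve y^2 = x^3 + 2x + 3 over F_97 -- hypothetical parameters,
# for illustration only; real curves use primes of ~256 bits.
p, a = 97, 2

def add(P, Q):
    # Standard affine point addition; None plays the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add: the "easy direction" of the trapdoor.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

P = (3, 6)            # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
print(mul(20, P))     # fast; finding 20 from P and mul(20, P) is the hard part
```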

To visualize how much harder it is to break, Lenstra recently introduced the concept of "Global Security." You can compute how much energy is needed to break a cryptographic algorithm, and compare that with how much water that energy could boil. This is a kind of cryptographic carbon footprint. By this measure, breaking a 228-bit RSA key requires less energy than it takes to boil a teaspoon of water. Comparatively, breaking a 228-bit elliptic curve key requires enough energy to boil all the water on earth. For this level of security with RSA, you'd need a key with 2,380 bits.
cryptography 
17 days ago
How to Quit A Top Tier Tech Job
Believing - heck, even toying with the idea - that this is not the best place to work for everyone at all times may be emotionally painful, as it mentally discredits the internal measure of your social status and your own belief system.
professional 
17 days ago
Inverse Reinforcement Learning Tutorial | part I | thinking wires
In this blog post series we will take a closer look at inverse reinforcement learning (IRL), the field of learning an agent's objectives, values, or rewards by observing its behavior. For example, we might observe the behavior of a human in some specific task and learn which states of the environment the human is trying to achieve and what the concrete goals might be.

In most reinforcement learning tasks there is no natural source for the reward signal. Instead, it has to be hand-crafted and carefully designed to accurately represent the task. Often, engineers manually tweak the rewards of the RL agent until the desired behavior is observed. A better way of finding a well-fitting reward function for some objective might be to observe a (human) expert performing the task and then automatically extract the respective rewards from these observations.

If we look at successful RL applications right now, such as AlphaGo Zero for the game of Go, most of them are games which naturally provide a reward signal (i.e. winning or losing, or the score achieved).
reinforcementlearning 
17 days ago
How to Raise a Genius: Lessons from a 45-Year Study of Supersmart Children - Scientific American
Such results contradict long-established ideas suggesting that expert performance is built mainly through practice—that anyone can get to the top with enough focused effort of the right kind. SMPY, by contrast, suggests that early cognitive ability has more effect on achievement than either deliberate practice or environmental factors such as socio-economic status.

The SMPY data supported the idea of accelerating fast learners by allowing them to skip school grades. In a comparison of children who bypassed a grade with a control group of similarly smart children who didn't, the grade-skippers were 60% more likely to earn doctorates or patents and more than twice as likely to get a PhD in a STEM field. Acceleration is common in SMPY's elite 1-in-10,000 cohort, whose intellectual diversity and rapid pace of learning make them among the most challenging to educate. Advancing these students costs little or nothing, and in some cases may save schools money, says Lubinski. “These kids often don't need anything innovative or novel,” he says, “they just need earlier access to what's already available to older kids.”

Many educators and parents continue to believe that acceleration is bad for children—that it will hurt them socially, push them out of childhood or create knowledge gaps. But education researchers generally agree that acceleration benefits the vast majority of gifted children socially and emotionally, as well as academically and professionally.
education 
19 days ago
The Intellectual We Deserve | Current Affairs
But, having examined Peterson’s work closely, I think the “misinterpretation” of Peterson is only partially a result of leftists reading him through an ideological prism. A more important reason why Peterson is “misinterpreted” is that he is so consistently vague and vacillating that it’s impossible to tell what he is “actually saying.” People can have such angry arguments about Peterson, seeing him as everything from a fascist apologist to an Enlightenment liberal, because his vacuous words are a kind of Rorschach test onto which countless interpretations can be projected.

Orwell flat-out says that anybody who evaluates the merits of socialist policies by the personal qualities of socialists themselves is an idiot. Peterson concludes that Orwell thought socialist policies were flawed because socialists themselves were bad people. I don’t think there is a way of reading Peterson other than as extremely stupid or extremely dishonest, but one can be charitable and assume he simply didn’t read the book that supposedly gave him his grand revelation about socialism.

Peterson is popular partly because he criticizes social justice activists in a way many people find satisfying, and some of those criticisms have merit. He is popular partly because he offers adrift young men a sense of heroic purpose, and offers angry young men rationalizations for their hatreds. And he is popular partly because academia and the left have failed spectacularly at helping make the world intelligible to ordinary people, and giving them a clear and compelling political vision.
philosophy  politics  criticaltheory 
20 days ago
New Data Show Electric Vehicles Continue to Get Cleaner - Union of Concerned Scientists
The climate change emissions created by driving on electricity depend on where you live, but on average, an EV driving on electricity in the U.S. today is equivalent to a conventional gasoline car that gets 80 MPG, up from 73 MPG in our 2017 update.
energy  renewable 
22 days ago
Stupid Patent of the Month: Will Patents Slow Artificial Intelligence? | Electronic Frontier Foundation
In essence, Claim 1 of the patent amounts to ‘do machine learning on this particular type of application.’ More specifically, the patent follows Claim 1 with a variety of subsequent claims that amount to ‘When you’re doing that machine learning from Claim 1, use this particular well-known pre-existing machine learning algorithm.’ Indeed, in our opinion the patent reads like the table of contents of an intro to AI textbook. It covers using just about every standard machine learning technique you’d expect to learn in an intro to AI class—including linear and nonlinear regression, k-nearest neighbor, clustering, support vector machines, principal component analysis, feature selection using lasso or elastic net, Gaussian processes, and even decision trees—but applied to the specific example of proteins and data you can measure about them.

Certainly, applying these techniques to proteins may be a worthwhile and time-consuming enterprise. But that does not mean it deserves a patent. A company should not get a multi-year monopoly on using well-known techniques in a particular domain where there was no reason to think the techniques couldn’t be used in that domain (even if they were the first to apply the techniques there). A patent like this doesn’t really bring any new technology to the table; it simply limits the areas in which an existing tool can be used. For this reason, we are declaring the ’834 patent our latest Stupid Patent of the Month.
patent  legal  machinelearning 
24 days ago
Lessons learned in Hell - Statistical Modeling, Causal Inference, and Social Science
I’m halfway through my third year as a consultant, after 25 years at a government research lab, and I just had a miserable five weeks finishing a project. The end product was fine — actually really good — but the process was horrible and I put in much more time than I had anticipated. I definitely do not want to experience anything like that again, so I’ve been thinking about what went wrong and what I should do differently in the future. It occurred to me that other people might also learn from my mistakes, so here’s my story.
consulting  datascience  statistics 
24 days ago
Slack is the opposite of organizational memory
It normalizes interruptions, multitasking, and distractions, implicitly permitting these things to happen IRL as well as online. It normalizes insanely short reply times for questions. In the slack world people can escalate from asking in a room to @person to @here in a matter of minutes. And they’re not wrong to – if your request isn’t handled in 5 minutes it’s as good as forgotten.

Remote work culture is a defense mechanism against the distracting open office, and slack is the end run around that defense mechanism. It abrades your team’s adrenal system and forces you to live in the present. Unlike email, it can’t delay messages. Chat makes ‘now or never’ your team’s reality.

I think most people agree that when knowledge workers work together on teams they need to use writing to agree on what to do. On slack the quality of that writing is plumbing new depths. There’s a world of difference between a well-considered G doc that has been edited by multiple people vs a stream of consciousness mixed in with people’s WFH announcements and ‘look what my cat did’.

24/7 reachability also hurts good docs practices. When people couldn’t get ahold of each other at all hours orgs had to design for redundancy, i.e. write things down such that they could be understood by someone else. But there’s a whole generation of workers and even companies that never experienced that.

Trello and Jira promote this icebox theory of design. What did a PM think of six months ago that hasn’t been started? Let’s do that. If your best people aren’t inventing and assigning projects, why should anyone bother showing up to work?
slack  professional 
24 days ago
William Davies · Why the Outrage?: Cambridge Analytica · LRB 5 April 2018
It’s sometimes said that data is the ‘oil’ of the digital economy, the resource that fuels everything else. A more helpful analogy is between oil and privacy, a concealed natural resource that is progressively plundered for private profit, with increasingly harmful consequences for society at large. If this analogy is correct, privacy and data protection laws won’t be enough to fight the tech giants with. Destroying privacy in ever more adventurous ways is what Facebook does.

Just as environmentalists demand that the fossil fuel industry ‘leave it in the ground,’ the ultimate demand to be levelled at Silicon Valley should be ‘leave it in our heads.’ The real villain here is an expansionary economic logic that insists on inspecting ever more of our thoughts, feelings and relationships. The best way to thwart this is the one Silicon Valley fears the most: anti-trust laws. Broken into smaller pieces, these companies would still be able to monitor us, but from disparate perspectives that couldn’t easily (or secretly) be joined up. Better a world full of snake-oil merchants like Cambridge Analytica, who eventually get caught out by their own bullshit, than a world of vast corporate monopolies such as Amazon and Facebook, gradually taking on the functions of government, while remaining eerily quiet about what they’re doing.
Surveillance  politics  usa  uk  brexit  trump  ethics 
24 days ago
Zero to JupyterHub — Zero to JupyterHub with Kubernetes 0.4 documentation
A tutorial to help install and manage JupyterHub with Kubernetes
jupyter  kubernetes  aws  gcp  azure 
24 days ago
When not to use deep learning (hyperparameter.space/blog/when-not-to-use-deep-learning/)
Deep learning can really work on small sample sizes
Deep learning is not the answer to everything
Deep learning is more than .fit()

So, when is deep learning not ideal for a task? From my perspective, these are the main scenarios where deep learning is more of a hindrance than a boon.

Low-budget or low-commitment problems
Interpreting and communicating model parameters/feature importances to a general audience
Establishing causal mechanisms
Learning from “unstructured” features
deeplearning  machinelearning 
24 days ago
Variational Coin Toss
The basic idea behind variational inference is to turn the inference problem into a kind of search problem: find the distribution q∗(z) that is the closest approximation of p(z|x). To do that we of course need some sort of definition of “closeness”. The classic one is the Kullback-Leibler divergence.
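
Written out (the standard definition, not spelled out in the excerpt):

```latex
D_{\mathrm{KL}}\big(q(z)\,\|\,p(z \mid x)\big)
  = \mathbb{E}_{q(z)}\!\left[\log \frac{q(z)}{p(z \mid x)}\right]
```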

Now all we have to do is vary q(z) until D_KL (q(z)||p(z|x)) reaches its minimum, and we will have found our best approximation q∗(z)!

Typically that’s done by choosing a parameterized family of probability distributions and then finding the optimal parameters with some sort of numerical optimization algorithm.

Variational inference is about optimization over quite hairy integrals. One thing you’ll hear a lot in this context is “we approximate the integral through Monte Carlo sampling”. What that means is essentially ... draw K i.i.d. samples xi from the probability distribution p(x) and compute the value of f(xi) for each one. We then take the average of that and call it our Monte Carlo approximation. Simple!
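
In symbols, for K samples drawn i.i.d. from p(x):

```latex
\mathbb{E}_{p(x)}[f(x)] \;\approx\; \frac{1}{K} \sum_{i=1}^{K} f(x_i),
\qquad x_i \overset{\text{i.i.d.}}{\sim} p(x)
```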

In many cases we have no real reason to choose one family of distributions over another, and then often end up with normal distributions - mostly because they are easy to work with. There’s however nothing in the variational framework that requires the prior p(z) and the variational posterior q(z) to come from the same family.

That perhaps doesn’t look like a step forward, but since we are now taking the expectation over a distribution that does not depend on θq, we can safely exchange the order of differentiation and expectation. This maneuver is what’s commonly referred to as the “reparametrization trick”.
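
For a normal variational posterior, the usual concrete case, the trick reads as follows (standard form, stated here for an integrand f that depends on θq only through z):

```latex
z = \mu(\theta_q) + \sigma(\theta_q)\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, 1)
\;\;\Rightarrow\;\;
\nabla_{\theta_q}\,\mathbb{E}_{q(z)}[f(z)]
  = \mathbb{E}_{\epsilon \sim \mathcal{N}(0,1)}\!\Big[\nabla_{\theta_q}\, f\big(\mu(\theta_q) + \sigma(\theta_q)\,\epsilon\big)\Big]
```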

After reparametrization we can approximate the gradient of the Kullback-Leibler divergence between our variational posterior and the true posterior with respect to the variational parameters θq using Monte Carlo sampling. The gradient there may look pretty ugly, but computing partial derivatives like that is what frameworks like Theano or TensorFlow do well.

Up until now we’ve been talking about minimizing D_KL ( q(z) || p(z|x) ), mostly because I feel that makes intuitive sense. But in the literature it’s much more common to talk about maximizing something called the evidence lower bound (ELBO). Thankfully the difference is minimal.

Observe that D_KL ( q(z) || p(z|x) ) must be positive or zero (because a Kullback-Leibler divergence always is). If we remove this term we thus get a lower bound on log p(x).
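
The identity behind that statement is the standard decomposition of the log evidence:

```latex
\log p(x)
  = \underbrace{\mathbb{E}_{q(z)}\big[\log p(x, z) - \log q(z)\big]}_{\text{ELBO}}
  + D_{\mathrm{KL}}\big(q(z)\,\|\,p(z \mid x)\big)
  \;\ge\; \text{ELBO}
```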
statistics  probabilisticprogramming  variationalinference  probability 
24 days ago
Remarks at the SASE Panel On The Moral Economy of Tech
First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Second, treating the world as a software project gives us a rationale for being selfish. The old adage has it that if you are given ten minutes to cut down a tree, you should spend the first five sharpening your axe. We are used to the idea of bootstrapping ourselves into a position of maximum leverage before tackling a problem.

In the real world, this has led to a pathology where the tech sector maximizes its own comfort. You don't have to go far to see this. Hop on BART after the conference and take a look at Oakland, or take a stroll through downtown San Francisco and try to persuade yourself you're in the heart of a boom that has lasted for forty years. You'll see a residential theme park for tech workers, surrounded by areas of poverty and misery that have seen no benefit and ample harm from our presence. We pretend that by maximizing our convenience and productivity, we're hastening the day when we finally make life better for all those other people.

Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

In our attempt to feed the world to software, techies have built the greatest surveillance apparatus the world has ever seen. Unlike earlier efforts, this one is fully mechanized and in a large sense autonomous. Its power is latent, lying in the vast amounts of permanently stored personal data about entire populations.

Even if you trust everyone spying on you right now, the data they're collecting will eventually be stolen or bought by people who scare you. We have no ability to secure large data collections over time.
Surveillance  tech  politics  ethics 
25 days ago
The Techies Who Said Sorry | Jacob Silverman
It wasn’t supposed to be this way. For years, tech executives and data scientists maintained the pose that a digital economy run almost exclusively on the parsing of personal data and sensitive information would not only be competitive and fair but would somehow lead to a more democratic society. Just let Facebook and Google, along with untold other players large and small, tap into the drip-drip of personal data following you around the internet, and in return you’ll get free personalized services and—through an alchemy that has never been adequately explained—a more democratized public sphere.
politics  tech  surveillance  datascience  ethics 
25 days ago
Unenlightened thinking: Steven Pinker’s embarrassing new book is a feeble sermon for rattled liberals
When Pinker touches on eugenics in a couple of paragraphs towards the end of the book, he blames it on socialism: “The most decisive repudiation of eugenics invokes classical liberal and libertarian principles: government is not an omnipotent ruler over human existence but an institution with circumscribed powers, and perfecting the genetic make-up of the human species is not among them.” But a theory of entropy provides no reason for limiting the powers of government any more than for helping the weak. Science cannot underwrite any political project, classical liberal or otherwise, because science cannot dictate human values.

Modern tyrannies must therefore be products of counter-Enlightenment ideologies – Romanticism, nationalism and the like. Enabling liberals to avoid asking difficult questions about why their values are in retreat, this is a popular view. Assessed in terms of historical evidence, it is also a myth.

Today, liberals have lost that always rather incredible faith. Faced with the political reversals of the past few years and the onward march of authoritarianism, they find their view of the world crumbling away. What they need at the present time, more than anything else, is some kind of intellectual anodyne that can soothe their nerves, still their doubts and stave off panic.

This is where Pinker comes in. Enlightenment Now is a rationalist sermon delivered to a congregation of wavering souls. To think of the book as any kind of scholarly exercise is a category mistake. Much of its more than 500 pages consists of figures aiming to show the progress that has been made under the aegis of Enlightenment ideals. Of course, these figures settle nothing. Like Pinker’s celebrated assertion that the world is becoming ever more peaceful – the statistical basis of which has been demolished by Nassim Nicholas Taleb – everything depends on what is included in them and how they are interpreted.
liberalism  scientism  atheism  enlightenment 
25 days ago
The Complete Guide to Working On A Remote Team – Megan Berry – Medium
Say hi. When asking someone about a task make sure you are also saying hi, asking how they are doing and generally acting like we are a team of humans, talking to humans.

Celebrate people’s victories with them. Victories in work or life. Props with growbot or fun animated GIFs are great for this!

Understand the “Why.” It can be easy to give a remote worker a task list without helping them understand the business goals behind what they’re doing. Everyone works better when they are in sync on the company vision and how their work ties into those goals. If they know the “Why” they can be a part of the creative process to help reach the company’s goals. On the flip side, if your manager gives you a task and you don’t understand why you are doing it or how it will help the company, ask!
remote  management 
26 days ago
The Social Graph Is Neither (Pinboard Blog)
Open data advocates tell us the answer is to reclaim this obsessive dossier for ourselves, so we can decide where to store it. But this misses the point of how stifling it is to have such a permanent record in the first place. Who does that kind of thing and calls it social?

Asking computer nerds to design social software is like hiring a Mormon bartender. Our industry abounds in people for whom social interaction has always been more of a puzzle to be reverse-engineered than a good time to be had, and the result is these vaguely Martian protocols.
socialmedia  tech  graph 
26 days ago
how to do nothing – Jenny Odell – Medium
Our required reading, Why Work Sucks and How to Fix it, by the creators of ROWE, intended to describe a merciful slackening of the “be in your chair from 9 to 5” model, but I was nonetheless troubled by how the work and non-work selves are completely conflated throughout the text. And so they write:

If you can have your time and work and live and be a person, then the question you’re faced with every day isn’t, Do I really have to go to work today? but, How do I contribute to this thing called life? What can I do today to benefit my family, my company, myself?

To me, “company” doesn’t belong in that sentence. Even if you love your job! Unless there’s something specifically about you or your job that requires it, there is nothing to be admired about being constantly connected, constantly potentially productive the second you open your eyes in the morning — and in my opinion, no one should accept this, not now, not ever. In the words of Othello: “Leave me but a little to myself.”

For anyone unfamiliar with Fiverr: It’s a microtasking site where individual “entrepreneurs” sell various tasks — basically, units of their time — for $5, whether that’s copy editing, filming a video of themselves doing something of your choice, or pretending to be your girlfriend on Facebook. Fiverr is the ultimate expression of Franco Berardi’s “fractals of time and pulsating cells of labor.” And here, the idea that you would even withhold some of that time to sustain yourself with food is essentially ridiculed. Yes, these people work from home, but unlike the man with the sandwich, they must work from home. Home is work; work is home.

In A Paradise Built in Hell, Rebecca Solnit examines and dispenses with the myth that people become desperate and selfish after disasters. From the 1906 earthquake to Hurricane Katrina, she gives detailed accounts of the surprising resourcefulness, empathy, and sometimes even humor that arise in dark circumstances. Several of her interviewees report feeling a strange nostalgia for the purposefulness and the connection they felt with their neighbors immediately following a disaster. Solnit writes:

When all the ordinary divides and patterns are shattered, people step up — not all, but the great preponderance — to become their brothers’ keepers. And that purposefulness and connectedness bring joy even amid death, chaos, fear, and loss. … Horrible in itself, disaster is sometimes a door back into paradise, the paradise at least in which we are who we hope to be, do the work we desire, and are each our sisters’ and brothers’ keeper.
work  professional  criticism  art  birds  oakland 
29 days ago