Twitter
Put my trip to San Francisco for 2 up on highlights!! Still missing that blue blue Californian sky…
from twitter_favs
2 days ago
Twitter
it's pretty clear he wanted young girls (or, as they say "an avid international traveler") and she wanted his money…
from twitter_favs
2 days ago
Twitter
Sahm is extremely common, but I wouldn't call them high end. Spiegelau is properly high end, but their g…
from twitter_favs
3 days ago
Twitter
Yeah ECS services are designed for long term durability. For fast iteration I use a tool like this one:…
from twitter_favs
3 days ago
Twitter
$79 RockPro64 4GB
$79 ASUS Tinker Board S
$35 Raspberry Pi 3 B+
from twitter_favs
3 days ago
Datasette Publish
You can try it out with your own CSV files without installing any software using - upload y…
from twitter_favs
4 days ago
Twitter
You can try it out with your own CSV files without installing any software using - upload y…
from twitter_favs
4 days ago
Twitter
Sisters' Islands Marine Park to home Singapore's biggest artificial reef. More at .
from twitter_favs
5 days ago
The Reverse Sear Is the Best Way to Cook a Steak, Period | The Food Lab | Serious Eats
The Food Lab: The Reverse Sear Is the Best Way to Cook a Steak, Period
Serious Eats
The Food Lab
Unraveling the mysteries of home cooking through science.
[Photographs: J. Kenji López-Alt]
I've been using and writing about the reverse sear—the technique of slow-cooking a steak or roast before finishing it off with a hot sear—for well over a decade now, but I've never written a definitive guide for using it on steaks. It's a really remarkable method, and if you're looking for a steak that's perfectly medium-rare from edge to edge, with a crisp crust, there's no better technique that I know of. Here is that definitive article we've been missing, outlining what I think is the best way to cook a steak, indoors or out. First I'll go over a little background information, then I'll explain how to do it, and finally I'll get into the details of why it works so well.
The full history of the reverse sear is a little hazy (though AmazingRibs.com has a pretty good timeline). It's one of those techniques that seem to have been developed independently by multiple people right around the same time. With all the interest in food science and precision cooking techniques like sous vide that cropped up in the early 2000s, I imagine the time was simply ripe for it to come around.
My own experience with it started in 2006, when I was just beginning my very first recipe-writing job. I'd recently been hired as a test cook at Cook's Illustrated magazine, and my first project was to come up with a foolproof technique for cooking thick-cut steaks. After testing dozens and dozens of variables, I realized that I already knew the answer: Cook it sous vide. Traditional cooking techniques inevitably form a gray band of overcooked meat around the outer edges of a steak. Sous vide, thanks to the gentle heat it uses, eliminates that gray band, producing a steak that's cooked just right from edge to edge.
Unfortunately, at that time, sous vide devices were much too expensive for home cooks. Instead, I tried to devise a method that would deliver similar results with no special equipment. The reverse sear is what I came up with, and the recipe was published in the May/June 2007 issue of the magazine (though it didn't get the name "reverse sear" until some time later).
The Basics: How to Reverse-Sear a Steak
The process of reverse-searing is really simple: Season a roast or a thick-cut steak (the method works best with steaks at least one and a half to two inches thick), arrange the meat on a wire rack set in a rimmed baking sheet, and place it in a low oven—between 200 and 275°F (93 and 135°C). You can also do this outdoors by placing the meat directly on the cooler side of a closed grill with half the burners on. Cook it until it's about 10 to 15°F below your desired serving temperature (see the chart at the end of this section), then take it out and sear it in a ripping-hot skillet, or on a grill that's as hot as you can get it.
Then dig into the best-cooked steak you've ever had in your life.
You want it broken down step by step? Okay, here goes:
Step 1: Season the Steak
Season your thick-cut steaks—I like ribeyes, but this will work with any thick steak—generously with salt and pepper on all sides, then place them on a wire rack set in a rimmed baking sheet. If you're cooking the steaks on a grill, skip the rack and pan.
For even better results, refrigerate the steaks uncovered overnight to dry out their exteriors.
Step 2: Preheat the Oven
Preheat the oven to anywhere between 200 and 275°F (93 and 135°C). The lower you go, the more evenly the meat will cook, though it'll also take longer. If you have a very good oven, you can probably set it even lower than this range, but many ovens can't hold temperatures below 200°F very accurately.
If you're doing this outdoors, create a two-zone fire by banking a chimney of coals under one side of the grill, or turning on only half the burners of a gas grill. Cover the grill and let it preheat.
Step 3: Slow-Cook the Steak
Place the steaks—baking sheet, rack, and all—in the oven, and roast until they hit a temperature about 10 to 15°F below the final temperature at which you'd like to serve the meat. A good thermometer is absolutely essential for this process. I recommend either the Thermapen or one of these inexpensive options.
If using the grill, just place the steaks directly on the cooler side of the grill, allowing them to gently cook via indirect heat. Timing may vary depending on the exact temperature that your grill is maintaining, so use a thermometer, and check frequently!
Step 4: Sear the Steak
Just before the steaks come out of the oven, add a tablespoon of vegetable oil or other high-temp-friendly oil to a heavy skillet, then set it to preheat over your strongest burner. Cast iron works great, as does triple-clad stainless steel.
As soon as that oil starts smoking, add the steaks along with a tablespoon of butter, and let them cook, swirling and lifting occasionally, until they're nicely browned on the first side. This should take about 45 seconds. Flip the steaks and get the second side, then hold the steaks sideways to sear their edges.
To finish on the grill, remove the steaks and tent them with foil while you build the biggest fire you can, either with all your gas burners at full blast and the lid down to preheat, or with extra coals. When the fire is rip-roaring hot, cook the steaks over the hot side, flipping every few seconds, until they're crisp and charred all over, about a minute and a half total.
Step 5: Serve
Serve the steaks immediately, or, if you'd like, let them rest for at most a minute or two. With reverse-seared steaks, there's no need to rest your meat, as you would with a more traditional cooking method.
Reverse-Seared Steak Temperature and Timing for 1 1/2–Inch Steaks in a 250°F (120°C) Oven
Doneness    | Target Temperature in the Oven | Final Target Temperature | Approximate Time in Oven
Rare        | 105°F (40°C)                   | 120°F (49°C)             | 20 to 25 minutes
Medium-Rare | 115°F (46°C)                   | 130°F (54°C)             | 25 to 30 minutes
Medium      | 125°F (52°C)                   | 140°F (60°C)             | 30 to 35 minutes
Medium-Well | 135°F (57°C)                   | 150°F (66°C)             | 35 to 40 minutes
NB: All time ranges are approximate. Use a thermometer!
Why Is It Called the Reverse Sear?
It's called the reverse sear because it flips tradition on its head. Historically, almost every cookbook and chef has taught that when you're cooking a piece of meat, the first step should be searing. Most often, the explanation is that searing "locks in juices." These days, we know that this statement is definitively false. Searing does not actually lock in juices at all; it merely adds flavor. Flipping the formula so that the searing comes at the end produces better results. But what exactly are those better results?
Advantage #1: More Even Cooking
The temperature gradient that builds up inside a piece of meat—that is, the difference in temperature as you work your way from the edges toward the center—is directly related to the rate at which energy is transferred to that piece of meat. The higher the temperature you use to cook, the faster energy is transferred, and the less evenly your meat cooks. Conversely, the more gently a steak is cooked, the more evenly it cooks.
Meat cooked at very high temperatures develops a thick, gray band that indicates overcooking.
By starting steaks in a low-temperature oven, you wind up with almost no overcooked meat whatsoever. Juicier results are your reward.
Advantage #2: Better Browning
When searing a piece of meat, our goal is to create a crisp, darkly browned crust to contrast with the tender, pink meat underneath. To do this, we need to trigger the Maillard reaction, the cascade of chemical reactions that occur when proteins and sugars are exposed to high heat. It helps if you think of your screaming-hot cast iron skillet as a big bucket, and the heat energy it contains as water filling that bucket. When you place a steak in that pan, you are essentially pouring that energy out of the skillet and into the steak.
In turn, that steak has three smaller buckets that can be filled with energy.
The first is the temperature change bucket: It takes energy to raise the temperature of the surface of that steak.
Next is the evaporation bucket: It takes energy to evaporate the surface moisture from the steaks.
Third is the Maillard browning bucket: It takes energy to trigger those browning reactions.
The thing is, all of those buckets need to be filled in order. Water won't really start evaporating until it has been heated to 212°F (100°C). The Maillard reaction doesn't really take place in earnest until you hit temperatures of around 300°F (150°C) or higher, and that won't happen until most of the steak's surface moisture has evaporated.
Your goal when searing a steak is to make sure that the temperature and evaporation buckets are as small as possible, so that you can rapidly fill them up and move on to the important process of browning.
Pop quiz: Let's say you pull a steak straight out of the fridge. Which of those three buckets is the biggest one? You might think, Well, it's gotta be the temperature bucket—we're starting with a steak that's almost freezing-cold and bringing it up to boiling temperatures.
to get the moistest possible results, you should start with the driest possible steak
In fact, it's the evaporation bucket that is by far the biggest. It takes approximately five times more energy to evaporate a gram of water than it does to raise the temperature of that same gram of water from freezing to boiling. That's a big bucket! Moral of the story: Moisture is the biggest enemy of a good sear, so any process that can reduce the amount of surface moisture on a steak is going to improve how well it browns and crisps—and, by extension, minimize the amount of time it spends in the pan, thus minimizing the amount of overcooked meat underneath. It's a strange irony that to get the moistest possible results, you … [more]
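As a rough back-of-the-envelope check of that factor of five, using standard values for water rather than figures from the article (specific heat roughly 4.18 J/(g·°C), latent heat of vaporization roughly 2,257 J/g):
$E_{\text{heat}} \approx c\,\Delta T \approx 4.18 \times 100 \approx 418\ \text{J/g}$
$E_{\text{evap}} \approx L_v \approx 2257\ \text{J/g}$
$E_{\text{evap}} / E_{\text{heat}} \approx 2257 / 418 \approx 5.4$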
Order your copy of The Food Lab: Better Home Cooking Through Science today!
from iphone
7 days ago
Twitter
That doesn't sound good, Geoffrey. We'd like to chat about this. Please follow us & let us know when you…
from twitter_favs
8 days ago
Twitter
I don't think there's an explicit blocking feature. You can keep a…
from twitter_favs
8 days ago
Twitter
A GitHub project at the organisation level can track issues across multiple…
from twitter_favs
8 days ago
Twitter
Nintendo Classic Mini: will return to stores on 29/06!
from twitter_favs
10 days ago
Docker is the dangerous gamble which we will regret | Smash Company
Docker is the dangerous gamble which we will regret
(written by lawrence krubner, however indented passages are often quotes). You can contact lawrence at: lawrence@krubner.com
There is perhaps one good argument for using Docker. It is hidden by the many bad arguments for using Docker. I’m going to try to explain why so much Docker rhetoric is stupid, and then look at what reason might be good.
Every time I criticize Docker I get angry responses. When I wrote Why would anyone choose Docker over fat binaries? 6 months ago I saw some very intelligent responses on Hacker News, but also some angry ones. So in writing this current essay, I am trying to answer some of the criticism I expect to get in the future.
But I guess I am lucky because so far I have not gotten a reaction as angry as what The HFT Guy had to face when he talked about his own failed attempt to use Docker in production at the financial firm where he works:
I received a quite insulting email from a guy who is clearly in the amateur league to say that “any idiot can run Docker on Ubuntu” then proceed to give a list of software packages and advanced system tweaks that are mandatory to run Docker on Ubuntu, that allegedly “anyone could have found in 5 seconds with Google”.
On the Internet, that kind of anger is normal. I don’t know why, but it is. Many developers get angry when they hear someone criticize a technology which they favor. That anger gets in the way of their ability to figure out the long-term reality of the situation.
Docker promises portability and security and resource management and orchestration. Therefore the question that rational people should want to answer is “Is Docker the best way to gain portability and security and resource management and orchestration?”
I’m going to respond to some of the responses I got. Some of these are easy to dismiss. There is one argument that is not easy to dismiss. I’ll save that for the end.
One person wrote:
Because choosing Docker requires boiling fewer oceans, and whether those oceans should or should not be boiled has no bearing on whether I can afford to boil them right now.
Okay, but compared to what? Having your devops person write some scripts to standardize the build, the deployment, the orchestration, and the resource use? The criticism seems to imply “I don’t want the devops person to do this, because the result will be ad-hoc, and I want something standardized.”
Docker wins because developers and managers see it as offering something less custom, less chaotic, less ad-hoc, and more standardized. Or at least, having the potential to do so. The reality of Docker has been an incredible mess so far (see Docker in production: a history of failure). But many are willing to argue that all of the problems will soon be resolved, and Docker will emerge as the stable, consistent standard for containerization. This is a very large gamble. Nearly every company that’s taken this gamble so far has ended up burned, but companies keep taking this gamble on the assumption it is going to pay off big at some point soon.
Every company that I have worked with, over the last two years, was either using Docker or was drawing up plans to soon use Docker. They are implicitly paying a very high price to have a standardized solution, rather than an ad-hoc build/deploy/orchestrate script. I personally have not yet seen a case where this was the economically rational choice, so either companies are implicitly hoping this will pay off in the long-run, or they are being irrational.
I use the word “implicitly” because I’ve yet to hear a tech manager verbalize this gamble explicitly. Most people who defend Docker talk about how it offers portability or security or orchestration or configuration. Docker can give us portability or security or orchestration or configuration, but at a cost of considerable complexity. Writing an ad-hoc script would be easier in most cases.
The best articles about Docker emphasize the trade-offs that one makes by choosing to use it:
It’s best to think of Docker as an advanced optimization. Yes, it is extremely cool and powerful, but it adds significantly to the complexity of your systems and should only be used in mission critical systems if you are an expert system administrator that understands all the essential points of how to use it safely in production.
At the moment, you need more systems expertise to use Docker, not less. Nearly every article you’ll read on Docker will show you the extremely simple use-cases and will ignore the complexities of using Docker on multi-host production systems. This gives a false impression of what it takes to actually use Docker in production.
In the world of computer programming, we have the saying “Premature optimization is the root of all evil.” Yet most of my clients this year have insisted “We must Dockerize everything, right from the start.” Rather than build a working system, and then put it in production, and then maybe see if Docker offers an advantage over simpler tools, the push has been to standardize the development and deployment around Docker.
A common conversation:
Me: “We don’t need Docker this early in the project.”
Them: “This app requires Nginx and PostGres and Redis and a bunch of environment variables. How are you going to set all that up without Docker?”
Me: “I can write a bash script. Or “make”. Or any other kind of installer, of which there are dozens. We have all been doing this for many years.”
Them: “That’s insane. The bash script might break, and you’ll have to spend time debugging it, and it won’t necessarily work the same on your machine, compared to mine. With Docker, we write the build script and then it’s guaranteed to work the same everywhere, on your machine as well as mine.”
Like all sales pitches, this is seductive because it leads with the most attractive feature of Docker. As a development tool, Docker can seem less messy and more consistent than other approaches. It’s the second phase of Docker use, when people try to use it in production, where life becomes painful.
Your dev team might have one developer who owns a Windows machine, another who runs a Mac, another who has installed Ubuntu, and another who has installed RedHat. Perhaps you, the team lead, have no control over what machines they run. Docker can seem like a way to be sure they all have the same development environment. (However, when you consider the hoops you have to jump through to use Docker from a Windows machine, anyone who tells you that Docker simplifies development on a Windows machine is clearly joking with you.)
But when you go to production, you will have complete control over what machines you run in production. If you want to standardize on CentOS, you can. You can have thousands of CentOS servers, and you can use an old technology, such as Puppet, to be sure those servers are identical. The argument for Docker is therefore weaker for production. But apparently having used Docker for development, developers feel it is natural to also use it in production. Yet this is a tough transition.
I can cite a few examples, regarding the problems with Docker, but after a certain point the examples are boring. There are roughly a gazillion blog posts where people have written about the flaws of Docker. Anyone who fails to see the problems with Docker is being willfully blind, and this essay will not change their mind. Rather, they will ignore this essay, or if they read it, they will say, “The Docker eco-system is rapidly maturing and by next year it is going to be really solid and production ready.” They have said this every year for the last 5 years. At some point it will probably be true. But it is a dangerous gamble.
Despite all the problems with Docker, it does seem to be winning — every company I work with seems eager to convert to Docker. Why is that? As near as I can tell, the main thing is standardization.
Again, from the Hacker News responses to my previous essay, “friend-monoid” wrote this defense of Docker:
We have a whole lot of HTTP services in a range of languages. Managing them all with [uber] binaries would be a chore – the author would have to give me a way to set port and listen address, and I have to keep track of every way to set a port. With a net namespace and clever iptables routing, docker can do that for me.
notyourday wrote the response that I wish I’d written:
Of course if you had the same kind of rules written and followed in any other method, you would arrive at exactly the same place. In fact, you probably would arrive at a better place because you would stop thinking that your application works because of some clever namespace and iptables thing.
anilakar wrote this response to notyourday:
I think that the main point was that docker skills are transferable, i.e. you can expect a new hire to be productive in less time. Too many companies still have in-house build/deploy systems that are probably great for their purpose but don’t offer valuable experience that would be usable outside that company.
And as near as I can tell, this is 100% why Docker is winning. Forget all the nonsense you read about Docker making deployment or security or orchestration easier. It doesn’t. But it is emerging as a standard, something a person can learn at one company and then take to another company. It isn’t messy and ad-hoc the way a custom bash script would be. And that is the real argument in favor of Docker. Whether it can live up to that promise is the gamble.
At the risk of being almost petty, I should point out that these arguments confuse containers with Docker. And I think many pro-Docker people deliberately confuse the issue. Even if containers are a great idea, Docker is driven forward by a specific company which has specific problems. Again from The HFT Guy:
Docker has no business model and no way to monetize. It’s fair to say that they are releasing to all platforms (Mac/Windows) and integrating all kind of features (Swarm) as a… [more]
Source
from iphone
10 days ago
Twitter
Malaysia's new Prime Minister Mahathir Mohamad says he will redefine, not revoke, Najib Razak's controversial anti-…
from twitter_favs
10 days ago
Twitter
Tried out the cli 3.10 shipping with 4.

The experience is literally unrivaled. HMR is turbo…
from twitter_favs
12 days ago
My First Go Microservice using MongoDB and Docker Multi-Stage Builds
Rolling a Basic Go Microservice with MongoDB and Docker Multi-Stage Builds:
from twitter_favs
12 days ago
Twitter
# rcctl set sndiod flags -s default -m play,mon -s mon
# rcctl restart sndiod
ffmpeg \
-f sndio -i snd/0.mon \…
from twitter_favs
12 days ago
Twitter
, how do you sync ffmpeg audio and video capture?

Audio from snd/0.mon is captured successfully, but when…
from twitter_favs
12 days ago
Twitter
spotted in IKEA. I believe you're a collector of these?
from twitter_favs
12 days ago
Twitter
Will Smith tells the story of how he landed The Fresh Prince of Bel-Air
from twitter_favs
14 days ago
Twitter
I really like "turns a CSV into an API in three commands" as an elevator pitch for Datasette
from twitter_favs
14 days ago
Twitter
Configure long polling for an Amazon SQS queue using the console. Learn more:
from twitter_favs
14 days ago
Twitter
Invest in your and code snippets. They will save you countless hours ⏳
from twitter_favs
14 days ago
Twitter
⚡ I wrote a post introducing the metric First Input Delay (FID), our latest (and my favorite) user-centric performa…
from twitter_favs
14 days ago
Twitter
It’s my last day at work.
11 years across two states. Dropped out of high school, got a GED, lied to a temp agency,…
from twitter_favs
14 days ago
Twitter
Go+WebAssembly coming to 1.11 thanks to and many reviewers!
from twitter_favs
15 days ago
https://twitter.com/i/web/status/994197501048586241
The Event Gateway is going to be a key component in Serverless architectures. Here's how you run it o…
from twitter_favs
15 days ago
Twitter
Here's what it's like to buy an actual real coffee using the . Lightning is now live as a payment…
from twitter_favs
16 days ago
Twitter
You could leave out .profile and use AWS_PROFILE instead, that's probably the easiest way for now, I'll…
from twitter_favs
17 days ago
Twitter
An introduction to mssql-cli, a command-line client for SQL Server:
from twitter_favs
18 days ago
Twitter
Wow! So has been suspended. There is *no* appeal.

I've looked through and I c…
from twitter_favs
18 days ago
The Twitter Rules
Wow! So has been suspended. There is *no* appeal.

I've looked through and I c…
from twitter_favs
18 days ago
Twitter
I’ve been at stage one for some time, all typed on this delightful thing
from twitter_favs
19 days ago
Beyond the armchair: must philosophy become experimental? | Aeon Essays
Out of the armchair
A growing number of philosophers are conducting experiments to test their arguments. Is this the future for philosophy?
Stephanie Wykstra, 01 May 2018
Conducting thought experiments from the armchair has long been an accepted method in analytic philosophy. What do thought experiments from the armchair look like? Philosophers think about real and imagined scenarios involving knowledge, morality, free will and other matters. They then use those scenarios to elicit their own reactions (‘intuitions’), which serve as fodder for arguments.
One well-known kind of thought experiment is called a ‘Gettier case’. Named after the American philosopher Edmund Gettier, these are scenarios used to question a particular notion of knowledge, and are based on a range of examples he provided in a journal article in 1963. One might plausibly think of knowledge as a belief that is both true and justified (ie, based on evidence). But Gettier suggested some counterexamples to this definition, by telling stories in each of which there’s a true, justified belief that he claimed isn’t a case of knowledge. For example, imagine that at noon you look at a stopped clock that happens to have stopped at noon. Your belief that it’s noon is true, and arguably it’s also justified. The question is: do you thereby know that it’s noon, or do you merely believe it? While this example and others might seem frivolous – other cases involve fake zebras and imitation barns – they are intended to make headway in analysing knowledge.
In the late 1990s, the philosophers Jonathan Weinberg, Shaun Nichols and Stephen Stich started raising questions about this methodology of eliciting ‘our’ intuitions. Their question: who does ‘we’ refer to here? They wondered if philosophers – at least in analytic philosophy, a particularly Western, educated, industrialised, rich and democratic bunch, aka ‘WEIRD’ – might have intuitions that people in other demographic groups wouldn’t share. (They’d been inspired to raise this question by work on cross-cultural differences by psychologists such as Richard Nisbett and Jonathan Haidt.)
Rather than just speculate about whether differences existed, Stich and colleagues decided to do some real-world experiments. In their initial research, they focused on common thought-experiments in epistemology (a subfield of philosophy that studies topics such as justified belief and knowledge). They recruited people of East Asian and Western descent, as well as people of Indian-subcontinent descent, and asked them to read and think about some classic vignettes in epistemology. In their 2001 paper, they claimed that among their most interesting findings was something unexpected: while most people of Western descent in their experiment deemed that particular Gettier cases are not instances of knowledge, people of East Asian and of Indian descent often thought the opposite.
Stich and colleagues argued that this kind of variation in intuitions should cause a big shift in the way that analytic philosophy is practised. Until this point, most philosophers had traditionally thought it was fine to sit in their armchairs and consider their own intuitions. It was the way that philosophy was done. But experimental evidence, they claimed, undermined this traditional practice. If such differences of intuition existed, they wrote: ‘Why should we privilege our intuitions rather than the intuitions of some other group?’ If different groups had different intuitions, it wasn’t enough to say that ‘our’ intuition about justice or knowledge or free will is such-and-such. Rather, the philosopher must at the very least specify whose intuition is relevant, and why that intuition should matter rather than another one.
Over subsequent decades, experimental philosophy (x-phi for short) grew significantly. Some philosophers followed Stich et al’s lead, in testing intuitions of participants who varied in gender, age, native language and other categories. They also looked at variation in intuitions based on irrelevant factors such as the order in which cases are presented. Beyond that, some x-phi practitioners also found significant sources of funding. Stich and his fellow philosopher Edouard Machery, together with the anthropologist H Clark Barrett, received a grant of more than $2.5 million from the John Templeton Foundation to embark on a series of experiments on knowledge, understanding and wisdom across 10 countries, with a goal of better understanding these philosophical concepts as they appear across a large swathe of cultures.
It’s important to note that arguably the biggest factor in x-phi’s growth has been some philosophers heading off in a new direction. According to a recent survey of the field conducted by Joshua Knobe, not too many philosophers kept up experiments on demographic differences with the aim of showing that traditional philosophy is ill-grounded (this came to be known as ‘the negative programme’ in x-phi). Instead, another class of experiments (with a ‘positive programme’) sprang up.
Knobe, a professor of psychology, philosophy and linguistics at Yale University and well-known in the field for his experimental work, which he’s been doing since the early 2000s, describes one kind of ‘positive programme’ as very similar to cognitive science. Conducting experiments uncovers interesting effects, and researchers then hypothesise about mechanisms that might explain these effects. A well-known example of this kind of work is Knobe’s own finding, called the ‘side-effect effect’ or just the ‘Knobe effect’. In a nutshell, this is the finding that people judge a side-effect to be intentionally caused much more often when that side-effect is negative than when it’s positive.
For example, in Knobe’s original experiment, participants were given this vignette:
The vice-president of a company went to the chairman of the board and said: ‘We are thinking of starting a new programme. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered: ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new programme.’ They started the new programme. Sure enough, the environment was harmed.
Other study participants saw the exact same story, except that the word ‘harmed’ was replaced with the word ‘helped’. The striking result was that, in most cases (82 per cent), participants said that the chairman brought about the harmful side-effect intentionally, but only 33 per cent of participants said that he intentionally brought about the helpful side-effect.
Since then, many philosophers have conducted hundreds of these kinds of experiments. Some of them involve repeating and extending the Knobe effect, and many others venture into new directions to run experiments involving questions about moral responsibility, free will, causation, personal identity and other topics. In their ‘Experimental Philosophy Manifesto’ (2007), Knobe and Nichols described the allure of experimental philosophy’s positive programme by writing: ‘Many find it an exciting new way to approach the basic philosophical concerns that attracted them to philosophy in the first place.’ But while x-phi has expanded over the years, not everyone in philosophy has been a fan.
Analysing concepts from the armchair is a poor method, because of the evidence of demographic variation
First, as the positive programme in x-phi shades into psychology and vice versa, some have asked: is experimental philosophy really philosophy? Knobe and some of his colleagues argue that it is. They describe the work as continuous with a long tradition of philosophers trying to understand the human mind, and point to the likes of Aristotle, David Hume and Friedrich Nietzsche as precedents. In their manifesto, Knobe and Nichols write:
It used to be a commonplace that the discipline of philosophy was deeply concerned with questions about the human condition. Philosophers thought about human beings and how their minds worked … On this traditional conception, it wasn’t particularly important to keep philosophy clearly distinct from psychology, history or political science … The new movement of experimental philosophy seeks to return to this traditional vision.
Some philosophers, even those who identify as part of the x-phi movement, disagree with this viewpoint. Machery, a fellow x-phi advocate, argues that even if, historically, philosophers used to engage in a huge range of intellectual endeavours, it doesn’t mean that studying all those things should now count as philosophy. There’s something lost, Machery thinks, if experimental philosophers start to resemble cognitive scientists more and more, and lose their focus on what has been of central interest in philosophy: analysing concepts. (According to Knobe’s recent analysis, only around 10 per cent of x-phi experiments over a period of five years were directly about conceptual analysis, as opposed to revealing new cognitive effects and discussing potential cognitive processes underlying them.) Machery concurs with Stich and other ‘negative programmers’ that trying to analyse concepts from the armchair is a poor method, because of the experimental evidence that judgments vary by demographic group. Instead, he argues in his book Philosophy Within Its Proper Bounds (2017), philosophers should make use of experiments as a way of clarifying and assessing important philosophical ideas.
A second kind of response comes from those who question the usefulness of eliciting intuitions from people outside of philosophy. For example, in his book Relativism and the Foundations of Philosophy (2009), Stephen Hales writes: ‘[I]ntuitions of professional philosophers are much more reliable than either those of inexperienced students or the “folk”.’ This response… [more]
Syndicate this Essay
from iphone
21 days ago
Untitled (https://danielmuller.me/2018/05/creating-a-serverless-geoip-api/)
Long time I didn't blog, so here you go: Creating a Serverless GeoIP API
from twitter_favs
21 days ago
Twitter
worst case use a Lambda for access control logic. This kind of stuff will be the bulk of what most people work with…
from twitter_favs
27 days ago
Twitter
haven't tried any of this yet but it's cool to see AWS finally going after a Firebase style suite of client-side AP…
from twitter_favs
27 days ago
Twitter
Twitter People Interpretation vs Your Actual Tweet.
from twitter_favs
27 days ago
Twitter
Always cool tech stuff coming from . Here they share their approach to autoscaling the size of the EC…
from twitter_favs
27 days ago
Twitter
I love how they use a test here to create a visual replay of a bug. It's such a good way to get started…
from twitter_favs
27 days ago
AWS Fargate now available in Ohio, Oregon, and Ireland Regions
AWS Fargate now available in Ohio, Oregon, and Ireland Regions
from twitter_favs
27 days ago
Google Groups
Blink: Intent to Experiment: Kaby Lake VP8 acceleration on ChromeOS
from twitter_favs
27 days ago
Awesomplete: Ultra lightweight, highly customizable, simple autocomplete, by Lea Verou
Awesomplete
2KB minified & gzipped!
Ultra lightweight, customizable, simple autocomplete widget with zero dependencies, built with modern standards for modern browsers. Because <datalist> still doesn’t cut it.
Demo (no JS, minimal options)
Pick a programming language:
Note that by default you need to type at least 2 characters for the popup to show up, though that’s super easy to customize. With Awesomplete, making something like this can be as simple as:
<input class="awesomplete"
data-list="Ada, Java, JavaScript, Brainfuck, LOLCODE, Node.js, Ruby on Rails" />
Basic usage
Before you try anything, you need to include awesomplete.css and awesomplete.js in your page, via the usual <link rel="stylesheet" href="awesomplete.css" /> and <script src="awesomplete.js" async></script> tags.
For the autocomplete, you just need an <input> text field (might work on <textarea> and elements with contentEditable, but that hasn’t been tested). Add class="awesomplete" for it to be automatically processed (you can still specify many options via HTML attributes), otherwise you can instantiate with a few lines of JS code, which allow for more customization.
There are many ways to link an input to a list of suggestions. The simple example above could have also been made with the following markup, which provides a nice native fallback in case the script doesn’t load:
<input class="awesomplete" list="mylist" />
<datalist id="mylist">
<option>Ada</option>
<option>Java</option>
<option>JavaScript</option>
<option>Brainfuck</option>
<option>LOLCODE</option>
<option>Node.js</option>
<option>Ruby on Rails</option>
</datalist>
Or the following, if you don’t want to use a <datalist>, or if you don’t want to use IDs (since any selector will work in data-list):
<input class="awesomplete" data-list="#mylist" />
<ul id="mylist">
<li>Ada</li>
<li>Java</li>
<li>JavaScript</li>
<li>Brainfuck</li>
<li>LOLCODE</li>
<li>Node.js</li>
<li>Ruby on Rails</li>
</ul>
Or the following, if we want to instantiate in JS:
<input id="myinput" />
<ul id="mylist">
<li>Ada</li>
<li>Java</li>
<li>JavaScript</li>
<li>Brainfuck</li>
<li>LOLCODE</li>
<li>Node.js</li>
<li>Ruby on Rails</li>
</ul>
var input = document.getElementById("myinput");
new Awesomplete(input, {list: "#mylist"});
We can use an element reference for the list instead of a selector:
<input id="myinput" />
<ul id="mylist">
<li>Ada</li>
<li>Java</li>
<li>JavaScript</li>
<li>Brainfuck</li>
<li>LOLCODE</li>
<li>Node.js</li>
<li>Ruby on Rails</li>
</ul>
var input = document.getElementById("myinput");
new Awesomplete(input, {list: document.querySelector("#mylist")});
We can also directly use an array of strings:
<input id="myinput" />
var input = document.getElementById("myinput");
new Awesomplete(input, {
list: ["Ada", "Java", "JavaScript", "Brainfuck", "LOLCODE", "Node.js", "Ruby on Rails"]
});
We can even set it (or override it) later and it will just work:
<input id="myinput" />
var input = document.getElementById("myinput");
var awesomplete = new Awesomplete(input);
awesomplete.list = ["Ada", "Java", "JavaScript", "Brainfuck", "LOLCODE", "Node.js", "Ruby on Rails"];
Suggestions with different label and value are supported too. The label will be shown in the autocomplete popup and the value will be inserted into the input.
<input id="myinput" />
var input = document.getElementById("myinput");
new Awesomplete(input, {
    list: [
        { label: "Belarus", value: "BY" },
        { label: "China", value: "CN" },
        { label: "United States", value: "US" }
    ]
});
The same label/value pairs can also be given as two-element arrays:
new Awesomplete(input, {
    list: [
        [ "Belarus", "BY" ],
        [ "China", "CN" ],
        [ "United States", "US" ]
    ]
});
Extend
The following JS properties do not have equivalent HTML attributes, because their values are functions. They allow you to completely change the way Awesomplete works:
filter: Controls how entries get matched. By default, the input can match anywhere in the string and the match is case insensitive. Value: a function that takes two parameters, the first being the suggestion text that’s being tested and the second a string with the user’s input it’s matched against; it returns true if the match is successful and false if it is not. For example, to only match strings that start with the user’s input, case sensitive, we can do this:
filter: function (text, input) {
    return text.indexOf(input) === 0;
}
For case-insensitive matching from the start of the word, there is a predefined filter you can use, Awesomplete.FILTER_STARTSWITH. Default: Awesomplete.FILTER_CONTAINS (text can match anywhere, case insensitive).

sort: Controls how list items are ordered. Value: a sort function (passed directly to Array.prototype.sort()) that sorts the items after they have been filtered and before they are truncated and converted to HTML elements; if set to false, sorting is disabled. Default: sorted by length first, order second.

item: Controls how list items are generated. Value: a function that takes two parameters, the first being the suggestion text and the second the user’s input, and returns a list item. Default: generates list items with the user’s input highlighted via <mark>.

replace: Controls how the user’s selection replaces the user’s input. For example, this is useful if you want the selection to only partially replace the user’s input. Value: a function that takes one parameter, the text of the selected option, and is responsible for replacing the current input value with it. Default:
function (text) {
    this.input.value = text;
}

data: Controls suggestions’ label and value. This is useful if you have list items in a custom format, or want to change list items based on the user’s input. Value: a function that takes two parameters, the first being the original list item and the second a string with the user’s input, and returns a list item in one of the formats supported by default:
"JavaScript"
{ label: "JavaScript", value: "JS" }
[ "JavaScript", "JS" ]
To use objects without label or value properties, e.g. name and id instead, you can do this:
data: function (item, input) {
    return { label: item.name, value: item.id };
}
You can use any object for label and value and it will be converted to a String where necessary:
list: [ new Date("2015-01-01"), ... ]
The original Date objects will be accessible in the filter, sort, item and replace functions, but by default you’ll just see the Dates converted to strings in the popup, and the same string will be inserted into the input. We can also generate list items based on the user’s input; see the E-mail autocomplete example in the Advanced Examples section. Default: Awesomplete.DATA, an identity function which just returns the original list item.
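As a quick sketch of how these options combine in practice (reusing the myinput element and language list from the earlier examples), the predefined filter can be paired with a custom sort:
var input = document.getElementById("myinput");
new Awesomplete(input, {
    list: ["Ada", "Java", "JavaScript", "Node.js", "Ruby on Rails"],
    // Predefined filter: match from the start of the suggestion, case insensitive.
    filter: Awesomplete.FILTER_STARTSWITH,
    // Custom sort: alphabetical, instead of the default length-then-order sort.
    sort: function (a, b) {
        return String(a).localeCompare(String(b));
    }
});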
Events
Custom events are thrown in several places and are often cancellable. To avoid conflicts, all custom events are prefixed with awesomplete-.
awesomplete-select: The user has made a selection (either by pressing Enter or by clicking an item), but it has not been applied yet. The callback is passed an object with text (the selected suggestion) and origin (the DOM element) properties. event.preventDefault() is honoured: the selection will not be applied and the popup will not close.
awesomplete-selectcomplete: The user has made a selection (either by pressing Enter or by clicking an item), and it has been applied. The callback is passed an object with a text property containing the selected suggestion. Not cancellable.
awesomplete-open: The popup just appeared. Not cancellable.
awesomplete-close: The popup just closed. The callback is passed an object with a reason property that indicates why the event was fired; reasons include "blur", "esc", "submit", "select", and "nomatches". Not cancellable.
awesomplete-highlight: The highlighted item just changed (in response to pressing an arrow key or via an API call). The callback is passed an object with a text property containing the highlighted suggestion. Not cancellable.
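As a rough sketch of wiring these events up (reusing the myinput element from earlier, and assuming the events are dispatched on the input element itself):
var input = document.getElementById("myinput");
new Awesomplete(input, { list: ["Ada", "Java", "JavaScript"] });
// Log each completed selection; the event object's text property carries the chosen suggestion.
input.addEventListener("awesomplete-selectcomplete", function (evt) {
    console.log("Selected: " + evt.text);
});
// Cancel a selection before it is applied; the popup stays open and the input is unchanged.
input.addEventListener("awesomplete-select", function (evt) {
    if (String(evt.text) === "Ada") {
        evt.preventDefault();
    }
});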
API
There are several methods on every Awesomplete instance that you can call to customize behavior:
open(): Opens the popup.
close(): Closes the popup.
next(): Highlights the next item in the popup.
previous(): Highlights the previous item in the popup.
goto(i): Highlights the item with index i in the popup (-1 to deselect all). Avoid using this directly; use next() or previous() instead when possible.
select(): Selects the currently highlighted item, replaces the text field’s value with it, and closes the popup.
evaluate(): Evaluates the current state of the widget and regenerates the list of suggestions, or closes the popup if none are available. You need to call it if you dynamically set list while the popup is open.
destroy(): Cleans up and removes the instance from the input.
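For example, a minimal sketch of the evaluate() note above, swapping in a new list while the popup may be open (the second list simply stands in for freshly fetched suggestions):
var input = document.getElementById("myinput");
var awesomplete = new Awesomplete(input, { list: ["Ada", "Java"] });
// Later, once new suggestions become available:
awesomplete.list = ["Ada", "Java", "JavaScript", "Node.js"];
// Regenerate the suggestions in case the popup is currently open.
awesomplete.evaluate();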
Advanced Examples
These examples show how powerful Awesomplete’s minimal API can be.
E-mail autocomplete
Type an email:
<input type="email" />
new Awesomplete('input[type="email"]', {
    list: ["aol.com", "att.net", "comcast.net", "facebook.com", "gmail.com", "gmx.com", "googlemail.com", "google.com", "hotmail.com", "hotmail.co.uk", "mac.com", "me.com", "mail.com", "msn.com", "live.com", "sbcglobal.net", "verizon.net", "yahoo.com", "yahoo.co.uk"],
    // Build each suggestion as <typed local part>@<domain>.
    data: function (text, input) {
        return input.slice(0, input.indexOf("@")) + "@" + text;
    },
    filter: Awesomplete.FILTER_STARTSWITH
});
Multiple values
Tags (comma separated):
<input data-list="CSS, JavaScript, HTML, SVG, ARIA, MathML" data-multiple />
new Awesomplete('input[data-multiple]', {
    filter: function(text, input) {
        // Match against only the text after the last comma (the value currently being typed).
        return Awesomplete.FILTER_CONTAINS(text, input.match(/[^,]*$/)[0]);
    },
    item: function(text, input) {
        // Highlight only the part after the last comma when rendering list items.
        return Awesomplete.ITEM(text, input.match(/[^,]*$/)[0]);
    },
    replace: function(text) {
        // Keep everything up to and including the last comma, then append the selected value.
        var before = this.input.value.match(/^.+,\s*|/)[0];
        this.input.value = before + text + ", ";
    }
});
Ajax example (restcountries.eu api)
Select French speaking country
var ajax = new XMLHttpRequest();
ajax.open("GET", "https://restcountries.eu/rest/v1/lang/fr", true);
ajax.onload = function() {
var list = JSON.parse(ajax.responseText).map(function(i) { return i.name… [more]
});
from iphone
27 days ago
Twitter
Wow — Cloudinary now offers automated video transcription using Google Cloud Speech API, free for the first thirty…
from twitter_favs
28 days ago
Twitter
12 Principles for a 21st century conservatism: from
from twitter_favs
29 days ago
Twitter
I try to always have context like the route name etc, especially since line numbers shift around as you…
from twitter_favs
4 weeks ago
Twitter
The best undocumented feature of AirPods.
from twitter_favs
4 weeks ago