db   22686

To join or not to join? An act of #includes – Goiabada
Good overview of the differences between includes, joins, etc.
rails  db  joins 
14 hours ago by bojates
White Supremacist Spotted On CNN Wearing Hydra T-Shirt
A screencap of a CNN broadcast is making the rounds on social media tonight, showing a white supremacist wearing a Hydra t-shirt.
Nazis  db  Comics 
15 hours ago by walt74
Dissecting the Google Employee’s Anti-Diversity Manifesto, Piece-by-Piece
This is the third part of a critique of the “Google ManifestBRO”. Today’s article is published on the day the Google employee was both dismissed and named as James Damore, an employee who had been at the company since 2013. Whilst others can choose to find out information about him and there is merit in the discussion of Google’s response, this series continues to dissect the manifesto to understand the position taken as the position has consequences. I’ve also got hold of the cited version to conduct a desktop study.
db  Googlememo 
16 hours ago by walt74
Dissecting the Google Employee’s Anti-Diversity Manifesto, Piece-by-Piece
Following on from the first post on the Google employee’s anti-diversity manifesto, we start to look at the data and conjecture within the manifesto and find a very weak empirical position.
db  Googlememo 
16 hours ago by walt74
Dissecting the Google Employee’s Anti-Diversity Manifesto, Piece-by-Piece
This weekend, the Google “Anti-Diversity” manifesto came into focus. If you’re unaware of the story, a Google employee wrote a manifesto chastising Google’s position on women in tech. This naturally created an outcry, and later Gizmodo obtained a copy of the manifesto but left out charts and links, which annoyed the heck out of me, as that is tampering with the evidence people need to make an informed decision.
Googlememo  db 
16 hours ago by walt74
‘22 Push-Ups for a Cause’: Depicting the Moral Self via Social Media Campaign #Mission22
Discussion
This article has provided the first “big data” analysis of the #Mission22 movement that went viral across multiple social media platforms in 2016. We began by arguing that Web 2.0 has ushered in profound changes to how people depict and construct identities that articulate with wider transformations in self and identity in conditions of late-modernity. The “confessional” quality of Web 2.0 means individuals and groups are presented with unprecedented opportunities to “mass self-depict” through new communication and Internet technologies. We suggest that the focus on how Web technologies are implicated in the formation of moral subjectivities is something that has been overlooked in the extant research on identity and Web 2.0 technologies.

Filling this gap, we used the #Mission22 movement on Twitter as an empirical site to analyse how contemporary subjects construct and visually depict moral identities in online contexts. A central finding of our analysis of 225883 Twitter posts is that most engagement with #Mission22 was through retweeting. Our data show that retweets were by far the most popular way to interact and engage with the movement. In other words, most people were not producing original or new content in how they participated in the movement but were re-sharing – re-depicting – what others had shared. This finding highlights the importance of paying attention to the architectural affordances of social media platforms, in this case, the affordances of the ‘retweet’ button, and how they shape online identity practices and moral expression. We use moral expression here as a broad term to capture the different ways individuals and groups make moral evaluations based on a responsiveness to how people are faring and whether they are suffering or flourishing (Sayer). This approach provides an emic account of everyday morality and precludes, for example, wider philosophical debates about whether patriotism or nationalistic solidarity can be understood as moral values.
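
(An illustrative aside, not the authors' actual pipeline: the retweet-versus-original breakdown described above is at bottom a frequency count over the corpus. A minimal Python sketch, assuming a hypothetical CSV export named mission22_tweets.csv with a text column, might look like this.)

    # Hypothetical sketch: classify tweets as retweets vs. original posts and report
    # their share of the corpus. The file name and the 'text' column are illustrative
    # assumptions, not the study's real data format.
    import pandas as pd

    df = pd.read_csv("mission22_tweets.csv")

    # Crude heuristic: tweets whose text starts with "RT @" are treated as retweets.
    df["is_retweet"] = df["text"].str.startswith("RT @")

    total = len(df)
    retweets = int(df["is_retweet"].sum())
    originals = total - retweets
    print(f"Total tweets:    {total}")
    print(f"Retweets:        {retweets} ({retweets / total:.1%})")
    print(f"Original tweets: {originals} ({originals / total:.1%})")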

The prominence of the retweet in driving the shape and nature of #Mission22 raises questions about the depth of moral engagement being communicated. Is the dominance of the retweet suggestive of a type of “moral slacktivism”? Like its online political equivalent, does the retweet highlight a shallow and cursory involvement with a cause or movement? Did online engagement translate to concrete moral actions such as making a donation to the cause or engaging in some other form of civic activity to draw attention to the movement? These questions are beyond the scope of this article but it is interesting to consider the link between the affordances of the platform, capacity for moral expression and how this translates to face-to-face moral action. Putting aside questions of depth, people are compelled not to ignore these posts; they move from “seeing” to “posting” to taking action within the affordances of the architectural platform.

What then is moving Twitter users to morally engage with this content? How did this movement go viral? What helped bust this movement out of the “long tail distribution” which characterises most movements – that is, few movements “take off” and become durable within the congested attention economies of social media environments? The Top 10 most retweeted tweets provide powerful answers here. All of them feature highly emotive and affective visual depictions, either high impact photos and statements, or videos of people/groups doing pushups in solidarity together. The images and videos align affective, bodily and fitness practices with nationalistic and patriotic themes to produce a powerful and moving moral cocktail. The Top 50 words also capture the emotionally evocative use of moral language, with words like alone, fight, challenge, better, believe, good, wrong, god, help, mission, weakness and will.

The emotional and embodied visual depictions that characterise the Top 10 retweets and Top 50 words highlight how moral identity is not just a cerebral practice, but one that is fundamentally emotional and bodily. We do morality not just with our minds and heads but also with our bodies and our hearts. Part of the power of this movement, then, is the way it mobilises interest and involvement with the movement through a physical and embodied practice – doing push-ups. Visually depicting oneself doing push-ups online is a powerful display of moral identity. The “lay morality” being communicated is that not only are you somebody who cares about the flourishing and suffering of Others, you are also a fit, active and engaged citizen. And of course, the subject who actively takes responsibility for their health and well-being is highly valued in neoliberal risk contexts (Lupton).

There is also a strong gendered dimension to the visual depictions used in #Mission22. All of the Top 10 retweets feature images of men, mostly doing push-ups in groups. In the case of the second most popular retweet, it is two men in suits doing push-ups while three sexualised female singers “look on” admiringly. Further analysis needs to be done to detail the gendered composition of movement participation, but it is interesting to speculate whether men were more likely to participate. The combination of demonstrating care for the Other via a strong assertion of physical strength makes this a potentially more masculinised form of moral self-expression.

Overall, Mission22 highlights how online self-work and cultivation can have a strong moral dimension. In Foucault’s language, the self-work involved in posting a video or image of yourself doing push-ups can be read as “an intensification of social relations”. It involves an ethics that is about self-creation through visual and textual depictions. Following the more pessimistic line of Bauman or Turkle, posting images of oneself doing push-ups might be seen as evidence of narcissism or a consumerist self-absorption. Rather than narcissism, we want to suggest that Mission22 highlights how a self-based moral practice – based on bodily, emotional and visual depictions – can extend to Others in an act of mutual care and exchange. Again, Foucault helps clarify our argument: “the intensification of the concern for the self goes hand in hand with a valorisation of the Other”. What our work does is show how this operates empirically on a large scale in the new confessional contexts of Web 2.0 and its cultures of mass self-depiction.
db  dp  SocialMedia  Psychology  Morals 
16 hours ago by walt74
Ways of Depicting: The Presentation of One’s Self as a Brand
Ways of Seeing
"Images … define our experiences more precisely in areas where words are inadequate." (Berger 33)
"Different skins, you know, different ways of seeing the world." (Morrison)

The research question animating this article is: 'How does an individual creative worker re-present themselves as a contemporary - and evolving - brand?' Berger notes that the "principal aim has been to start a process of questioning" (5), and the raw material energising this exploration is the life's work of Richard Morrison, the creative director and artist who is the key moving force behind The Morrison Studio collective of designers, film makers and visual effects artists, working globally but based in London. The challenge of maintaining currency in this visually creative marketplace includes seeing what is unique about your potential contribution to a larger project, and communicating it in such a way that this forms an integral part of an evolving brand - on trend, bleeding edge, but reliably professional. One of the classic outputs of Morrison's oeuvre, for example, is the title sequence for Terry Gilliam's Brazil.

Passion cannot be seen, yet Morrison conceives it as the central engine that harnesses skills, information and innovative ways of working to deliver the unexpected and the unforgettable. Morrison's perception is that the design itself can come after the creative artist has really seen and understood the client's perspective. As he says: "What some clients are interested in is 'How can we make money from what we're doing?'" Seeing the client, and the client's motivating needs, is central to Morrison's presentation of self as a brand: "the broader your outlook as a creative, the more chance you have of getting it right". Jones and Warren draw attention to one aspect of this dynamic: "Wealthy and powerful actors, both private and state, historically saw creative practice as something that money was spent on - commissioning a painting or a sculpture, giving salaries to composers to produce new works and so forth. Today, creativity has been reimagined as something that should directly or indirectly make money" (293). As Berger notes, "We never look at just one thing; we are always looking at the relation between things and ourselves…The world-as-it-is is more than pure objective fact, it includes consciousness" (9, 11). What is our consciousness around the creative image?

Individuality is central to Berger's vision of the image in the "specific vision of the image-maker…the result of an increasing consciousness of individuality, accompanying an increasing awareness of history" (10). Yet, as Berger argues "although every image embodies a way of seeing, our perception or appreciation of an image depends also upon our own way of seeing" (10). Later, Berger links the meanings viewers attribute to images as indicating the "historical experience of our relation to the past…the experience of seeking to give meaning to our lives" (33). The seeing and the seeking go hand in hand, and constitute a key reason for Berger's assertion that "the entire art of the past has now become a political issue" (33). This partly reflects the ways in which it is seen, and in which it is presented for view, by whom, where and in which circumstances.

The creation of stand-out images in the visually-saturated 21st century demands a nuanced understanding of ways in which an idea can be re-presented for consumption in a manner that makes it fresh and arresting. The focus on the individual also entails an understanding of the ways in which others are valuable, or vital, in completing a coherent package of skills to address the creative challenge to hand. It is self-evident that other people see things differently, and can thus enrich the broadened outlook identified as important for "getting it right". Morrison talks about "little core teams, there's four or five of you in a hub… [sometimes] spread all round the world, but because of the Internet and the way things work you can still all be connected". Team work and members' individual personalities are consequently combined, in Morrison's view, with the core requirement of passion. As Morrison argues, "personality will carry you a long way in the creative field".

Morrison's key collaborator, senior designer and creative partner/art director Dean Wares lives in Valencia, Spain whereas Morrison is London-based and their clients are globally-dispersed. Although Morrison sees the Internet as a key technology for collaboratively visualising the ways in which to make a visual impact, Berger points to the role of the camera in relation to the quintessential pre-mechanical image: the painting. It is worth acknowledging here that Berger explicitly credits Walter Benjamin, including the use of his image (34), as the foundation for many of Berger's ideas, specifically referencing Benjamin's essay "The work of art in the age of mechanical reproduction". Noting that, prior to the invention of the camera, a painting could never be seen in more than one place at a time, Berger suggests that the camera foments a revolutionary transformation: "its meaning changes. Or, more exactly, its meaning multiplies and fragments into many meanings" (19). This disruption is further fractured once that camera-facilitated image is viewed on a screen, ubiquitous to Morrison's stock in trade, but in Berger's day (1972) particularly associated with the television:

The painting enters each viewer's house. There it is surrounded by his wallpaper, his furniture, his mementoes. It enters the atmosphere of his family. It becomes their talking point. It lends its meaning to their meaning. At the same time it enters a million other houses and, in each of them, is seen in a different context. Because of the camera, the painting now travels to the spectator, rather than the spectator to the painting. In its travels, its meaning is diversified. (Berger, 19-20)

Even so, that image, travelling through space and time, is seen on the screen in a sequential and temporal context: "because a film unfolds in time and a painting does not. In a film the way one image follows another, their succession constructs an argument which becomes irreversible. In a painting all its elements are there to be seen simultaneously." Both these dynamics, the still and the sequence, are key to the work of a visual artist such as Morrison responsible for branding a film, television series or event. But the works also create an unfolding sequence which tells a different story to each recipient according to the perceptions of the viewer/reader. For example, instead of valorising Gilliam's Brazil, Morrison's studio could have been tagged with Annaud's Enemy at the Gates or, even, the contemporary Sky series, Neil Jordan's Riviera. Knowing this sequence, and that the back catalogue begins with The Who's Quadrophenia (1979), changes the way we see what the Morrison Studio is doing now.

Ways of Working
Richard Morrison harnesses an evolutionary metaphor to explain his continuing contribution to the industry: "I've adapted, and not been a dinosaur who's just sunk in the mud". He argues that there is a need to explore where "the next niche is and be prepared for change 'cause the only constant thing in life is change. So as a creative you need to have that known." Effectively, adaptation and embracing innovation have become a key part of the Morrison Studio's brand. It is trumpeted in the decision that Morrison and Ware made to continue their work together, even after Ware moved to Spain. This demonstrated, in an age of faxes and landlines, that the Morrison Studio could make cross-country collaboration work: the multiple locations championed the fact that they were open for business "without boundaries".

There was travel, too, and in those early pre-Internet days of remote location Morrison was a frequent visitor to the United States. "I'd be working in Los Angeles and he'd be wherever he was […] we'd use snail mail to actually get stuff across, literally post it by FedEx […]." The intercontinental (as opposed to inter-Europe) collaboration had the added value of offering interlocking working days: "I'd go to sleep, he wakes up […] We were actually doubling our capacity." If anything, these dynamics are more entrenched with better communications. Currah argues that Hollywood attempts to manage the disruptive potential of the internet by "seeking to create a 'closed' sphere of innovation on a global scale […] legitimated, enacted and performed within relational networks" (359). The Morrison Studio's own dispersed existence is one element of these relational networks.

The specific challenge of technological vulnerability was always present, however, long before the Internet: "We'd have a case full of D1 tapes" - the professional standard video tape (1986-96) - "and we'd carefully make sure they'd go through the airport so they don't get rubbed […] what we were doing is we were fitting ourselves up for the new change". At the same time, although the communication technologies change, there are constants in the ways that people use them. Throughout Morrison's career, "when I'm working for Americans, which I'm doing a lot, they expect me to be on the telephone at midnight [because of time zones]. […] They think 'Oh I want to speak to Richard now. Oh it's midnight, so what?' They still phone up. That's constant, that never goes away." He argues that American clients are more complex to communicate with than his Scandinavian clients, giving the example that people assume a UK-US consistency because they share the English language. But "although you think they're talking in a tongue that's the same, their meaning and understanding can sometimes be quite a bit different." He uses the example of the A4 sheet of paper. It has different dimensions in the US than in the UK, illustrating those different ways of seeing.

Morrison believes that there are four key constants in his company's continuing success: … [more]
db  Perception  Philosophy  ComputerVision  Vision  Eye 
16 hours ago by walt74
Hacking the Attention Economy
For most non-technical folks, “hacking” evokes the notion of using sophisticated technical skills to break through the security of a corporate or government system for illicit purposes. Of course, most folks who were engaged in cracking security systems weren’t necessarily in it for espionage and cruelty. In the 1990s, I grew up among teenage hackers who wanted to break into the computer systems of major institutions that were part of the security establishment, just to show that they could. The goal here was to feel a sense of power in a world where they felt pretty powerless. The rush was in being able to do something and feel smarter than the so-called powerful. It was fun and games. At least until they started getting arrested.

Hacking has always been about leveraging skills to push the boundaries of systems. Keep in mind that one early definition of a hacker (from the Jargon File) was “A person who enjoys learning the details of programming systems and how to stretch their capabilities, as opposed to most users who prefer to learn only the minimum necessary.” In another early definition (RFC:1392), a hacker is defined as “A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular.” Both of these definitions highlight something important: violating the security of a technical system isn’t necessarily the primary objective.

Indeed, over the last 15 years, I’ve watched as countless hacker-minded folks have started leveraging a mix of technical and social engineering skills to reconfigure networks of power. Some are in it for the fun. Some see dollar signs. Some have a much more ideological agenda. But above all, what’s fascinating is how many people have learned to play the game. And in some worlds, those skills are coming home to roost in unexpected ways, especially as groups are seeking to mess with information intermediaries in an effort to hack the attention economy.

It all began with memes… (and porn…)

In 2003, a 15-year-old named Chris Poole started 4chan, an image board site based on a Japanese trend. His goal was not political. Rather, like many of his male teenage peers, he simply wanted a place to share pornography and anime. But as his site’s popularity grew, he ran into a different problem — he couldn’t manage the traffic while storing all of the content. So he decided to delete older content as newer content came in. Users were frustrated that their favorite images disappeared, so they reposted them, often with slight modifications. This gave birth to a phenomenon now understood as “meme culture.” Lolcats are an example. These are images of cats captioned with a specific font and a consistent grammar for entertainment.

Those who produced meme-like images quickly realized that they could spread like wildfire thanks to new types of social media (as well as older tools like blogging). People began producing memes just for fun. But for a group of hacker-minded teenagers who were born a decade after I was, a new practice emerged. Rather than trying to hack the security infrastructure, they wanted to attack the emergent attention economy. They wanted to show that they could manipulate the media narrative, just to show that they could. This was happening at a moment when social media sites were skyrocketing, YouTube and blogs were challenging mainstream media, and pundits were pushing the idea that anyone could control the narrative by being their own media channel. Hell, “You” was TIME Magazine’s person of the year in 2006.

Taking a humorous approach, users within 4chan mounted campaigns to “hack” mainstream media. For example, many inside 4chan felt that widespread anxieties about pedophilia were exaggerated and sensationalized. They decided to target Oprah Winfrey, who, they felt, was amplifying this fear-mongering. Trolling her online message board, they got her to talk on live TV about how “over 9,000 penises” were raping children. Amused by this success, they then created a broader campaign around a fake character known as Pedobear. In a different campaign, 4chan “b-tards” focused on gaming the TIME 100 list of “the world’s most influential people” by arranging it such that the first letter of each name on the list spelled out “Marblecake also the game,” which is a known in-joke in this community. Many other campaigns emerged to troll major media and other cultural leaders. And frankly, it was hard not to laugh when everyone started scratching their heads about why Rick Astley’s 1987 song “Never Gonna Give You Up” suddenly became a phenomenon again.

By engaging in these campaigns, participants learned how to shape information within a networked ecosystem. They learned how to design information so that it would spread across social media.

They also learned how to game social media, manipulate its algorithms, and mess with the incentive structure of both old and new media enterprises. They weren’t alone. I watched teenagers throw brand names and Buzzfeed links into their Facebook posts to increase the likelihood that their friends would see their posts in their News Feed. Consultants started working for companies to produce catchy content that would get traction and clicks. Justin Bieber fans ran campaign after campaign to keep Bieber-related topics in Twitter Trending Topics. And the activist group Invisible Children leveraged knowledge of how social media worked to architect the #Kony2012 campaign. All of this was seen as legitimate “social media marketing,” making it hard to detect where the boundaries were between those who were hacking for fun and those who were hacking for profit or other “serious” ends.

Running campaigns to shape what the public could see was nothing new, but social media created new pathways for people and organizations to get information out to wide audiences. Marketers discussed it as the future of marketing. Activists talked about it as the next frontier for activism. Political consultants talked about it as the future of political campaigns. And a new form of propaganda emerged.

The political side to the lulz

In her phenomenal account of Anonymous — “Hacker, Hoaxer, Whistleblower, Spy” — Gabriella Coleman describes the interplay between different networks of people playing similar hacker-esque games for different motivations. She describes the goofy nature of those “Anons” who created a campaign to expose Scientology, which many believed to be a farcical religion with too much power and political sway. But she also highlights how the issues became more political and serious as WikiLeaks emerged, law enforcement started going after hackers, and the Arab Spring began.

Anonymous was birthed out of 4chan, but because of the emergent ideological agendas of many Anons, the norms and tactics started shifting. Some folks were in it for fun and games, but the “lulz” started getting darker and those seeking vigilante justice started using techniques like “doxing” to expose people who were seen as deserving of punishment. Targets changed over time, showcasing the divergent political agendas in play.

Perhaps the most notable turn involved “#GamerGate” when issues of sexism in the gaming industry emerged into a campaign of harassment targeted at a group of women. Doxing began being used to enable “swatting” — in which false reports called in by perpetrators would result in SWAT teams sent to targets’ homes. The strategies and tactics that had been used to enable decentralized but coordinated campaigns were now being used by those seeking to use the tools of media and attention to do serious reputational, psychological, economic, and social harm to targets. Although 4chan had long been an “anything goes” environment (with notable exceptions), #GamerGate became taboo there for stepping over the lines.

As #GamerGate unfolded, men’s rights activists began using the situation to push forward a long-standing political agenda to counter feminist ideology, pushing for #GamerGate to be framed as a serious debate as opposed to being seen as a campaign of hate and harassment. In some ways, the resultant media campaign was quite successful: major conferences and journalistic enterprises felt the need to “hear both sides” as though there was a debate unfolding. Watching this, I couldn’t help but think of the work of Frank Luntz, a remarkably effective conservative political consultant known for reframing issues using politicized language.

As doxing and swatting have become more commonplace, another type of harassment also started to emerge en masse: gaslighting. The term comes from “Gaslight,” a 1944 film starring Ingrid Bergman (based on the 1938 play “Gas Light”). The film depicts psychological abuse in a domestic violence context, where the victim starts to doubt reality because of the various actions of the abuser. It is a form of psychological warfare that can work tremendously well in an information ecosystem, especially one where it’s possible to put up information in a distributed way to make it very unclear what is legitimate, what is fake, and what is propaganda. More importantly, as many autocratic regimes have learned, this tactic is fantastic for seeding the public’s doubt in institutions and information intermediaries.

The democratization of manipulation

In the early days of blogging, many of my fellow bloggers imagined that our practice could disrupt mainstream media. For many progressive activists, social media could be a tool that could circumvent institutionalized censorship and enable a plethora of diverse voices to speak out and have their say. Civic minded scholars were excited by “smart mobs” who leveraged new communications platforms to coordinate in a decentralized way to speak truth to power. Arab Spring. Occupy Wall Street. Black Lives Matter. These energized progressives as “proof” that social … [more]
Media  Journalism  AttentionEconomy  db 
17 hours ago by walt74
Alt-Left Out – Media manipulation outside the far-right
In May, Alice Marwick and I released Media Manipulation and Disinformation Online — a report that showed how certain far-right online communities work together to manipulate mainstream media narratives and spread misinformation. These groups are often called the “alt-right” but are more accurately a loose collection of internet trolls, conspiracy theorists, white nationalists, and misogynists. Members of these subcultures work in combination with right-wing media personalities and outlets — and even elected officials like Donald Trump — to amplify misleading or false stories and messages.

Our report focused almost exclusively on far-right groups, and many readers asked us whether anything similar was happening on the left. Some pointed to the apparent growth of the “alt-left” and the popularity of Russia-focused conspiracy theorists like Louise Mensch as evidence that the left is “just as bad” as the right when it comes to misinformation. A recent piece in The Atlantic makes a similar claim, arguing that hyper-partisan media outlets on the left have a misinformation problem.

Our research didn’t find the same levels of media manipulation and misinformation spreading on the left — or anywhere besides the far-right.

Why is this? How can we account for the differences? And why might it seem like things are “just as bad” on the left when it comes to misinformation? Since releasing our report, I’ve been exploring these questions.

Source: Pro-Bernie Sanders Facebook group “Donald Trump’s Crank Memes Symposium.”
“Alt-Left”

Ironically, the sense that the left is “just as bad” comes in part from far-right media manipulation. Groups on the right are adept at exploiting the media’s interest in remaining “even-handed.” Journalists often adhere to their professional norm of objectivity by promoting “both sides” of a story even when one side is less valid or plainly incorrect. Using this, the far-right has promoted a false equivalency between themselves and the far-left in recent months.

The term “alt-left” is itself an example of this false equivalence. It was first used in 2016 by far-right websites like World Net Daily as a response to the widespread use of the term “alt-right.” Before long, more mainstream conservative media figures like Sean Hannity and Lou Dobbs had adopted it. Then-candidate Donald Trump himself used the term in August 2016 when he told Anderson Cooper, “Frankly, there’s no alt-right or alt-left. All I’m embracing is common sense.”

The term has had significant staying power. While the right has used it to suggest a strain of liberal extremism that is diametrically opposed to their ideas, members of the left have adopted it to call out progressives they feel resemble the far-right. A Vanity Fair article used it as an updated version of the “Bernie Bro” epithet, denigrating “dude-bros and ‘purity progressives’” who loathe Hillary Clinton and reject identity politics.

The difference between the two “alts,” as the Washington Post points out, is that one “was coined by the people who comprise the movement” and “the other was coined by its opponents and doesn’t actually have any subscribers.” The term “alt-right” was developed by white nationalist Richard Spencer as a way to put a fresh coat of paint on very old racist and sexist ideas; the term “alt-left” is used to label various groups as extreme. In this way, the far-right has effectively re-branded both itself and the left to indicate that “the left is just as bad” and simultaneously excuse their own racism and misogyny.

Dis/unity

In our report, Alice and I wrote that, despite significant ideological differences, a wide range of far-right groups find common ground in their mutual hatred of the mainstream media. These groups come together to work toward the common goal of trolling reporters and spreading false narratives. In the process of collaborating, they validate their distrust of the media and create a loosely united front.

Last week, for example, a wide range of far-right groups united against CNN after it identified the creator of an anti-CNN meme posted by Trump. CNN published the creator’s Reddit username and said it reserves the right to publish his real name in the future, which far-right groups interpreted as blackmail. For several weeks, rifts had been forming between different far-right ideological factions, but the CNN incident swiftly gave these groups a common cause: alt-right Twitter users quickly got #CNNBlackmail to trend; 8chan trolls doxed CNN employees; and neo-Nazis on The Daily Stormer made more anti-CNN memes.

The idea of the “alt-left” is misleading because it falsely suggests that ideologically diverse groups on the left have united in a similar way. In reality, the adversaries of the far-right are largely unrelated to each other. For example, the far-right sees Antifa, a subculture devoted to combating fascism through street protests, as one of its main opponents. But the far-right also targets Black Lives Matter and other groups dedicated to activism around race, gender, and sexuality. Libertarian branches of the far-right often focus on communists and socialists as their logical opposites.

While there are elements of these groups that overlap with each other — for example, some Antifa chapters work together with communist groups — they remain fundamentally separate. As the adoption of the term “alt-left” shows, some aspects of these groups actually overlap with the far-right as much as with each other. These radical spaces, then, are messy, varied, and don’t necessarily exist along a linear spectrum.

The far-right’s adversaries have their own reasons to distrust the media, but their responses have not coalesced into a unified hatred. Antifa groups express concerns that fascist governments can easily manipulate news to spread propaganda. Black Lives Matter supporters argue that the media has historically ignored police violence against black people. And many communists and socialists feel that the news is primarily guided by corporate interests.



These criticisms are legitimate, but none of these groups have banded together to act against the mainstream media; instead, they have each developed unique responses. Antifa groups exploit the media’s love of novelty and sensationalism to promote their agenda (it was an Antifa protester who infamously punched Richard Spencer in the face on camera, seeding a series of humorously soundtracked and “remixed” viral videos of the incident). Black Lives Matter activists use collective online action like trending hashtags to bring attention to their cause. Communists and socialists often turn to their own news sources, which are not profit-driven.

During the 2016 election, the far-right also rallied around Donald Trump to represent their wide range of interests, whether it was anti-immigration, white nationalism, or men’s rights. Their adversaries never rallied around Hillary Clinton in the same way. Communists and socialists remained skeptical of her corporate ties, and many progressives who supported Bernie Sanders were reluctant to switch allegiances once he lost the Democratic primary. Antifa groups, while often dubbed “leftist,” tend to distance themselves from mainstream politics altogether.

Conspiracy

Some readers of our report argued that left-leaning partisan news outlets and conspiracy theorists spread misinformation just like those on the right. However, research conducted during the 2016 election does not support this claim: the creators of for-profit fake news websites claimed that false anti-Clinton content was more profitable than false anti-Trump content, and an extensive report on fake news found that the Facebook pages sharing the most misinformation during the election were pro-Trump groups.

This may be in part because partisan news plays a different role on the right and the left. A Harvard study found that throughout the 2016 election, right-wing media consumers rejected mainstream news altogether in favor of hyper-partisan outlets like Breitbart News and Infowars. Left-leaning consumers, meanwhile, still largely shared the most content from “traditional” mainstream sources like The New York Times and CNN.

In other words, for the left, hyper-partisan news was a mainstream media supplement, while on the right, it was a replacement.

However, these studies focused on news consumption before the election, and now a number of left-leaning publications have published articles anxiously claiming that liberals are turning to conspiracy theories out of feelings of powerlessness. The most popular example they cite is “Conspiracy Queen” Louise Mensch, who promotes baseless theories about Putin and Russia’s involvement in U.S. politics. Mensch, at first glance, does seem like a progressive version of Alex Jones, the famous right-wing conspiracy theorist and founder of media outlet Infowars; they have both built careers by promoting conspiracy theories that resonate in today’s political climate.

But equating them minimizes the outsized impact Jones’s ideas have on mainstream political discourse. Jones has a much bigger online following than Mensch, and a network of influential media figures and politicians amplify his ideas even further. Donald Trump consistently reiterates Alex Jones’ theories verbatim, broadcasting them to his supporters and automatically making them newsworthy to mainstream outlets. Sean Hannity has devoted full weeks of Fox News coverage to Jones’s conspiracy theories about Hillary Clinton’s health and the murder of DNC staffer Seth Rich.

Left-leaning media outlets and Democratic politicians have largely ignored or rejected Mensch’s theories. The closest comparisons are that Keith Olbermann retweeted her a few times and Senator Ed Markey repeated a theory of hers on CNN — before apologizing and retracting … [more]
Left  AltLeft  db 
17 hours ago by walt74
The ‘Ironic Nazi’ Is Coming to an End
Over the weekend, the alt-right descended upon Charlottesville, Virginia, in full force: The bearded militiamen, the shield-toting National Vanguard, and of course, the oddly coiffed representatives of the irony wing of the far-right movement. People like Tim Gionet, known under the nom de troll Baked Alaska, and Millennial Matt.

Millennial Matt, who frequently tweets “ironic” jokes about the Holocaust, and Gionet, who is similarly fond of “jokes” like Photoshopping people’s faces into cartoons of concentration-camp ovens, were among the most prominent faces at this weekend’s Unite the Right event. Matt was featured prominently in photographs of Friday night’s terrifying torchlit march; Gionet was so excited that he created hyperstylized meme renditions of the infamous “14 words” of white nationalists (“We must secure the existence of our people and a future for white children”).

By Sunday, after the gathering was shut down by police and a fellow white nationalist careened through a crowd of counterprotesters, killing one person and injuring many more, both Gionet and Matt had adjusted their tone considerably. Gionet, who was maced by unknown assailants, tweeted, “We must come together as a country and try to understand each other peacefully. We can’t continue to scream nazi or sjw back & forth.” Matt, whose account was suspended, posted a video of himself talking to the camera, his voice shaking: “I’m usually a jokester; I do a lot of comedy, but there’s nothing funny about threatening people’s lives, threatening people’s families.”


[Embedded tweet] mème brûlée 2 (@memebrulee2), 12:52 AM - Aug 14, 2017: “@Millennial_Matt has made a statement on #Charlottesville”
Over the last few years, the world of the internet and the world outside of it have been slowly merging, especially as media institutions like Twitter, YouTube, and Facebook have become the primary method of communication and organization for most people. It’s hard not to see this in many ways as a disaster, watching the rise of organized white nationalists and the election of a president whose sole talent is demanding attention. But this weekend was also a reminder that the ongoing interconnection of the internet and “meatspace” (as the physical world is known in online vernacular; I’m sorry) doesn’t mean that the offline world has to play by the same rules as the online. It also means that the rules of the real world will increasingly influence the space of the internet.

The early decades of the internet made it easy to separate one’s online personality from one’s offline personality by way of pseudonymity and limited functionality. It used to be that on the internet, nobody knew you were a dog, in part because there just wasn’t a whole lot to do on the internet. But as the network’s function has ballooned to encompass a dominant part of modern society, more attention is being paid to how what happens online affects what happens offline. People aren’t simply chatting behind screen names, they’re organizing and fighting and blurring the line between aspects of life that used to be kept separate. There’s a reason why the Klan wears hoods.

But the shield of online anonymity is still being wielded widely, perhaps no more aggressively than on right-wing gathering spaces online, like 4chan’s /pol/ board and the murkiest corners of Twitter. 4chan is where the concept of the “ironic” Nazi and white nationalist was born — the racist, homophobic, anti-Semitic users who say the awful things they say not necessarily because they believe them, but because it makes other people angry or scared. That’s textbook trolling: saying stuff that is first and foremost intended to get a rise out of other people. This is the discursive tradition that produced Gionet and Millennial Matt.

The Ironic Nazi exists in part because the internet minimizes friction. Friction includes anything that might stop someone from using a particular service; things like having to create a user account, tying that account to an email address, filling out a CAPTCHA, filters that prevent obscene messages from being sent or published, or even long load times. By reducing the mental overhead required for publishing, it becomes easier to act without thinking. Online platforms benefit significantly from this approach, meant to lower inhibitions. Making it easier for people to share and post means making it easier for people to share and post heinous things. It’s why 4chan, which does not require an account, and Twitter, which limits the verbosity of what people say, are thought of as potent breeding grounds for the Ironic Nazi.

The rise of frictionless, anonymous spaces has been an important test for the strength of free speech, because they remove the friction that the real world insists upon — anything from the difficulty of finding a wide audience for your racist pamphlet to the social consequences of spouting hate speech. White nationalists hiding behind the label of “troll” — like Weev, Baked Alaska, Millennial Matt, and their thousands of anonymous comrades — can spout shit online with no work or investment, and when called on it, dance away with more jokes or claims that it’s all just talk. It’s why horrible stances are framed as “satire” or “social experiments” when the blowback happens. After all, you can’t definitively prove the intentions of an anonymous online comment, and there are a lot of bored teenagers (and adults) operating with impunity.

What happens when that modus operandi translates into the physical world, as it did in Charlottesville, when torch-bearing, flag-waving Nazis and white supremacists turned the city into a violent arena? It’s far easier to write “Hitler did nothing wrong” online and send it to nobody in particular than it is to approach someone on the street and say it to their face. The meatspace is all friction. And I don’t mean conflict here; I mean that there are constant barriers of entry. The Ironic Nazi is framed as a product of how easy platforms make it to be an asshole online. The ones who came to Charlottesville were the exact opposite: focused, methodical, and intentional in their efforts. For Gionet, it meant coordinating travel, lodging, and times to meet up with other demonstrators. It meant going out of his way to stand with literal white supremacists, and investing time, money, and possible bodily harm to do so.

Consider the words of college student Peter Cvjetanovic, a picture of whom, mid-shout while holding a tiki torch, circulated widely on social media this weekend. “I hope that the people sharing the photo are willing to listen that I’m not the angry racist they see in that photo,” he told news station KTVN, despite having traveled to Charlottesville specifically to protect the legacy of Confederate general Robert E. Lee. In the real world, it’s not “just talk” anymore.

When you break it down, the only thing people can do on the internet — contained entirely to the internet — is talk. It is all discourse, and hypotheticals, and bluster, and trolling. But it can do real harm, because as soon as that conversation spills into tangible space, the hypothetical vanishes. Even without overt violence, swarming a public space is an act intended to show force and to intimidate. Men carrying guns, wearing swastikas, and performing a Nazi salute isn’t physical violence, but it isn’t peace, either. The physical world is not a Boolean. The rules of online discourse do not translate to the meatspace, despite what trolls (an ill-fitting title for this type of public spectacle) might think. When you stand with white supremacists on the street, you are literally standing with white supremacists. There is nothing ironic about that.
AltRight  Trolls  Nazis  db 
17 hours ago by walt74
Should We “Stop Equating ‘Science’ With Truth”?
Actually: no.

In the modern world, there are ever fewer reasons to maintain the distinct roles of men and women, which evolved over millions of years. But to imagine that we are not living with that inheritance is to reject not just science, but all forms of logic and reason.

The message that liberates women is not: men and women are the same, and anyone who tells you different is oppressing you. The message that liberates women is: men and women are different. (And in fact, everyone who is intellectually honest knows this—see Geoffrey Miller’s excellent point regarding the central inconsistency in the arguments being presented by the control-left.) And not only are men and women different at a population level, but our distinct strengths and interests allow for greater possibility of emergence in collaboration, in problem-solving, and in progress, than if we work in echo chambers that look and think exactly like ourselves. Shutting down dissent is a classic authoritarian move, and will not result in less oppression. You will send the dissenters underground, and they will seek truth without you.

Evolutionary biology has been through this, over and over and over again. There are straw men. No, the co-option of science by those with a political agenda does not put the lie to the science that was co-opted. Social Darwinism is not Darwinism. You can put that one to rest. There are pseudo-scientific arguments from the left. Gould and Lewontin, back in 1979, argued, from a Marxist political motivation, that biologists are unduly biased in favor of adaptive explanations, which managed to confuse enough people for long enough that evolutionary biology largely stalled out. And, perhaps most alarming, there are concerns that what is true might be ugly. Those who would impose scientific taboos therefore suggest that it is incumbent on scientists not to ask certain questions, for fear that we reveal the ugly. That, I posit, is what underlies the backlash against Damore’s memo.

To which science and scientists need to respond: the truth is not in and of itself oppressive. To the extent that selection has produced differences between groups, such as differences in interests between men and women, denying the reality of that truth is hardly a legitimate response.

People often imagine that when a biologist argues that a pattern is the product of adaptive evolution, they are justifying that pattern. Philosophers have named this confusion the naturalistic fallacy, in which “what is” is conflated with “what ought to be.” Every good evolutionary biologist knows to dismantle such thinking in their own and their students’ heads as quickly as possible. Only by knowing what is, however, can we have a chance of structuring effective, society-wide responses that might actually change some of what is, when that is desirable.

Evolutionary biology is not ‘splaining, man- or otherwise. In contrast, the control-left and, more specifically, Chanda Prescod-Weinstein in her recent Slate piece, most definitely are.

[Embedded tweet] Dr. Chanda 🇧🇧 (@IBJIYONGI), 1:38 AM - Aug 10, 2017: “It's time for scientists to pro-actively start doing better: learn history to avoid making the past's mistakes. http://www.slate.com/articles/health_and_science/science/2017/08/evolutionary_psychology_is_the_most_obvious_example_of_how_science_is_flawed.html” The linked slate.com article: “Science” Is One Reason the Google Memo Happened (“Evolutionary psychology is just the most obvious example of science’s flaws.”)
Allow me to explain. ‘Splaining is different from explaining. ‘Splaining uses authority rather than logic to make points. By contrast, explaining, as in the way of science, tries to minimize assumptions and black boxes, and return to first principles whenever possible. Authority is not what scientists use to seek truth. Reality doesn’t care about degrees or gravitas. Working with the scientific method, we follow a meandering path, with many wrong turns and dead-ends. The scientific method can be remarkably inefficient, but we tolerate that inefficiency because of what it buys us: Science is self-correcting. Over time, our answers are ever better, and more in line with objective reality.


It is of course true that, in the European tradition of scientific investigation, white men have mostly been the people doing science. And who does the science will affect what questions are asked. But the scientific method, when deployed correctly, protects us from getting biased answers to those questions. More diversity amongst scientists will diversify the questions being asked, but should not alter the answers to those questions.

On Damore’s list of left vs. right biases (which he includes to point out that we need perspectives from both “sides”), he has “humans are inherently cooperative” as a left bias, and “humans are inherently competitive” as a right bias (read his original memo here). In this case, both are true, simultaneously. It doesn’t even really depend on context. Humans are both competitive and cooperative. Imagining one without the other is missing a big part of what it is to be human. Within evolutionary biology, this isn’t controversial. Furthermore, it’s true not just for humans, but for all species that are long-lived, social, and have both long developmental periods (childhoods) and significant generational overlap. It’s true within dolphins, parrots, wolves, chimps, crows, and elephants, to name just a few. Competition for resources—be those resources food, water, mates, habitat, reputation, or something else—is always present. But sociality is inherently cooperative, and sociality has evolved and persisted many, many times, across a diversity of environmental, developmental, and genetic landscapes.

It is also true that a basic misunderstanding of descriptive statistics—of population-level thinking—pervades the backlash to the Google memo. Damore includes this graphic, and writes, “many of these differences are small and there’s significant overlap between men and women, so you can’t say anything about an individual given these population level distributions.”


An illustration from James Damore’s memo contrasting overlapping bell curves with a binary.
A binary population (as in the lower graph) is one in which there are two and only two possible states, and those states themselves are invariant. On and off, ones and zeroes. Binaries exist in the world, but the more complex the system, the more emergent the question. Within science, the farther from physics and math, and the closer towards biology you get, the less binary the landscape is.

A bimodal population, by comparison (as in the upper graph), also has two states, but within each of those states, there is variance. Average individuals from the two states will be different in predictable directions: on average, men are taller than women. But, depending on how far apart those two modes are, individuals who belong to population one may look much more like individuals from population two. This does not put the lie to the category—we all know some very tall women. It reinforces the fact that there is variance in the population. Variance is not a refutation of biology; it is what biology, in the form of selection, acts on to sculpt organisms from noise.
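
(To make the bimodal-with-overlap point concrete, here is a small simulation sketch, using invented means and standard deviations that loosely echo the height example rather than figures from the memo: two populations can differ clearly on average while individuals overlap heavily, which is nothing like a true binary.)

    # Illustrative simulation only: the means and standard deviations below are invented
    # for demonstration; they are not measurements cited in the memo or in this post.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Two overlapping bell curves (a bimodal population), e.g. "height" in cm.
    women = rng.normal(loc=162, scale=7, size=n)
    men = rng.normal(loc=176, scale=7, size=n)

    # The group averages differ in a predictable direction...
    print(f"mean difference: {men.mean() - women.mean():.1f} cm")

    # ...yet with this much overlap, a randomly chosen member of the "shorter"
    # population is taller than a randomly chosen member of the "taller" one
    # a non-trivial fraction of the time.
    print(f"P(random woman taller than random man): {(women > men).mean():.1%}")

    # Contrast with a true binary population: two invariant states, no variance within them.
    binary = rng.choice([0, 1], size=n)
    print(f"distinct values in the binary population: {np.unique(binary)}")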

There are male brains and there are female brains. But it’s not a binary. When, as the humans that we are, we categorize things, sometimes there are two possibilities, sometimes three, sometimes more. With mammalian sex, there are two. But identifying that there are two categories is not the same thing as saying that there is no variation within those categories. To imagine that two categories means rigid adherence to one of two ways of being is a failure of population-level thinking. And it is a failure of logic, too.

Some categories have members that are indeed invariant. Everywhere that gold atoms show up in the Universe, they are the same. Each isotope is invariant. But individual horses, and strains of salmonella, and male brains, do not all look alike. They are simultaneously of a type, and distinct within that type.

Male raptors are, on average, smaller than female raptors. Male frogs are, almost universally, more talkative than female frogs. Male humans tend to be stronger, shorter lived, and more interested in “things” than female humans.

Male and female are distinct and real. There are multiple levels on which to ascertain maleness and femaleness, and in some people, chromosomal, phenotypic, and brain sex aren’t the same. But the overwhelming majority of humans have convergence between their chromosomal, phenotypic, and brain sex. When we look at male brains and female brains, there are differences. Recognizing difference is not sexist, or oppressive. It is human.

Perhaps we should, in the spirit of inquiry and logic and the values of the Enlightenment, focus on understanding what is true, rather than throwing temper tantrums because we don’t like what’s true. Then we can begin to disentangle societal gender roles from the rule book of evolutionary sex differences that gave rise to them. Because the deepest truth is that those roles have an ancient and important meaning, which is now desperately out of date.
db  Science  EvoPsych  Googlememo 
18 hours ago by walt74
How The Pro-Trump Media Turned The Google Memo Into A National Story
A week ago, James Damore was anonymous. Now, the Google engineer who lost his job after he wrote a viral antidiversity screed has become an icon of the alt-right, with more than 40,000 Twitter followers (literally overnight) and a dedicated online constituency.

This didn't happen by accident: Damore's swift lionization as a casualty of both unchecked social justice warring and an unregulated Big Tech monopoly that silences dissenting voices is the work of a well-oiled pro-Trump media machine, one that's able to instantly bring its brand of digital insurgency to any skirmish. And in the case of Google, and Silicon Valley as a whole, the new right is digging in for a long, hard fight, with Damore at the center.

According to multiple self-proclaimed leaders of the new right, the Damore fiasco isn't just this week's latest outrage, but a tentpole moment in the larger online culture wars.

Silicon Valley offers a perfect target. For years, anti–social justice trolls and right-wing media personalities have railed against the tech industry for censoring their viewpoints and blocking them from services. And in recent months, the battle has intensified. BuzzFeed News reported recently that online payments and crowdfunding companies have “banned or hobbled the accounts of several prominent people and groups that promote far-right politics.” Last week, YouTube announced it would put extremist content “behind an interstitial warning” that prevents it from being monetized, recommended, or eligible for comments or user endorsements. And the pro-Trump crowd has attempted to counter with platforms of its own — a movement the New York Times called "alt-tech."
Googlememo  Media  Journalism  Memetics  db 
18 hours ago by walt74
Contra Grant On Exaggerated Differences
I.

An article by Adam Grant called Differences Between Men And Women Are Vastly Exaggerated is going viral, thanks in part to a share by Facebook exec Sheryl Sandberg. It’s a response to an email by a Google employee saying that he thought Google’s low female representation wasn’t a result of sexism, but a result of men and women having different interests long before either gender thinks about joining Google. Grant says that gender differences are small and irrelevant to the current issue. I disagree.

Grant writes:

It’s always precarious to make claims about how one half of the population differs from the other half—especially on something as complicated as technical skills and interests. But I think it’s a travesty when discussions about data devolve into name-calling and threats. As a social scientist, I prefer to look at the evidence.

The gold standard is a meta-analysis: a study of studies, correcting for biases in particular samples and measures. Here’s what meta-analyses tell us about gender differences:

When it comes to abilities, attitudes, and actions, sex differences are few and small.

Across 128 domains of the mind and behavior, “78% of gender differences are small or close to zero.” A recent addition to that list is leadership, where men feel more confident but women are rated as more competent.

There are only a handful of areas with large sex differences: men are physically stronger and more physically aggressive, masturbate more, and are more positive on casual sex. So you can make a case for having more men than women… if you’re fielding a sports team or collecting semen.

The meta-analysis Grant cites is Hyde’s, available here. I’ve looked into it before, and I don’t think it shows what he wants it to show.

Suppose I wanted to convince you that men and women had physically identical bodies. I run studies on things like number of arms, number of kidneys, size of the pancreas, caliber of the aorta, whether the brain is in the head or the chest, et cetera. 90% of these come back identical – in fact, the only ones that don’t are a few outliers like “breast size” or “number of penises”. I conclude that men and women are mostly physically similar. I can even make a statistic like “men and women are physically the same in 78% of traits”.

Then I go back to the person who says women have larger breasts and men are more likely to have penises, and I say “Ha, actually studies prove men and women are mostly physically identical! I sure showed you, you sexist!”

I worry that Hyde’s analysis plays the same trick. She does a wonderful job finding that men and women have minimal differences in eg “likelihood of smiling when not being observed”, “interpersonal leadership style”, et cetera. But if you ask the man on the street “Are men and women different?”, he’s likely to say something like “Yeah, men are more aggressive and women are more sensitive”. And in fact, Hyde found that men were indeed definitely more aggressive, and women indeed definitely more sensitive. But throw in a hundred other effects nobody cares about like “likelihood of smiling when not observed”, and you can report that “78% of gender differences are small or zero”.

Hyde found moderate or large gender differences in (and here I’m paraphrasing very scientific-sounding constructs into more understandable terms) aggressiveness, horniness, language abilities, mechanical abilities, visuospatial skills, tender-mindedness, assertiveness, comfort with body, various physical abilities, and computer skills.

Perhaps some people might think that finding moderate-to-large differences in mechanical abilities, computer skills, etc. supports the idea that gender differences might play a role in the gender balance of the tech industry. But because Hyde’s meta-analysis drowns all of this out with stuff about smiling-when-not-observed, Grant is able to make it sound like Hyde proves his point.

It’s actually worse than this, because Grant misreports the study findings in various ways [EDIT: Or possibly not, see here]. For example, he states that the sex differences in physical aggression and physical strength are “large”. The study very specifically says the opposite of this. Its three different numbers for physical aggression (from three different studies) are 0.4, 0.59, and 0.6, and it sets a cutoff for “large” effects at 0.66 or more.

On the other hand, Grant fails to report an effect that actually is large: mechanical reasoning ability (in the paper as Feingold 1998 DAT mechanical reasoning). There is a large gender difference on this, d = 0.76.

And although Hyde doesn’t look into it in her meta-analysis, other meta-analyses like this one find a large effect size (d = 1.18) for thing-oriented vs. people-oriented interest, the very claim at the center of the argument Grant is trying to rebut.

So Grant tries to argue against large thing-oriented vs. people-oriented differences by citing a meta-analysis that doesn’t look into them at all. He then misreports the findings of that meta-analysis, exaggerating effects that fit his thesis and failing to report the ones that don’t. Finally, he cites a “summary statistic” that averages the variation we’re looking for out by combining it with a bunch of noise, and claims the noise proves his point even though the variation is as big as ever.
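The arithmetic behind that complaint is easy to reproduce. Below is a toy Python sketch: the three large effect sizes echo numbers quoted above (d = 0.6 for physical aggression, 0.76 for mechanical reasoning, 1.18 for thing- vs. people-orientation), while the long tail of near-zero effects and the 0.35 cutoff for "small" are invented for illustration. Cohen's d here is simply the difference in group means divided by the pooled standard deviation.

```python
import statistics

# Toy illustration of how a summary statistic can drown out large effects.
# The three large d values echo figures quoted in the text above; the 97
# near-zero "filler" traits and the 0.35 cutoff are invented for the example.
salient = {
    "physical aggression": 0.60,
    "mechanical reasoning": 0.76,
    "things vs. people interest": 1.18,
}
filler = {f"trait_{i}": 0.05 for i in range(97)}
effects = {**salient, **filler}

small_share = sum(abs(d) < 0.35 for d in effects.values()) / len(effects)
print(f"'small' differences: {small_share:.0%}")              # 97%
print(f"mean |d|: {statistics.mean(abs(d) for d in effects.values()):.2f}")
print(f"largest d: {max(effects.values())}")                  # 1.18, untouched
```

Both printed summaries are true, and neither says anything about the handful of large differences that were the point of the argument.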

II.

Next, Grant claims that there are no sex differences in mathematical ability, and also that the sex differences in mathematical ability are culturally determined. I’m not really sure what he means [EDIT: He means sex differences that exist in other countries] but I agree with his first argument – at the levels we’re looking at, there’s no gender difference in math ability.

Grant says that these foreign differences in math ability exist but are due to stereotypes, and so are less noticeable in more progressive, gender-equitable nations:

Girls do as well as boys—or slightly better—in math in elementary, but boys have an edge by high school. Male advantages are more likely to exist in countries that lack gender equity in school enrollment, women in research jobs, and women in parliament—and that have stereotypes associating science with males.

Again, my research suggests no average gender difference in ability, so I can’t speak to whether these differences are caused by stereotypes or not. But I want to go back to the original question: why is there a gender difference in tech-industry-representation [in the US]? Is this also due to stereotypes and the effect of an insufficiently gender-equitable society? Do we find that “countries that lack gender equity in school enrollment” and “stereotypes associating science with males” have fewer women in tech?

No. Galpin investigated the percent of women in computer classes all around the world. Her number of 26% for the US is slightly higher than I usually hear, probably because it’s older (the percentage of women in computing has actually gone down over time!). The least sexist countries I can think of – Sweden, New Zealand, Canada, etc. – all have somewhere around the same number (30%, 20%, and 24%, respectively). The most sexist countries do extremely well on this metric! The highest numbers on the chart are all from non-Western, non-First-World countries that do middling-to-poor on the Gender Development Index: Thailand with 55%, Guyana with 54%, Malaysia with 51%, Iran with 41%, Zimbabwe with 41%, and Mexico with 39%. Needless to say, Zimbabwe is not exactly famous for its deep commitment to gender equality.

Why is this? It’s a very common and well-replicated finding that the more progressive and gender-equal a country is, the larger the gender differences in personality of the sort Hyde found become. I agree this is a very strange finding, but it’s definitely true. See eg Journal of Personality and Social Psychology, Sex Differences In Big Five Personality Traits Across 55 Cultures:

Previous research suggested that sex differences in personality traits are larger in prosperous, healthy, and egalitarian cultures in which women have more opportunities equal with those of men. In this article, the authors report cross-cultural findings in which this unintuitive result was replicated across samples from 55 nations (n = 17,637).

In case you’re wondering, the countries with the highest gender differences in personality are France, Netherlands, and the Czech Republic. The countries with the lowest sex differences are Indonesia, Fiji, and the Congo.

I conclude that whatever gender-equality-stereotype-related differences Grant has found in the nonexistent math ability difference between men and women, they are more than swamped by the large opposite effects in gender differences in personality. This meshes with what I’ve been saying all along: at the level we’re talking about here, it’s not about ability, it’s about interest.

III.

We know that interests are highly malleable. Female students become significantly more interested in science careers after having a teacher who discusses the problem of underrepresentation. And at Harvey Mudd College, computer science majors were around 10% women a decade ago. Today they’re 55%.

I highly recommend Freddie deBoer’s Why Selection Bias Is The Most Powerful Force In Education. If an educational program shows amazing results, and there’s any possible way it’s selection bias – then it’s selection bias.

I looked into Harvey Mudd’s STEM admission numbers, and, sure enough, they admit women at 2.5x the rate of men. So, yeah, it’s selection bias.
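The selection-bias arithmetic is worth spelling out. A minimal sketch with a hypothetical applicant pool (Harvey Mudd's real numbers are not given here, so these figures are purely illustrative) shows how admitting women at 2.5x the male rate can turn a 30%-female pool into a roughly 50%-female admitted class.

```python
# Illustrative only: applicant counts and admission rates are invented,
# not Harvey Mudd's actual figures.
applicants = {"women": 300, "men": 700}       # hypothetical CS-interested pool
admit_rate = {"women": 0.50, "men": 0.20}     # women admitted at 2.5x the male rate

admitted = {g: applicants[g] * admit_rate[g] for g in applicants}
share_women = admitted["women"] / sum(admitted.values())
print(f"admitted class: {share_women:.0%} women")   # ~52% from a 30% female pool
```

Whatever the real pool looks like, the point is that a jump from 10% to 55% of majors can be produced at the admissions stage without any change in who applies.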

I don’t blame them. All they have to do is cultivate a reputation as a place to go if you’re a woman interested in computer science, attract lots of female CS applicants, then make sure to admit all the CS-interested female applicants they get. In exchange, they get constant glowing … [more]
Feminism  Science  Statistics  Googlememo  db  ScottAlexander 
18 hours ago by walt74
Das politisch korrekte Märchen von der weiblichen Programmierung
So much for "digital natives": how the digitally naive construct their myths.

In connection with the Google memo affair, the legend is once again making the rounds that computer science is really a women's field, that programming was invented by women, that women are far better at it and far more qualified, and that only oppression explains why IT today is predominantly male. See, for example, the Tagesspiegel.

I want to use this example to show how the uninformed, people who simply don't know the history of what actually happened, piece together a fantasy world and try to prop it up with supposedly meaningful numbers (nowadays called "data journalism", though it is mostly empty talk, because it skips the qualitative analysis and passes correlations off as causation).

War

What is almost never mentioned about computer science and the female quota: computers and computer science (and the Internet, too) are, in their timing and their motivation, products of war. Above all, the breaking of the Enigma, first with the electromechanical Bombe, operated by the female Wrens, provided a crucial impulse that led first to the construction of Colossus, also operated by women and likewise used for codebreaking, and later to ENIAC, which the army used for ballistics calculations. Quoting Wikipedia:

ENIAC was programmed by connecting its individual components with cables and setting the desired operations on rotary switches. ENIAC was programmed by women, the "ENIAC women": Kay McNulty, Betty Jennings, Betty Holberton, Marlyn Wescoff, Frances Bilas, and Ruth Teitelbaum. They had previously carried out ballistic calculations for the military on mechanical desk calculators.

Please keep that in mind; I will come back to it below.

Most men at the time were at war, or dead. One has to realize that in England and the USA there were hardly any men left for civilian tasks. In the USA, baseball games were played by women back then because the men of playing age were all at war. And in London, near Parliament, there is a memorial commemorating how many men's occupations were carried out by women during the war, firefighting and bricklaying among them.

What is remarkable is that the women performed these jobs well and successfully, but, absent pressure and necessity, also gave them up again once enough men were back.

Oddly, feminists never ask what became of the other occupations. I am not aware of anyone complaining that firefighting or road construction were taken over by men again after the war. Nor do I know of anyone accusing the manufacturers of fire engines, as is done with computers, of discriminating against women because fire engines are designed to appeal to boys. Nobody has yet demanded pink fire engines, or firewomen sent out in white patent-leather boots with high heels, so that women might enjoy it too. With computers, however, exactly that accusation is made.

That computers were programmed predominantly by women back then because there was a war on, and not because women are naturally so much more talented, goes unmentioned by feminists.

Construction ranked above programming

There was not yet much to program on the computers of that era; they were not yet universal machines but relatively simple automated arithmetic units, meant at first to replace desk calculators and their "stereotyped" operation. Even the "storage capacity", whether in patch cords, switches, or memory cells, was tiny; people experimented with punched tape made from old film reels.

Feminists today like to present it as if computers had simply fallen from the sky back then, as if they were just there, the way a Mac sits around today on the desk of a quota hire. The thing was just standing there, men were too dumb for it, and women had fun letting their intelligence loose on the machines. Because women back then were totally keen on computing differential equations for projectile trajectories. Men supposedly had nothing to do with any of it and only pushed their way in later.

That, of course, is rubbish.

This was the early period of computing; in principle there were no computers yet, certainly no universal ones. The essential achievement at the time was not programming them but inventing, designing, and building them. Once built, they were already fairly close to their purpose.

Because there were 5, or 20, or even 200 women programming these machines back then, people today act as if it had been a purely female trade. That the essential achievement of the era lay not in operating the machines but in building them (and designing and building computers is also computer science, not just, as feminists like to present it today, programming in the sense of tinkering on top of a finished machine), and that the machines first had to be invented and constructed by men, is selectively passed over. People love photos of women in 1940s uniforms or 1950s skirts prettily arranged in front of walls of computer cabinets, but say not a word about where those machines came from or who built them. Computers are ordered from Amazon and arrive the next day, as everyone knows.

Manual work

Programming at that time was not really programming at all but rather wiring: pulling patch cords and setting switches, genuinely manual work. Controlling those interconnections electronically and storing them in memory, encoded as a program, was not invented right at the start; it came essentially with the von Neumann architecture. At first the task was to wire arithmetic units together like building blocks for each particular use, not unlike analog computers. That was enormous manual labor and not far removed from needlework. On top of that came routine work: checking relays and tubes, endlessly typing numbers on keyboards. It did not have terribly much to do with "computer science"; it was closer to assembly-line work. Monotonous fine motor activity. Typical women's work at the time.

Thus SPIEGEL, too, claims that programming was a women's domain and was invented by the "female pioneers", but concedes in the same article:

Against this background it is not so surprising that in 1967 Cosmopolitan printed, next to the article "Why a Girl Should Own a Pooch", an article about women in IT.

But the fact that women were specifically recruited for computing jobs had another background as well: "Programming was initially conceived as low-status clerical work, in other words work for women. Only gradually was the discipline deliberately transformed into a scientific, male, high-status field," writes the American historian Nathan Ensmenger in an essay whose title translates as "How Programming Became a Male Domain".

The more complicated it became and the more one had to learn, the more sharply the share of women fell.

Quantity

Few women

The whole argument is, of course, also a quantitative swindle: you take the share of women among the programmers of that era and extrapolate it to today.

In reality, depending on the machine, these were only a few dozen or a few hundred women, a vanishingly small and in no way representative group relative to women as a whole. It is rather like arguing that because Bertha Benz was at the wheel for the first car journey, the natural quota of female drivers must be 100% for all time. Especially since, as I explained above, this was not a representatively selected group; other factors were at play. If you had pulled 100 men and 100 women off the street at random and tested who could learn to program better, that would have been a study. But simply concluding from the fact that a few women did the job that all women, women as such, can do it is humbug.

Ultimately this is the old "poets and thinkers" argument again. At some point we had two dozen poets and thinkers, all long dead, so we are happy to be the "nation of poets and thinkers". Today we have a few million illiterates, but we certainly do not want to be a nation of illiterates. Whatever suits and flatters gets generalized and turned into a stereotype; whatever does not fit the narrative gets individualized.

People are only too happy to assume that men are aggressive and violent, on account of testosterone. That fits the narrative, so it gets generalized. But if someone said that men are more interested in computers because of testosterone, it would of course be indignantly rejected. This is also known as confirmation bias. And if there were 20 women who could program (oddly, it is always roughly the same three who get named: Ada Lovelace, Grace Hopper, and the third escapes me), that is of course generalized to all women. Women can do this. Better than men. Just like that. If there are 20 million male programmers, that of course must not be generalized to men.

Few computers

The profession of computer scientist did not yet exist at the time, because there were not yet enough computers to make a profession out of it.

Quote:

"I think there is a world market for maybe five computers." So predicted Thomas Watson, chairman of IBM, in the war year 1943. And then it was none other than IBM that, for the PC, … [more]
Feminism  db  Coding  Tech 
20 hours ago by walt74
