scritic + teaching   1371

Study claims Airbnb is great for your neighborhood’s economy, if it’s predominantly white - The Verge
100j

Neighborhoods with a booming population of Airbnb guests typically did see a growth in people working for local restaurants, along with a surge in the share of Yelp reviews, which was a measure the researchers used to reconfirm their employment findings. But these findings didn’t carry over into the neighborhoods that were predominantly populated by people of color, even if Airbnb rental numbers were just as high.

The reason, as Rahman told The Washington Post, might be because an “uncomfortable reality” is that some visitors might be less inclined to walk around and check out the local businesses in these neighborhoods, even though they might be drawn to how affordable the housing is. Instead, Rahman …
teaching 
4 days ago by scritic
the geography of friends
Use in 189 to see how ties work -- and how they map across states.
teaching 
19 days ago by scritic
Almost Human: The Surreal, Cyborg Future of Telemarketing - The Atlantic
How telemarketing is being reinvented with humans using bots and scripts
teaching 
28 days ago by scritic
Opinion | We’re Measuring the Economy All Wrong - The New York Times
Wow! Great piece on how we need new statistics. Use in 100g on statistics
polarization  teaching 
4 weeks ago by scritic
In-Class Activities | HASTAC
Has some in-class activities to try ...
teaching 
5 weeks ago by scritic
India Pushes Back Against Tech ‘Colonization’ by Internet Giants - The New York Times
100j

As India sets the new rules of the game, it is seeking inspiration from China. Although India does not want to go as far as China, which has cut off its internet from the global one, officials admire Beijing’s tight control over citizens’ data and how it has nurtured homegrown internet giants like Alibaba and Baidu by limiting foreign competition. At the same time, regulators do not want to push out the American internet services that hundreds of millions of Indians depend on.
platformization  teaching 
6 weeks ago by scritic
Volume 23, Number 9 - 3 September 2018
Emoji special issue in First Monday; 100j?
teaching 
6 weeks ago by scritic
A new school year. A new fight against affirmative action. This time at Harvard. - Vox
Use along with John Carson's history of intelligence testing? Or with Nicholas Lemann? 100g
teaching  polarization 
6 weeks ago by scritic
This new museum doesn’t want Instagram or crowds. Does that make it elitist? - The Washington Post
Museums face a unique set of challenges in an age that makes aesthetic value judgments based primarily on ideas of circulation and mass appreciation, driven by social media and the instantaneous and ubiquitous exchange of images. Concert halls, for example, can be built to a limited size to maximize acoustical values and interaction with the performers. Even a musical such as “Hamilton” can charge ridiculously high ticket prices to a limited audience without being considered elitist, simply because there is no other way to present the drama except to a limited live audience on a nightly basis. Museums, however, have spoken the language of mass access for so long that almost anything they do to control the experience — prohibit cameras or selfie sticks, for example — is likely to be seen as elitist. Glenstone will discourage taking pictures in the galleries and will instead invite visitors to engage with guides in the galleries, or look up information when they get home, or buy a book from the bookstore.
research  teaching 
6 weeks ago by scritic
How Women Took China Lake
Great article on what it meant to program; 100j
teaching 
6 weeks ago by scritic
Odd Numbers — Real Life
100j

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Algorithms and data could be misused, but were also responsible for enormous benefits — how could we turn back the clock on them now?

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether.

One of the algorithmic accountability movement’s greatest initial successes — getting the attention of corporate leaders — is limiting its larger political imagination. In an era of Trumpism, Tory chaos, and ethnonationalist resurgence, it is easy for academics to give up on trying to influence government policy and seek changes directly from corporate leaders. However, the price of that direct approach is translating one’s work into a way of advancing overall corporate goals — a distortion similar to the mistranslation of reality into code that provoked algorithmic-accountability scholars in the first place. Such corporate goals may help burnish scholars’ reputations at first, but eventually they need to boost the bottom line. Even monopolistic firms like Google, Amazon, and Facebook, which should have a much freer hand at engaging ethically than the run-of-the-mill corporate giant, are ultimately beholden to investors.
teaching 
7 weeks ago by scritic
Using Amazon Rekognition to Identify Persons of Interest for Law Enforcement | AWS Machine Learning Blog
An actual blog post by someone in law enforcement on how to use Amazon's Rekognition system. Use in 100j! (A minimal sketch of the kind of API call involved appears after this entry.)
teaching 
7 weeks ago by scritic
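For reference, a minimal sketch (not taken from the linked post) of the kind of boto3 Rekognition call such a workflow relies on -- searching an already-indexed face collection for matches to a probe photo. The collection name, region, and file path are hypothetical placeholders.

```python
# Minimal sketch, assuming a Rekognition face collection has already been
# created and indexed. "persons-of-interest" and "probe_photo.jpg" are
# hypothetical placeholders, not values from the linked post.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

with open("probe_photo.jpg", "rb") as f:
    image_bytes = f.read()

# Search the collection for faces similar to the one in the probe image.
response = rekognition.search_faces_by_image(
    CollectionId="persons-of-interest",
    Image={"Bytes": image_bytes},
    FaceMatchThreshold=80,  # minimum similarity score (0-100)
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```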
Amazon's Facial Recognition System Mistakes Members of Congress for Mugshots | WIRED
Good article to recommend on policing, face recognition and bias. 100j
teaching 
7 weeks ago by scritic
Levees, Slavery, and Maintenance – Technology's Stories
Great piece! Definitely have them read it in 100g, esp. on the question of maintenance
teaching 
7 weeks ago by scritic
The Court Case that Enabled Today's Toxic Internet | WIRED
100j; on how platforms were shielded from liability for user-posted content -- but remember that this is also what enabled YouTube to survive.
teaching 
8 weeks ago by scritic