Emma McNally’s Fields, Charts, Soundings Cartographies – SOCKS
Emma McNally’s work is an artistic cartography of imaginary nodes, network topologies, noise patterns, and musical notations. Traces and scatters shape an imaginary, poetic confluence of scientific advances in genetics, neuroscience, physics, molecular biology, computer systems, and sociology.

From a descriptive text on her Flickr profile:

“In Emma McNally’s work dense layers of carbon on paper create fields which offer themselves up to meaning: planes, vectors, topoi are overlaid, or coexist with swarms, shoals, marks laid out in rhythmic sequence.

The effect is of a continuous flux formed by a congruence of information systems: neural networks, contagion maps, sonar soundings, weather systems, water currents, charts plotting the migratory habits of deep-ocean mammals.

Focusing on rhythm as an expression of the dynamic of forming/unforming, McNally thinks this through graphically by highly charged percussive mark-making. Lines carry force, like the pulse of an ECG or a measure of seismic activity.

Ways in which the ‘matter’ or ‘noise’ of charged marks (unclaimed by frequencies or channels) combine, disperse and recombine into gatherings of static are explored. Passage is forged between differing rhythmic expressions: highly regularised, geometric systems of marks enter into configurations with chaotic swarms and fugitive marks.

Regularised, centralising and defining forces are disrupted, subverted and deterritorialised. The nomadic and fugitive are subject to forces that capture and formalise. Monolithic and viral tendencies mutually infiltrate.

Overall the attempt is made to maintain a state of flow, of passage between these forces where both are in danger of overrunning but are constantly overthrown – with the resulting mutations and proliferations played out.”
networks  information  systems  illustrations  infrastructure  electromagnetic_waves  lines 
4 hours ago
Fast and Free: New York's Vision for Public Wi-Fi Everywhere - YouTube
Wi-Fi is essential to New York City's strategy to give every resident and business access to affordable, reliable, high-speed broadband service everywhere in the city. Globally, Wi-Fi is the workhorse of the Internet. Currently Wi-Fi carries 60% to 80% of all broadband data traffic on smartphones, laptops, and other mobile devices, far more than cellular networks do. But a new technology is threatening the effectiveness of Wi-Fi – and its ability to create connectivity for all.

On Monday, May 2, New York City sent a letter to the FCC highlighting its concerns about the potential harm LTE-U poses to Wi-Fi. Read the letter here: https://static.newamerica.org/attachm...

New York City's innovative uses of Wi-Fi to make Internet access available, fast, and affordable for all New Yorkers include:

CityBridge's LinkNYC franchise, which will replace at least 7,500 payphones with kiosks offering free, high-speed Wi-Fi across the five boroughs

New York City's Economic Development Corporation's RISE:NYC resiliency initiative, which will fund the installation of resilient Wi-Fi networks to serve small businesses in areas impacted by Hurricane Sandy

Support for free public Wi-Fi in Chelsea, Harlem, downtown Manhattan, and downtown Brooklyn, as well as City parks, libraries, and train stations

Free broadband service to more than 21,000 residents of public housing, beginning with the Queensbridge Houses, the largest public housing development in the country

All of that free connectivity, though, may be at risk due to plans by cellular carriers like Verizon and T-Mobile to begin offloading data traffic onto the unlicensed frequencies of our public airwaves – on which Wi-Fi depends – to augment the licensed spectrum they currently use. The interference could slow or even shut down public Wi-Fi systems, shrinking access, undermining digital equity, and scrapping hundreds of millions of dollars earmarked for improving the social, digital, and economic equity of NYC.

Join New America for a conversation on the suite of initiatives that keep city systems and residents connected, and the forces that threaten to block their visions for equitable governance.
wifi  wireless  connectivity  broadband  infrastructure  digital_equity 
6 hours ago
Calling Bullshit — About
The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. So-called higher education often rewards bullshit over analytic thought. Startup culture has elevated bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit, then take advantage of our lowered guard to bombard us with second-order bullshit. The majority of administrative activity, whether in private business or the public sphere, often seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

We're sick of it. It's time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combatting it with effective analysis and argument.

What do we mean, exactly, by the term bullshit? As a first approximation, bullshit is language intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence.

While bullshit may reach its apogee in the political sphere, this isn't a course on political bullshit. Instead, we will focus on bullshit that comes clad in the trappings of scholarly discourse. Traditionally, such highbrow nonsense has come couched in big words and fancy rhetoric, but more and more we see it presented instead in the guise of big data and fancy algorithms — and these quantitative, statistical, and computational forms of bullshit are those that we will be addressing in the present course.

Of course an advertisement is trying to sell you something, but do you know whether the TED talk you watched last night is also bullshit — and if so, can you explain why? Can you see the problem with the latest New York Times or Washington Post article fawning over some startup's big data analytics? Can you tell when a clinical trial reported in the New England Journal or JAMA is trustworthy, and when it is just a veiled press release for some big pharma company?

Our aim in this course is to teach you how to think critically about the data and models that constitute evidence in the social and natural sciences.
epistemology  syllabus  big_data  pedagogy  methodology  statistics 
16 hours ago
Covert Cartographics – BLDGBLOG
The collections include state-of-the-art graphic tools for producing maps and other measured cartographic products, as well as the maps themselves. Organized by the decade of its production—including batches from the 1940s, 1950s, and 1960s—“each map is a time capsule of that era’s international issues,” as Allison Meier points out.

“The 1940s include a 1942 map of German dialects,” Meier writes, “and a 1944 map of concentration camps in the country. The 1950s, with innovative photomechanical reproduction and precast lead letters, saw maps on the Korean War and railroad construction in Communist China. The 1960s are punctuated by the Cuban Missile Crisis and Vietnam War, while the 1970s, with increasing map automation, contain charts of the Soviet invasion of Afghanistan, and the Arab oil embargo.”

But it’s the mapping tools themselves that really interest me here.

On one level, these graphic devices are utterly mundane—triangular rulers, ten-point dividers, and interchangeable pen nibs, for example, any of which, on its own, would convey about as much magic as a ballpoint pen. ...there is something hugely compelling for me in glimpsing the actual devices through which a country’s global geopolitical influence was simultaneously mapped and strategized.
mapping  cartography  tools  instruments  methodology 
16 hours ago
On Anthropolysis - e-flux Architecture - e-flux
As I and others have written, the reason we know that climate change is even happening at the nuanced degrees that we do is because of the measurement capacities of terrestrial, oceanic, atmospheric sensing meta-apparatuses that are at least representative of an industrial-technological system whose appetite is significantly responsible for the changes being measured in the first place. This correspondence may be the rule, not the exception, and for the Anthropogeny/Anthropolysis dynamic, a more crucial example is the relationship between oil and deep time. Finding oil was (and is) an impetus for the excavation of Earth, an ongoing project that turns up sedimentary layers of fossils and provides evidence of an old Earth and deep time. If not for the comprehensive disgorging of fossil fuels since the late nineteenth century, we would not have this Anthropocene, and if not for the economic incentive to look below and at rocks in this way, we may not have been confronted with the utter discontinuity between anthropometric time and planetary time. So, even if deep time is one of the ways that we learn to de-link social and phenomenological time from planetary time, its discovery was made possible by an industry that operated on nature with the local conceit that ecological time is subordinate to social time, and now we have the “accidental” fulfillment of that superstition by the Anthropocene's binding of social and geologic time. By pursuing the illusion as if it were true, we discovered, as a by-product, that it was false, but the by-product of doing so is that we made it true.
anthropocene  temporality  deep_time  climate_change  mining 
yesterday
Our Graduates Are Rubes
The pampering of students as customers, the proliferation of faux "universities," grade inflation, and the power reversal between instructor and student are well-documented, much-lamented academic phenomena. These parts, however, make up a far more dangerous whole: a citizenry unprepared for its duties in the public sphere and mired in the confusion that comes from the indifferent and lazy grazing of cable, talk radio, and the web. Worse, citizens are no longer approaching political participation as a civic duty, but instead are engaging in relentless conflict on social media, taking offense at everything while believing anything.

College, in an earlier time, was supposed to be an uncomfortable experience because growth is always a challenge. It was where a student left behind the rote learning of childhood and accepted the anxiety, discomfort, and challenge of complexity that leads to deeper knowledge — hopefully, for a lifetime.

That, sadly, is no longer how higher education is viewed, either by colleges or by students. College today is a client-centered experience. Rather than disabuse students of their intellectual solipsism, the modern university reinforces it. Students can leave the campus without fully accepting that they’ve met anyone more intelligent than they are, either among their peers or their professors (insofar as they even bother to make that distinction)....

Faculty members both in the classroom and on social media report that incidents like that, in which students see themselves as faculty peers or take correction as an insult, are occurring more frequently. Unearned praise and hollow successes build a fragile arrogance in students that can lead them to lash out at the first teacher or employer who dispels that illusion, a habit that carries over into a resistance to believe anything inconvenient or challenging in adulthood.
expertise  ego  advising  pedagogy  higher_education 
yesterday
Envisioning the Fully Integrated Library
Two trends are emerging in the development of academic libraries. On one hand, they are becoming more holistic learning environments, supporting a variety of needs. Students receive help finding information, but they also find help with writing, course tutoring, expertise with specialized technologies, and uniquely designed study spaces. Depending on the circumstances of each institution, the library may become a "lab outside the classroom," a one-stop learning facility, or a student version of faculty centers for teaching and learning.

On the other hand, libraries are becoming sophisticated research centers, supporting the manipulation, analysis, creation, and construction of knowledge. We see this in such diverse initiatives as data curation and visualization, digital humanities, and scholarly communication.

Concerns about library change frequently focus on the conflict that surrounds withdrawing books to make room for other services. These debates miss the point entirely. What is important is not whether the library removes books, but whether, and to what degree, library resources and services are integrated into teaching, learning, and research. Within this context, books, databases, library instruction, and the reference desk all deserve scrutiny....

we see it when the teaching of more-complex information skills is scattered across the curriculum. Or when a librarian is embedded in a course as a participant. Or when assignments are created in collaboration with librarians in a way that incorporates library resources, technologies, and information-skill development.
pedagogy  research  libraries  academic_libraries 
yesterday
Virus, Coal, and Seed: Subcutaneous Life in the Polar North - Los Angeles Review of Books
Anthrax is not the only ghost haunting the Arctic.
In the Arctic Circle, life seems to keep its own time. If you travel across the Barents Sea from Yamalo-Nenets, you’ll arrive at a Norwegian archipelago called Svalbard. It is an otherworldly place, inhospitable to most life yet starkly and sublimely beautiful. Roughly 2,600 intrepid people, most of them adult men, live here. But you can’t die in Svalbard. No, inhabitants are not immortal. Rather, their life cycles are abridged in mundane ways: Norwegian officials forcibly evict the sick, disabled, and elderly, shipping them back to the Norwegian mainland to end their days. You can’t be born in Svalbard either. The governor orders women in their third trimester to leave. Svalbard is not, as citizens call it, a “life cycle community” — no concessions are made for birth and death, and only able-bodied working adults are welcome....

The link between anthrax in Yamalo-Nenets and life in Svalbard is complicated, but key to understanding both is the climate and the ways in which arctic cold transfigures that which is old....

Platåberget is full of vaults and faults, graves and caves. The mountain is now a place to unearth coal and bury coal miners, to immortalize seeds and resurrect viruses. On Platåberget, viruses that lived and died in the past have lately erupted into the present; ruins of coal mines are persistently in the present; and seeds in the vault are artifacts of the present that are now buried for future disinterral. At the ends of the earth, time seems out of joint. Here in the polar north, viruses, coal, and seeds are geopolitical and climatological relics, telling tales of coal extraction, contested land claims, and crumbling empires. And, in the Arctic, geopolitics is decidedly climatological — punctuated by global war, Cold War, and global warming....

Longyearbyen is named for John Munro Longyear, an American capitalist whose name itself suggests a kind of temporal slackening. Longyear arrived in Svalbard and espied riches in the plentiful Triassic coal seams that marked the land, exposed by glacial gashes. Coal is, of course, dead organic matter. All of that shiny black sediment is the detritus of deciduous forests and puzzlegrass that flourished in a balmier Svalbard 65 to 23 million years ago, their dead tissue inspissated by heat and pressure until latent energy condensed into something combustible....

Even though coal mining in Longyearbyen is largely shuttered, its infrastructure remains scattered across the landscape. The dark skeletal remains of coal tipples, lift systems, and aerial tramway conveyors litter the surrounding mountains, looking much the way Store Norske left them in 1958 — they resist decay because the temperature is too cold for liquid water to rot wood. They are ruins, and will most likely remain so indefinitely....

What will become of life, then, as the climate warms and these glaciers recede — as ecological catastrophe joins geopolitical catastrophe to make this and every other place precarious and unlivable? In 1984, agricultural researchers from a Norwegian university decided to conduct what they termed a “hundred-year experiment.” They gathered a small collection of seeds and stored them underground in Mine 3 on a Platåberget pass just past the Longyearbyen airport. The interior of the coal mine maintains an ambient temperature between -2.5 and -3.5 degrees Celsius, far enough below freezing that, the researchers suspected, seeds would be naturally preserved. Checking one year to the next, the researchers confirmed the seeds’ suspended animation: none have germinated. The Norwegian scientists made a proposal to the UN’s Food and Agriculture Organization (FAO): since there was plenty more room in this mine shaft, now repurposed as a naturally occurring cryobank, other countries might want to pay a small fee in order to archive their own seeds. The UN turned them down, on the grounds that intellectual property disputes might arise if one country stored a significant amount of its national germplasm in another nation’s territory. The mine shuttered in 1996 when its thin coal seam was exhausted. The seeds stored in 1984 are still there....

But in the wake of Hurricane Katrina, Fowler began to wonder whether agricultural diversity could ever truly be secure if cities were so vulnerable to geopolitical and ecological disaster. He and Shands realized that the gene banks were located in places where the best technological infrastructure could be quickly dismantled by political strife or natural disaster — Nigeria, Colombia, Nairobi, Kenya, Nepal. It was then that Fowler recalled the Norwegian scientists whose failed proposal had crossed his desk years earlier at the FAO. Back then, he had nixed the proposal, but now he thought differently: a vault dug into the permafrost beneath Platåberget seemed as safe a place as any, and perhaps safer than most.
The Svalbard Global Seed Vault broke ground soon thereafter....

Gesturing downhill, he points out the air traffic control tower for the Longyearbyen airport, and explains that this location allows the air traffic controllers to keep an eye on the vault and sound an alarm if they notice an intruder.
A thin cement wedge piercing the frozen mountainside at a steep incline, the vault’s Brutalist exterior suggests how deeply it is lodged beneath the earth. Above the doors and along the roof is an installation of prisms and fiber-optic cables that reflect the midnight sun in the summer and glitter like the aurora borealis during the polar night. It looks like a post-apocalyptic bunker, which, I suppose, is exactly what it is....

The doors slam heavily behind us, and we face a long hallway, really a tube of corrugated metal sloping downward into the mountain. Everything is duplicated: ventilation, backup generators, and pumps. There’s no use for one water-pump, let alone two, in a hole beneath permafrost, but the building’s designers have prepared for a future when the permafrost has thawed. Engineers have planned ahead in other ways as well. For instance, they surveyed the mountain to ensure that the vault is nowhere near a coal seam. Their reasoning was that a century or more from now, when the vault is forgotten, miners may return to this mountain seeking coal seams, only to inadvertently drill into the vault. The engineers also accounted for a 70-meter sea level rise, which is an estimate of what would happen if all the glaciers in the world melted. They compounded that scenario with a tsunami, and then built the vault five stories above the predicted waterline. Engineers calculate that, given the current rate of climate change, the vault would remain below freezing even if the electricity went out for the next two centuries. How long did you build it to last, I ask? Fowler: “Essentially forever.”...

the room into which he escorts me next is wondrous indeed: a stark and cavernous antechamber of raw limestone hollowed into vaulted ceilings and washed in white reinforced concrete, rock rimed in frost. “I really enjoy being here,” Fowler murmurs, and his voice reverberates. The wall opposite the doors through which we entered is gently concave; to our left, two doors are offset, and a third door is on the other side of the parabolic bare wall. Fowler explains that they avoided putting any of the interior chambers directly opposite the door leading to the hallway so that “if someone were to fire a missile down here … it wouldn’t hit the place where the seeds are.” So, too, the wall is concave so that shockwaves — from a ballistic missile or a plane crashing into the mountain, for example — can reflect back toward the entrance instead of propagating deeper into the mountain and injuring the seeds....

Yet here is abundant life: 860,000 different varieties of crops, and 120,000 different strains of rice alone. Seeds are sealed in triple-ply, puncture-resistant vacuum packaging and then loaded into plastic crates, which are stacked on shelves. Looking inside one box, I find ampules of squash and bags of anise. Every major crop in the world is in this room — not just wheat, oats, barley, potatoes, lentils, soybeans, and alfalfa, but also heirloom seeds and forgotten landraces. Boxfuls of foraged grasses are stored cheek-by-jowl alongside sorghum, foxtail millet, bur clover, purple bush-beans, pigeon peas, Kentucky bluegrass, and creeping beggarweed. Every country in the world is represented, as are several countries that no longer exist. Colombia, North Korea, Russia, Taiwan, Ukraine, Switzerland, Nigeria, Germany, Israel, Syria, Zimbabwe, Tajikistan, and Armenia share shelf-space in this pastoral League of Nations. With over 90 million seeds deposited in the bank, India represents the largest crop diversity, nearly three times as much as Mexico, the next most prolific contributor.
On February 26, 2008, the day the seed vault opened, Pakistan and Kenya were first in line to store their seeds. The previous year, the disputed election of Mwai Kibaki in Kenya triggered ethnic violence against Kikuyus. Karachi had catastrophically flooded and been the scene of a bloody suicide bombing, and Benazir Bhutto was assassinated in Rawalpindi. One can speculate that, for Kenya and Pakistan, a cache in the Seed Vault is a way to refuse political and climatological vulnerability — to forecast a future that might, somehow, sustain life....

One shelf of the vault is half empty. Four years into the civil war and humanitarian crisis in Syria, violence barreling northward toward Aleppo jeopardized the Headquarters of the International Center for Agricultural Research in the Dry Areas (ICARDA). Hundreds of thousands of seeds were banked here in Svalbard, including some of the earliest strains of Levantine wheat and durum, which are more than 10 thousand years old. The Syrian gene bank, now relocated to Morocco and Lebanon, recently requested 30 thousand samples from its original … [more]
archives  climate_change  biology  svalbard  mining  death  temporality  ruins  infrastructure 
yesterday
Art In the Age of Obsolescence
There is, however, a rich tradition at The Museum of Modern Art of offsetting this trend through collaborations with academics and researchers. Through this, we are often able to build small-scale research projects that give students incredible real-world experience — and afford museum conservators the sort of research we wish we had more time for. About a year ago, I realized that Lovers was a perfect case study for a course I was teaching at New York University, called Handling Complex Media. The artwork is composed of a veritable cocktail of technologies and media formats: 35mm slides, analog video, robotics, software — you name it. So we pulled Lovers from storage, along with its two-inch-thick folder of documentation, and began the work of understanding just what we had....

Out came LaserDiscs, 35mm slides, speakers, wires, accessories, slide projectors, an eight-foot-tall metal tower containing video projectors with robotics to control which direction they are pointing, two flight cases full of behind-the-scenes control hardware and software, and a hefty folder containing documentation, manuals, installation specifications, and correspondence with the artist and his studio. Our art handlers carefully delivered all of this material to a small viewing room at MoMA’s art storage facility in Queens. Although Lovers calls for a 32 x 32' space for proper installation, this was the best we could do for a basic assessment. After two days of combing through manuals and carefully wiring the various components to one another, we were ready to power on the artwork for the first time in decades....

The class was tasked with understanding and documenting the following: What is the anatomy of the artwork? How does it work? What condition are its various components in? What components are at risk of failure? Where can we source backups and/or replacements for the exact components used by the artist, and if exact replacements are not available, which components have significant aesthetic impact on the work beyond mere behind-the-scenes utility?...

The original LCD video projectors and the behind-the-scenes control hardware needed to be replaced due to their instability and rarity. This meant a full-on re-implementation of the original control and timing hardware and software would be necessary. Additionally, the NYU MIAP students’ research had revealed several gaps in the installation documentation. There were many unknowns regarding the parameters for successful installation, questions we knew we could only answer by working with Shiro and Yoko Takatani, Kyoto-based members of the Dumb Type artist collective and performance group, of which Furuhashi had been a pivotal member. Due to his battle with AIDS, Furuhashi was frequently hospitalized during the creation of Lovers, and Shiro Takatani was responsible for much of the artwork’s technical execution. His input would be critical in our efforts....

Our aim was to replace the at-risk components, translating the work to more stable technologies, while prioritizing two essential tenets of conservation — minimal intervention and reversibility....

At first glance, the hardware connecting the PCs to the robotics was completely incomprehensible. How did it work? There was only one way to find out. We needed to use analytical and diagnostic tools to reverse engineer exactly what the PCs were doing....

Now we had the score, and we knew how to perform it, but that still was not enough; all of this documentation was very scientific and precise, but it didn’t tell us how the work felt. Furthermore, there was still the PC that contained no plainly readable metadata, only an impenetrable binary file. How could we reverse engineer the robotics and the behavior of Furuhashi’s slide projectors and interactive video? Observation and careful documentation were the answer. I proceeded to spend hours upon hours running the original system, carefully watching and listening to the robotics, while also capturing video and audio documentation. In the end, it was this direct observation that allowed us to reverse engineer the basic algorithm....

Once we had completely documented all of Shiro’s knowledge regarding alignment, lighting, and sound, he told us it was time to move on to the refinement and correction of the motion and timing of the robotics. With puzzled looks on our faces, we reminded him of our quantitative proof that we had reproduced the timing and motion of the original control software within a completely imperceptible 0.0002-second margin of error. Smiling, and ever patient, Takatani, who had stewarded this work for years, explained that the timing of Lovers had been reviewed and refined nearly every time the work was installed. He suggested, therefore, that although we had perfectly reproduced the behavior, timing, and motion of the final snapshot of the artwork as it existed when it was collected, it was now time to continue its active life, and carefully refine the motion as Furuhashi would have wished. ...

Just as the original equipment that controlled Lovers had aged, obsolesced, and become unusable, so will our newly restored solutions. The field of conservation is continually evolving, not merely technologically, but philosophically and ethically. The day may come when our work here seems somehow wrong or misguided, so it is our job as responsible conservators to ensure that we produce the requisite documentation, ensuring that our work is truly reversible.
archives  preservation  digital_preservation  digital_art  emulation  ontology  pedagogy  reverse_engineering  materiality  media_archaeology  methodology  exhibition 
2 days ago
Learning to Teach/Teaching to Learn II - Google Docs
With the Spring semester approaching fast, the second edition of the Learning to Teach mini-conference returns on January 15, 2017, in New York City. For many educators, January is the perfect time to review the last year, write syllabi, and prepare for new classes. Organized by the School for Poetic Computation in partnership with the Processing Foundation, this day-long conference is an open forum for educators teaching computer programming in creative and artistic contexts. The morning session is a series of talks from experienced educators on approaches for teaching effectively and strategies for assessment and feedback. In the afternoon, participants will be invited to workshop sessions to discuss curriculum development and environments and tools for learning. Together, we will explore the intersection of pedagogy and creative practice, and provide an opportunity to share ideas for another year of teaching ahead.

video: https://www.youtube.com/watch?v=D7-m6NJ90RE
pedagogy  teaching 
2 days ago
Iron Mountain's Butler County mine expands to hold data secure | Pittsburgh Post-Gazette
Iron Mountain can show off plenty of these rooms across more than 200 acres of underground space carved into an abandoned limestone mine in Butler County. The facility — famous for its geology and for holding some of the most precious pieces of paper and film in America — lately has been installing large racks of blinking computer servers that stretch as far as the eye can see.

The Boston-based information management company that owns the mine has been advancing deeper into the shafts to serve health care and insurance businesses, financial institutions and tech companies looking for the safest place to store their irreplaceable digital information.

By this spring, Iron Mountain expects another 11 acres of the former mine to be in use by clients storing digital data.

Iron Mountain portrays its mine as optimal for businesses that want the highest level of security at a reasonable price. The security comes in the form of armed guards and metal detectors at the entrance all employees and visitors walk through....

It also comes with the 20-foot-thick seam of limestone — bound by layers of impermeable shale rock — that could largely withstand any explosion. (Slight imprints from dynamite blasting can still be seen on the walls.)

And security is found in digital defenses: The facility’s computer system is entirely disconnected from the internet, and its computers won’t allow anyone to plug in an external hard drive.

The company touts its client base of highly regulated and sensitive companies that have bought into those assurances. In fact, the federal government uses a significant part of the mine, employing most of the 2,000 workers who enter and leave the facility each day.

When Iron Mountain purchased the facility in 1998, much of the storage was used for paper and film — patents, motion pictures, Social Security applications filed by every resident of the United States, pension records, boxes of business records....
Mr. Hill pointed to a copper/lead door installed for a large insurance company that was concerned about electromagnetic pulses, which can be generated by terrorists or even the natural environment and could cripple equipment. (In addition to the door, a study conducted later at the mine proved that limestone layers naturally shield such waves.)
preservation  archives  mining  underground 
2 days ago
Wayne Barrar photographs renovated mines and industrial sites in his series, “Expanding Subterra.”
Wayne Barrar had long been photographing mines when he started to wonder what became of them after they were depleted. As he found while creating his series, “Expanding Subterra,” many are well suited to be transformed into other types of spaces, including offices, libraries, and even paintball fields.

“The major benefits of these sites are their security and their stable, surprisingly dry and mild environment. They are cheap forms of industrial architecture,” he said via email.
mines  storage  photographs  underground 
2 days ago
The Digital Life: DNA as Data Storage
On this episode of The Digital Life podcast we discuss how bio-inspired technology is beginning to intersect with information technology in big ways. With the exponential increase of digital data, we face an ongoing problem of information storage. Today most digital information is stored on media that will expire relatively quickly, lasting a few decades at most. Because of this, we require new methods for long-term data storage, and biotech might just have the answer. DNA could be the storage medium of the future: it can last thousands, potentially even tens of thousands, of years. And the tech industry has taken notice. For instance, last month Microsoft agreed to purchase millions of strands of synthetic DNA from San Francisco-based Twist Bioscience to encode digital data. Of course we may be years away from a commercial DNA storage product, but the potential for a revolutionary, even disaster-proof medium is there.
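To make the premise concrete, here is a minimal sketch of the core idea behind DNA storage: mapping bits onto the four nucleotides, two bits per base. This is an illustration only, not the encoding Microsoft and Twist Bioscience actually use, which adds error correction, addressing, and constraints against long single-base runs.

```python
# Minimal illustrative sketch of DNA data storage: two bits per base.
# Not a production scheme -- real encodings add error correction,
# addressing, and avoid long runs of a single base.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA strand, four bases per byte (MSB first)."""
    return "".join(
        BASE_FOR_BITS[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def decode(strand: str) -> bytes:
    """Invert encode(): fold each run of four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"archive")
print(strand)  # CGACCTAGCGATCGGACGGCCTCGCGCC
assert decode(strand) == b"archive"
```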
storage  archives  DNA  biomedia  preservation 
2 days ago
‘Smart Cities’ Will Know Everything About You - WSJ
If the Internet age has taught us anything, it’s that where there is information, there is money to be made. With so much personal information available and countless ways to use it, businesses and authorities will be faced with a number of ethical questions.

In a fully “smart” city, every movement an individual makes can be tracked. The data will reveal where she works, how she commutes, her shopping habits, places she visits and her proximity to other people. You could argue that this sort of tracking already exists via various apps and on social-media platforms, or is held by public-transport companies and e-commerce sites. The difference is that with a smart city this data will be centralized and easy to access. Given the value of this data, it’s conceivable that municipalities or private businesses that pay to create a smart city will seek to recoup their expenses by selling it.

By analyzing this information using data-science techniques, a company could learn not only the day-to-day routine of an individual but also his preferences, behavior and emotional state. Private companies could know more about people than they know about themselves....

What degree of targeting is too specific and violates privacy? Should businesses limit the types of goods or services they offer to certain individuals? Is it ethical for data—on an employee’s eating habits, for instance—to be sold to employers or to insurance companies to help them assess claims? Do individuals own their own personal data once it enters the smart-city system?

With or without stringent controlling legislation, businesses in a smart city will need to craft their own policies and procedures regarding the use of data. A large-scale misuse of personal data could provoke a consumer backlash that could cripple a company’s reputation and lead to monster lawsuits. An additional problem is that businesses won’t know which individuals might welcome the convenience of targeted advertising and which will find it creepy—although data science could solve this equation eventually by predicting where each individual’s privacy line is.
smart_cities  big_data  privacy 
3 days ago
India’s Digital ID Rollout Collides With Rickety Reality - WSJ
India’s new digital identification system, years in the making and now being put into widespread use, has yet to deliver the new era of modern efficiency it promised... The system, which relies on fingerprints and eye scans to eventually provide IDs to all 1.25 billion Indians, is also expected to improve the distribution of state food and fuel rations and eventually facilitate daily needs such as banking and buying train tickets... The government began building the system, called Aadhaar, or “foundation,” with great fanfare in 2009, led by a team of pioneering technology entrepreneurs. Since then, almost 90% of India’s population has been enrolled in what is now the world’s largest biometric data set... But the technology is colliding with the rickety reality of India, where many people live off the grid or have fingerprints compromised by manual labor or age....

An Aadhaar ID is intended to be a great convenience, replacing the multitude of paperwork required by banks, merchants and government agencies. The benefits are only just beginning, backers say, as the biometric IDs are linked to programs and services.

But in rural areas, home to hundreds of millions of impoverished Indians dependent on subsidies, the impact of technical disruptions has already been evident.
infrastructure  India  identity  privacy  informal_infrastructure  authentication 
4 days ago
The Book As
First, we have the alphabet. Then scrolls. Then the codex. The codex has endured through history since around the 2nd century A.D. By themselves, these codices thrived in areas like religion. Then, around 1450, the printing press came along. This new technology changed the codex forever. Fast forward to the present day, and the printing press seems tedious, even archaic. So how has the book changed from its inception to our ever-changing digital age? Take a look through each section in the table of contents to see how certain authors/artists have altered the book in incredibly varied ways to complement the diversity of the digital age.
books  book_art 
5 days ago
Empathy as Faux Ethics - EPIC
The word “empathy” comes from the German Einfühlung, meaning “in feeling,” and the Greek empatheia, meaning “a passion or state of emotion,” adopted from em, an offshoot of en, or “in,” and pathos, “feeling.” Pathos was originally used in art theory to indicate the idea that appreciation for a piece of art depends on the viewer projecting themselves into the piece.

The meaning of empathy has shifted in design discourse: designers project themselves into the other’s perspective not just to appreciate their views, but also to turn that understanding into design interventions. There is a productive mode to empathy that sets an ethical standard for designers to act on their knowledge—to discover and solve the other’s problems.

This model has several dangers. It sets up a framework in which empathy becomes a way to further separate the ones who design (professionally) from those who do not (I am deliberately avoiding labels such as “designers” and “users”). It assumes that “The Designer” possesses a unique ability to access the psyche of “The Other.” It’s no wonder that design is so often viewed as a self-aggrandizing profession. The model also assumes that the insight acquired by empathizing gives The Designer sufficient understanding to define and resolve The Other’s problems—even the world’s problems....

Despite the limitations of empathy as a practice that can distance designers from the subjects of design (we should not forget design’s ability to subjugate), there are applicable use cases, especially around health care and social justice issues. Empathy in commercial design, however, is suspect. Empathy often takes form as a subtler way of othering in nonprofit and government contexts, but I am picking on commercial design here simply because the ability to care for something not immediately profitable is so foreign to most businesses....

Empathy is applied retroactively to fit a business-centric product into a human-centric frame. It becomes an ethical practice designers use to feel better about the potentiality of making superfluous things that no one actually needs. But no matter how one justifies it, empathy for commercial ends is simply marketing. Does anyone have a real need to be sold things? Sustainable designs will never be reached by empathy alone.

Back to our coffee shop. Here’s a standard solution: Keep the bathroom door locked and require people to ask for a key available only to paying customers. This solves a discrete problem for the coffee shop. After all, how can one business possibly take on an issue like inequality or homelessness? But it does more; it actively ignores the larger, systemic responsibilities the business has to the community. By empathizing with one group of people, we necessarily exclude another...

The crux of human-centered design is that human needs should be considered before business and technological needs. If a design does not meet a defined human need, then its business viability and technical feasibility don’t matter. This human-business-technology model ignores other components of design, such as sustainability, ethics, and egalitarianism. One might argue that these considerations can be wrapped up in the “human” part, but in practice, surface level understandings of empathy tend to dominate over broad definitions that might include more politically infused ideas.

This tendency has to do with emphasizing the individual over the collective, thus reinforcing deep-seated notions of anthropocentrism that run through the history of western epistemology. Empathy does not consider ecological sustainability because human-centricity forecloses on ecological thought, as argued by actor-network theory, deep ecology, or, if you want some really fun reading, anti-civilization and anarcho-primitivism.
empathy  design  teaching  pedagogy  ethics  design_process 
5 days ago
I finally stepped out of my progressive bubble—and now I understand why people hate “the liberal elite”
For the first time in my life, I was on the outside of the so-called liberal bubble, looking in. And what I saw was not pretty. I watched as many of my highly educated friends and contacts addressed those who disagreed with them with contempt and arrogance, and an offensive air of intellectual superiority.
It was surprising and frustrating to find myself lumped in with political parties and ideologies I do not support. But it also provided some insight into why many liberals seem incapable of talking with those who hold different opinions. (This is, broadly speaking, not just a liberal problem.) In so much of what I read, there was a tone of odious condescension, the idea that we “no” voters were perhaps too simpleminded or too uninformed to really grasp the situation....

I suspect that the sudden popularity of the term populism has led to a similar lack of respect and curiosity for opinions we disapprove of. It may even betray a fundamental belief, inadvertent or explicit, that the populus is somehow lesser—less critical, less acute, and easier to sway.
But it is not. Liberals may be heavily represented in the media, the centers of culture (popular, and otherwise), and in academia. But unless we are able to start learning how to talk to people unlike us, we’ll likely keep losing. It is not the only reason for the current political polarization—but it is one we can all work to address.
politics  elitism  populism 
7 days ago
Your brain does not process information and it is not a computer | Aeon Essays
Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer....

here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’)....
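As a concrete aside on that encoding step (my sketch, not the essay’s), here is a word becoming bytes, and each byte a pattern of eight bits:

```python
# The encoding step the essay describes: a word stored as character
# codes (bytes), each of which is a pattern of eight bits.
word = "brain"
for byte in word.encode("ascii"):
    print(f"{chr(byte)} -> {byte:3d} -> {byte:08b}")
# b ->  98 -> 01100010
# r -> 114 -> 01110010
# a ->  97 -> 01100001
# i -> 105 -> 01101001
# n -> 110 -> 01101110
```

The essay’s argument, below, is that nothing like these explicit bit patterns has ever been located in neural tissue.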

computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?...

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.

This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. ...But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge....

The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
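Rendered in first-order terms (my formalization; the essay states it only in words), the invalidity is easy to see, since both premises constrain computers alone:

```latex
% C(x): x is a computer; I(x): x can behave intelligently;
% P(x): x is an information processor.
\forall x\,\bigl(C(x) \rightarrow I(x)\bigr) \quad\text{(premise 1)}\\
\forall x\,\bigl(C(x) \rightarrow P(x)\bigr) \quad\text{(premise 2)}\\
\not\vdash\; \forall x\,\bigl(I(x) \rightarrow P(x)\bigr) \quad\text{(conclusion does not follow)}
% Any model with an object b where I(b) holds but C(b) and P(b) fail
% satisfies both premises and falsifies the conclusion -- on the
% essay's account, a brain is exactly such a counterexample.
```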

Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly...

The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?...

A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world....

the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.

One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. ...

To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history...
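A back-of-envelope tally (my arithmetic, not the essay’s) suggests the scale involved: even one byte per protein, per connection, for a single instant comes to

```latex
% One snapshot of the system, at one byte per protein per connection:
10^{14}\ \text{connections} \times 10^{3}\ \text{proteins} \times 1\ \text{byte}
  \;=\; 10^{17}\ \text{bytes} \;\approx\; 100\ \text{petabytes}
```

and that is before the moment-to-moment dynamics or the uniqueness of each brain enter the picture.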

Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions.
cognitive_science  brains  computers 
7 days ago
White Spots
Do you ever desire to escape from the information flows surrounding us?
The White Spots App visualizes the invisible electromagnetic cloud that we live in and offers a way out.
Use the App with Google Cardboard to travel from the online to the offline world in Virtual Reality, or use the White Spots world map to travel to places off the grid near you.

In VR mode, the network scanner shows the invisible digital signals around you in real time and takes you on a journey to the end of the Internet in immersive 360° stories.
mapping  electromagnetic_waves  telecommunications  escape  connectivity  making_visible_invisible  data_visualization 
7 days ago
Memory of Mankind: All of Human Knowledge Buried in a Salt Mine - The Atlantic
Martin Kunze wants to gather a snapshot of all of human knowledge onto plates and bury it away in the world’s oldest salt mine.

In Hallstatt, Austria, a picturesque village nestled into a lake-peppered region called Salzkammergut, Kunze has spent the past four years engraving images and text onto hand-sized clay squares. A ceramicist by trade, he believes the durability of the materials he plies gives them an as-yet unmatched ability to store information. Ceramic is impervious to water, chemicals, and radiation; it’s emboldened by fire. Tablets of Sumerian cuneiform dating from earlier than 3000 B.C.E. are still around today.

“The only thing that can threaten this kind of data carrier is a hammer,” Kunze says.

So far, he has created around 500 squares, which he allows anyone to design for a small donation. Many preserve memories of the lives or work of people involved in the project. Around 150 of the tablets showcase items from collections in Vienna’s museums of Natural History and Art History. Some local companies have been immortalized. One researcher’s CV now lies in the vault.

But Kunze aims to expand the project, to copy research, books, and newspaper editorials from around the world—along with instructions for the languages needed to read them. For this, the clay squares he’s currently using would take up far more space than could be set aside for such an audacious undertaking. So Kunze also has conceived of a much thinner medium: He will laser-print a microscopic font onto 1-mm-thick ceramic sheets, encased in wafer-thin layers of glass. One 20 cm piece of this microfilm can store 5 million characters; whole libraries of information—readable with a 10x-magnifying lens—could be slotted next to each other and hardly take up any space.
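The claimed density survives a quick sanity check (my arithmetic; it assumes a square 20 cm × 20 cm sheet, which the article does not specify):

```latex
% Area per character, then character pitch:
\frac{(200\ \text{mm})^2}{5 \times 10^{6}\ \text{characters}}
  = 8 \times 10^{-3}\ \text{mm}^2/\text{character}
\quad\Rightarrow\quad
\text{pitch} \approx \sqrt{8 \times 10^{-3}}\ \text{mm} \approx 0.09\ \text{mm}
```

A 10x lens brings a 0.09 mm glyph up to roughly newsprint size, consistent with readability under a hand magnifier.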

The goal of the project, which he calls the Memory of Mankind, is to build up a complete, unbiased picture of modern societies. The sheets will be stored along with the larger tablets in a vault 2 km inside Hallstatt’s still-active salt mine. If all goes according to plan, the vault will naturally seal over the next few decades, ready for a curious future generation to open whenever it’s deemed necessary.

To Kunze, this peculiar ambition is more than a courtesy to future generations. He believes the age of digital information has lulled people into a false sense that memories are forever preserved. If today’s digital archives disappear—or, in Kunze’s view, when they do—he wants to make sure there’s a real, physical record to mark our era’s place in history....

Much of this information goes into digital storage—ranging from servers on personal computers to colossal data centers, like the NSA’s facility in Utah.... But this method of storage has inherent problems. Digital space is finite and expensive. Digitally stored data can become corrupted and decay as electrical charges used to encode information into binary bits leak out over time, altering the contents. And any enduring information could be lost if the software to access it becomes obsolete. Or a potent, well-timed coronal mass ejection could cause irreparable damage to electronic systems.
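
That kind of silent corruption can at least be detected, if not prevented, by storing content hashes alongside the data and re-checking them periodically. A minimal sketch in Python (the directory and file names are illustrative, not any real archive’s layout):

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        # Hash in chunks so large files don't exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(root):
        return {str(p): sha256_of(p) for p in Path(root).rglob("*") if p.is_file()}

    def verify(manifest):
        # Report files whose bits have silently changed, or that have vanished.
        return [p for p, digest in manifest.items()
                if not Path(p).exists() or sha256_of(p) != digest]

    manifest = build_manifest("archive")  # illustrative directory name
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
    print("corrupted or missing:", verify(manifest))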

“There’s no getting around the risk of catastrophic loss in our culture,” says Robert Darnton, the librarian emeritus at the Harvard University Library. “Digital texts are much more fragile than printed books.”...

As the project slowly starts to take shape, some are worried that its own place in collective memory may ebb over time. “The thing I don’t like about the time capsule is the sense that it’s frozen,” says Richard Ovenden, the director of the Bodleian Libraries at the University of Oxford. “Information is much more likely to be kept if it’s used. The danger is that [Kunze’s project] will end up being forgotten.”

To avoid this, Kunze plans to distribute ceramic tokens around the world to everyone who either funds, contributes to, or advises on the project. ... The location of the mine will be carved onto each token, and it will require geological knowledge similar to our own to find it, especially as land shifts with time. This would be a safeguard against unwanted discoveries if for some unpredicted reason—nuclear war, say—human civilization disappears or regresses to the Stone Age....

Kunze has teamed up with the Human Document Project, another preservation scheme, and University College London’s Heritage Futures project, to co-organize the event.
archives  preservation  geology  chemistry  data_centers 
7 days ago
Facilities for Correction - e-flux Architecture - e-flux
There is a familiar echo in this recourse to a reductive totality that will change everything for good. Far from simply a production of the digital, the algorithm and the genealogy of machines of thought it belongs to have been correcting us for a very long time. The development of a system that might “calculate” the truth is arguably as old as the idea of the machine itself and certainly inseparable from it. Ramon Lull housed his Ars Combinatoria in a notional set of revolving paper wheels long before Gottfried Leibniz’s ratiocinator gave those same wheels materiality (brass) and cogs that would count their way to Charles Babbage, if not to the contemporary digital. But there is more to this entanglement of machine and thought and their mutual bodies: in the historic search for an algorithm that would correct us we have endeavored to equal, and failing that, mimic, the machine long before it even existed. That is, the machine of thought is our most treasured and cunning mirror; the “quest” for Universal Learning (the holy grail of today), simply a digital reformulation of the early Enlightenment question of a Universal Language....

According to Leibniz’s arch formulation, a Universal Method was to “Help us to eliminate and extinguish the controversial arguments which depend upon reason, because once we have realized this language, calculating and reasoning will be the same thing.” That is, in providing an omniscient system able to calculate the truth, it would act as the bridge that would connect what had been (and still was) God to what (no doubt) will be the Master Algorithm. By strategically recasting Leibniz and his work on logic as poised at the end of the Renaissance rather than at the beginning of Modernism, Rossi exposed a previously concealed complex network of influences that directly connected fourteenth century cabala to seventeenth century Real Characters, and ancient Ars Memoria and rhetoric to Leibniz’s logical “calculus.” That is, Leibniz’s invention of what proved to be the progenitor of modern logic also emerged precisely from within rather than simply in reaction to a milieu that was antithetical to modernity.
algorithm  computer_history  cognition  language  machine_learning 
8 days ago
Reverse the Perspective: It’s Time to Track the Development of Embodied Technologies and their… – Medium
We developed The Fabric of Digital Life (FABRIC) to track the development of embodied technologies and to create a space for realizing narratives around technological progress and society. FABRIC is asking: how can we help the public understand and track these embodied technologies as they grow in importance?
FABRIC is an online digital archive for storing media related to embodied technologies — things like patents, news releases, instructional videos, and art. The archive allows users to track, catalog, and view artifacts related to human-computer interaction platforms, designs, and ideas, including images, videos, texts, websites, and data sets that document emerging trends. Curated sub-collections are hosted on the archive that relate to a variety of themes, including ethics, surveillance, and vulnerable populations.

The underlying motivation is to provide a tool for illustrating the diverse, shared origins of embodied technology platforms, separate from profit-driven inventors and companies. A secondary motivation was to provide a space for thinking about social issues and embodied technologies. The archive allows users to browse keywords (“smart watch” for example) and through a customized metadata scheme users can collect and catalogue embodied technologies and the discourse that surrounds them.
digital_archive  ideology  technology 
8 days ago
Research documents life impact of attending a liberal arts college
Graduates who reported that in college they talked with faculty members about nonacademic and academic subjects outside class were 25 to 45 percent more likely (depending on other factors) to have become leaders in their localities or professions. Those who reported discussions on issues such as peace, justice and human rights with fellow students outside class were 27 to 52 percent more likely to become leaders.
Graduates who reported that students took a large role in class discussions were 27 to 38 percent more likely to report characteristics of lifelong learners than others were. Students who reported most of their classwork was professionally oriented were less likely to become lifelong learners.
Graduates who reported that as students they discussed philosophical or ethical issues in many classes, and who took many classes in the humanities, were 25 to 60 percent more likely than others to have characteristics of altruists (volunteer involvement, giving to nonprofit groups, etc.).
Graduates who reported that as students most professors knew their first names, and that they talked regularly with faculty members about academic subjects outside class, were 32 to 90 percent more likely to report that they felt personally fulfilled in their lives. ...

academia  teaching  advising  liberal_arts 
8 days ago
The Factory of Fakes - The New Yorker
As I examined the facsimile, I was prepared to summon my inner Walter Benjamin and bemoan the mechanical reproduction’s lack of an “aura.” But there were no Disneyfied abominations: the baboons, with their playful upturned tails, looked as mischievous, mold-mottled, and ancient as the originals. I could make out the spot where, in a long brushstroke outlining a baboon’s crest, the artist had just begun to run out of paint. In their brutal objectivity, the 2009 scans had recorded beauty and blemish alike. “That’s printed dust,” Lowe joked, pointing at a baboon that had been painted on a particularly bumpy area. “It’s not something that will just come off.” The only thing that was perceptibly modern was the absence of a musty odor. Lowe noted that the room’s sound wasn’t right, either. He hopes to enlist engineers to record the “acoustic signature” of Tutankhamun’s tomb, so that he can re-create it inside the facsimile.

Factum began operations in 1998, when it was becoming clear that 3-D printing was a revolutionary tool. The workshop has made millions of dollars by fabricating sculptures for artists—Anish Kapoor, Maya Lin, Marc Quinn—who sometimes require technological assistance to realize their visions. Lowe appears to spend nearly all his profits on fanciful-seeming projects that, in aggregate, mount a serious case that the facsimile can play a central role in art conservation. In order to raise funds for his preservation projects, he established a nonprofit wing, the Factum Foundation.

A digitally recorded copy, Lowe argues, can be both a lode of “forensically accurate information” and a vehicle for provoking a “deep emotional response.” Because an art work can be scanned without physical contact, the facsimile process makes traditional conservation efforts—from repainting to varnishing—seem like an exalted form of graffiti. A facsimile also allows the public to see objects that are nearly impossible to approach in person: Factum has recorded and reassembled everything from a Renaissance painting outside the Pope’s bedroom to rock carvings on a remote plateau in Chad. Lowe has a boyish indifference to danger, and colleagues must constantly dissuade him from, say, driving into Libya with a scanner in the trunk.

Factum made its reputation in 2007, with a replica of Paolo Veronese’s monumental painting “The Wedding at Cana,” which Napoleon presented to a new museum, the Louvre, after ripping it off the wall of a refectory in Venice in 1797. The painting’s place in the refectory, which was designed by Palladio, had never been filled; Lowe installed his copy in the exact spot. Factum’s noninvasive protocol, in which their scanner’s lasers captured every whorled brushstroke without touching the canvas, was in stark contrast to the Louvre’s restoration of the painting, in the nineteen-nineties, during which it accidentally fell onto some scaffolding and was gored in five places....

Factum’s Web site contains dozens of treatises questioning the aesthetic assumptions behind our disdain for fakes. Lowe proclaims that his workshop seeks to “redefine the relationship between the original and the copy.” He told me, “We have this weird obsession with the original, as though it were static and immortal, even though we know that, like everything else, it journeys through time. We know this every time we look in the mirror! But why is it that we engage in these efforts to try to keep an art object looking the same, especially when those efforts so often fail?” Is seeing “The Wedding at Cana” in the Louvre—where it occupies the same noisy salon as the “Mona Lisa”—a richer experience than seeing the facsimile in the painting’s original location? When Italians witnessed the unveiling of the Veronese replica, in the creamily lit space where the artist intended his masterpiece to be seen, many of them wept. Bruno Latour, the French theorist, championed the “Cana” project, and he and Lowe later wrote an essay about it, in which they referred to a “migration of the aura” from original to copy.

Some scholars remain wary. Sybille Ebert-Schifferer, of the Max Planck Institute for Art History, in Rome, told me that the “Cana” installation is valuable, because it makes Palladio’s refectory “aesthetically complete.” But she warned that a visitor “will not learn anything about the original painting,” adding, “A painting is not just made of the surface, but of a multitude of layers and changing pigments which are part of its individuality and that of its creator. It has not only an iconography but also its own material personality. It is a horrifying vision to imagine that we could be faced with a horde of never-aging clones.”

But clones have their advantages. AbdelGaber, the antiquities official, told me that tourists in Luxor are typically allowed to view the actual Tut tomb for about “ten to fifteen minutes.” Many Egyptologists expect that Tutankhamun’s resting place, like many others in the Valley of the Kings, will one day be closed to tourists, in order to save it from destruction. But they can pant as long as they like inside the fake tomb, which is built beneath the same scalding sun, and set at the same angle.

Lowe pointed out a few divergences between Tutankhamun’s tomb and his reconstruction of it. Most notably, the facsimile contains a “virtual restoration” of a painted panel that used to be part of the south wall. Howard Carter largely destroyed that wall when he broke into the sealed room. A boulder-size fragment, now missing, was photographed, in black-and-white, soon after Carter’s discovery. Factum technicians scanned the photograph, then colorized it and added relief, by extrapolating from topographical data extracted from similar areas in the tomb. Placing the missing panel inside the replica led Lowe to notice that its surface was far less deteriorated—visual proof that tourism is rapidly damaging the surviving walls....

AbdelGaber told me that his country is grateful to Factum. “When we make a replica, we can protect something very fragile—that’s good for Egypt and for culture,” he said, adding, with a laugh, “The Russian people! They go inside the tombs and touch the color. We’ve talked about it with their Ambassador. But, with only one or two guards, you can’t watch all the tourists.”

By educating visitors about their impact, Lowe argues, tourism can “become a positive force in the preservation of the past.” He is training Egyptians in his scanning methods, and he plans to set up, in Luxor, a digital-fabrication studio modelled on the glass-blowing workshops in Murano....

Quercia replicas had been made with a 3-D printer—a device that expels pixel-size globules of synthetic resin, which harden and amass into a complex shape. “That’s one way to make a 3-D object,” he said. “We generally prefer using C.N.C.s”—computer-numerical-control milling machines, which carve into a block of material. For large objects, this process is more accurate. “With current technology, subtracting is better than adding,” he said. Across the room, a hulking milling machine was applying rotary cutters to a slab of high-density polyurethane, which, Lowe said, “is the material that will hold the finest information.” (Stone cannot be cut as precisely.) The machine’s robotic head darted a few inches above its plastic target, lunging at the surface like a fencer....

Lowe agrees with Michel about one thing: locals should be enlisted to digitally record cultural-heritage sites, as a safeguard against iconoclasm or accidental damage. Although Lowe is fond of his lasers, sophisticated digital replicas of monuments can now be generated with S.L.R. cameras—if thousands of shots are taken from every possible angle, in consistent light. This method, called photogrammetry, “is really about to take off, and replace 3-D scanning as the state of the art for facsimiles,” Lowe said. “The algorithms have gotten so much better.” The latest technology can achieve a resolution of a hundred microns, which is what the typical human eye can discern at reading distance. “If you can get the same quality data and object by using photographs, not lasers, the recording process is cheaper and a lot faster.”
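
For a sense of what those thousands of photographs are for: photogrammetry pipelines begin by finding the same surface points in overlapping images. A minimal sketch of that matching step using OpenCV, with placeholder file names; this is the generic technique, not Factum’s actual toolchain:

    import cv2

    img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
    img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()  # scale-invariant keypoints and descriptors
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # For each keypoint in one image, find its two nearest neighbors in the
    # other and keep only unambiguous matches (the standard ratio test).
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(len(good), "matched surface points")

A full pipeline triangulates such matches across hundreds of photographs into camera positions and, from there, a dense 3-D surface model.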
digital_archaeology  restoration  preservation  materiality  authenticity  archaeology 
10 days ago
An Artist Gives Us a Vision of the Future Through Books
Awful review, sort-of interesting work:

The 38-year-old artist has stripped encyclopedias, law books, and other hardback reference tomes down to their covers and placed them in a grid, in so doing stripping them of their promise of our in-depth comprehension of the world....

The Chicago-based artist’s books do represent abstract arguments concerning history and the acquisition of collective knowledge: he has essentially destroyed historical accounts and remade them, to say it is possible to construct new histories, new edifices of knowledge. He’s done so by tearing the covers off books, but also by obscuring all but a few select words on the faces and spines. If I spend a little while looking I find titles: Race Relations; Ethnicity in the United States; The Book of Knowledge Annual. These words become clandestine operators poking out from Jones’s simple color schemes. Before his intervention, these words were swallowed up in a cascade of language....

Jones both destroys books and keeps them in memory, carefully and attentively sewing book covers onto canvases to make the case that some sort of collective history can be lovingly remade, be rethought, reimagined — with different emphases.
book_art  books  epistemology  historiography 
10 days ago
No Limit and Cash Money Music Videos Provide a Visual History of Pre-Katrina Public Housing Projects - CityLab
We have an audio-visual record of life in New Orleans’ public-housing landscape, thanks to the music videos of No Limit Records and Cash Money Records. The artists of these two New Orleans-based hip-hop labels blew up in large part due to the Straight Outta NOLA-type stories they offered the world through their songs and videos.

“The emergence of No Limit and Cash Money Records helped to bring New Orleans rap and hip-hop from a city, state, and regional audience to a nationwide audience in the late 1990s,” says Amber N. Wiley, an architecture and American Studies professor who examined this phenomenon while leading the “Sites and Sounds” community public-history project in New Orleans from 2012 to 2014. “The rise of the labels and their musical stylings is heavily indebted to the musical traditions of the city. The locations celebrated in the music, however, have all but disappeared in the post-Katrina urban-planning frenzy.”
media_space  sound_space  music_scenes  music_videos 
11 days ago
Claiming Your Right to Say No | Vitae
It’s hard for a scrupulous teacher to resist the fear that, in declining to write a recommendation, you may be torpedoing someone’s professional life. Ultimately, though, a student’s application materials will speak for themselves and the professional world will make its own judgment, fairly or not. Disappointment, even heartbreak, is a reality from which even the deserving can’t always be shielded. And you aren’t obligated to make a case for a student whom you can’t, in good conscience, support.

Articulating a policy on recommendation letters can save you (and your students) frustration and time. It can also improve your teaching and advising by helping students make decisions about their own professional preparation throughout their college careers.
advising  teaching 
11 days ago
The M.B.A. Classroom That Knows When You’re Bored - WSJ
This is no ordinary lecture hall. It’s the Instituto de Empresa’s “WOW” room, or the “Window on the World,” a place the Spanish business school believes is the future of the master’s in business administration.

The room is a physical space where faculty members deliver course modules to an audiovisual mosaic wall of students watching via video camera. A software application scans the video streams and runs a sentiment analysis to gauge students’ reactions. It can, for example, alert a lecturer if someone is losing interest, or is angry.

Even a lack of emotion from the students can suggest the lecturer’s material isn’t engaging. “That’s a good bit of feedback,” says Jolanta Golanowska, director of learning innovation at IE. She notes that faculty members can use the data to help fine-tune their lectures.
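
The article doesn’t say how IE’s software works, but a common approach to video sentiment analysis is to detect faces frame by frame and pass the crops to an emotion classifier. A minimal sketch, in which classify_emotion is a hypothetical stand-in for a trained model:

    import cv2

    # Face detector that ships with OpenCV.
    faces = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def classify_emotion(face_crop):
        # Hypothetical stand-in: a real system would run a trained
        # emotion-recognition model on the cropped face here.
        return "neutral"

    capture = cv2.VideoCapture(0)  # one tile of the video mosaic
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in faces.detectMultiScale(gray, 1.3, 5):
            label = classify_emotion(gray[y:y + h, x:x + w])
            print(label)  # aggregate over time; alert if "bored" dominates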
teaching  educational_media  pedagogy  sentiment_analysis  affect 
11 days ago
Does Information Smell? | by Riccardo Manzotti | NYR Daily | The New York Review of Books
we are in thrall to the analogy of the brain as computer. For example, a recent paper I was reading about the neural activity that correlates with the sense of smell begins, “The lateral entorhinal cortex (LEC) computes and transfers olfactory information from the olfactory bulb to the hippocampus.” Words like “input,” “output,” “code,” “encoding,” and “decoding” abound. It all sounds so familiar, as if we knew exactly what was going on.... the way they describe their experiments by way of a computer analogy—in particular of information processing and memory storage—can give the mistaken impression that they’re getting nearer to understanding what consciousness is.....

when dealing with the brain, we suddenly find that neurons are processing “information,” rather than chemicals....

information, or data, is not a thing. It’s an idea we stipulated because it served a certain purpose, but it doesn’t exist physically, as an entity in its own right in the causal chain. Brutally, when we look inside a computer, or a brain, we don’t see or even detect information. Or data. We see physical stuff: voltage levels in a computer, chemicals in the brain.

Parks: So what you’re saying is that everything that goes on in a computer or in a brain could be fully and properly described without resorting to words like information or data?...

when you describe what the brain or even a calculator does, everything can be exhaustively described in terms of causal processes, chemical releases, and voltage changes without ever using the word information....

Parks: But then what is information? How can Floridi make the claims he does? What part can information have in the consciousness debate?

Manzotti: Obviously there is the definition of the word in common use: “facts, data, communicated about something.” The bus leaves at six. Yesterday it rained. The cash machine is out of order. That meaning has been around in English since the fifteenth century.

Parks: And?

Manzotti: Then there is the technical IT definition established by the mathematician Claude Shannon in 1949. Shannon was concerned about achieving accurate communication through technological devices and described information as an estimate of the probability that a given channel would successfully transmit words, images, or sound between a source and a receiver....

Information here is simply the capacity of any channel to affect a causal coupling between two events, speaking and hearing, typing letters and reading them. It is not a thing between those events. If there is no one on the receiving end to hear the voice or read the letters then quite simply there is no information because we don’t have our two events.
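
For reference, Shannon’s formal definitions make the point explicit: information is a property of probability distributions defined across the sending and receiving ends of a channel, not a substance located in either one:

    % Entropy of a source: the average uncertainty per message, in bits
    H(X) = -\sum_{i} p(x_i)\,\log_2 p(x_i)

    % Channel capacity: the maximum mutual information between what is
    % sent (X) and what is received (Y), defined over the pair of events
    C = \max_{p(x)} I(X;Y)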

Parks: So what do neuroscientists mean when they talk of information processing in relation to the brain? For example, a mouse’s brain when the animal smells a piece of cheese....

Manzotti: The problem with the concept of “information” comes when we start to take it literally, as Floridi does. We start to imagine there really is a mental, non-physical stuff called information. A subtle dualism creeps in, as if the brain contained organic material on the one hand and this mysterious, immaterial “information” on the other. In fact Floridi speaks of moving from a materialist vision “in which physical objects and processes play a key role, to an informational one,” as if there were some sphere of existence that is not physical.

However, in its precise scientific usage—and certainly most neuroscientists would see it this way—“information processing” simply means that a physical system—a computer, or the human body, the brain—allows given events to pass along their causal influence to further events. ... The notion of information and information processing is then built on top of all that causation. It is a kind of shorthand for describing a causal chain so complex as to be beyond any visualization or easy explanation....

Parks: OK, let me try to sum up so far. The neuroscientists, for the most part internalists, continue to fill us in on the brain’s exceedingly complex chemical and electronic activity. Meantime the extended computer metaphor that they almost always employ conveys the impression that what is going on is not just organic, but “mental,” that the brain is producing consciousness, storing memories, decoding representations, processing data. So there is a general feeling of promise and expectation, but actually we get no nearer to an explanation of consciousness itself, since we are simply describing, with ever greater precision, what neurons organically do.

Manzotti: I’d agree with that. And perhaps add that maybe people are not unhappy with the situation: we get regular, often melodramatic updates on how marvelously complex we are and how clever scientists have become, while consciousness remains blissfully mysterious. In short, we get to feel very special all round....

Chalmers agrees that information, as Shannon construes it, lacks any phenomenal character (colors, smells, feelings), or indeed intrinsic meaning, in that a string of zeros and ones in a computer might mean anything. Yet he believes that the brain is basically a computational device crammed with information. So how do all those zeroes and ones, or some neuronal version of the same, become colors, sounds, pains, and pleasures? His solution is that information has a dual aspect—the functional aspect (the zeroes and ones that govern our behavior) and the phenomenal aspect that constitutes conscious experience (colors, sound, itches, whatever). He does not explain why or how this should be and admits himself that his position is basically dualist: information has two sides, one that science can deal with, neurons controlling behavior, and another that is, simply, consciousness.
information  consciousness  brain_science  epistemology 
12 days ago
Get Ready for Quieter NYC Subway Stations (Yes, It’s Possible) | WIRED
The line’s first phase, a 4.2 mile stretch of track buried 10 stories below the Upper East Side, opens in December. When the entire line eventually opens, it will stretch 8.5 miles and include 16 new stations. The Metropolitan Transportation Authority has hired Arup to make them easier on the ears.

Arup’s acousticians can’t set up shop in a subway station, so they built digital models of the Second Avenue subway using recorded sounds and measurements, some collected from existing stations. They’d use these models to make subtle changes to a station design, exploring the best way to minimize the din....

Arup’s plan to rethink the subway begins—where else?—with the track. The MTA is investing in a “low-vibration track” using ties encased in concrete-covered rubber and neoprene pads. It nixed joints between tracks in favor of a continuously welded rail that does away with the “badump, badump” of the wheels. That’s just the start.

“The big change is really in the finishes,” says Joe Solway, Arup’s acoustic lead on the Second Avenue subway project. Most subway stations are built with tile and stone, which bounce sound all over the place. MTA will line the ceilings with relatively absorbent rigid fiberglass or mineral wool—hardier versions of the pink, fluffy insulation in your attic—covered with a perforated metal or enamel sheet to keep it out of human hands. It’s like a Roach Motel for noise.

The ceiling will gently curve like others in New York, but it will direct sound toward the train instead of the platform. The speakers, the safety raison d’être of the whole subway silencing enterprise, will sit at 15-foot intervals, angled to holler directly at riders, says Solway, for ideal resonance and volume. Improved cables, sound-isolated booths, and clearer diction by those making announcements will further improve fidelity.
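
The physics behind “the big change is really in the finishes” is textbook room acoustics: reverberation time falls as surface absorption rises. Sabine’s approximation (a standard formula, not anything attributed to Arup’s own models) makes the tradeoff concrete:

    % Reverberation time in seconds; V = room volume (m^3),
    % S_i = area of each surface (m^2), alpha_i = its absorption coefficient
    RT_{60} \approx \frac{0.161\,V}{\sum_{i} S_{i}\,\alpha_{i}}

Tile and stone have absorption coefficients of only a few hundredths, while mineral wool and rigid fiberglass approach 1, so relining the ceiling alone can cut reverberation time severalfold and keep announcements intelligible.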
arup  acoustics  sound  media_city  subways  transportation  noise 
12 days ago
I Spy with my Machine Eye - e-flux Architecture - e-flux
I first flew from a balloon to decipher the war games below. On the ground they scrambled to trick me and inflate rubber tanks, build fake cities and paint trees on factory rooftops. Technologies of vision have always generated new technologies of camouflage. You have always found ways to resist new ways of seeing....

I am your remote eyes free from the tyranny of fixed location. I have been tasked to survey a new landscape and I haven’t seen the world like this before. Our technology is splayed out before us. From the ground it’s just a white mound of soil, but from above the earth comes alive with the colors of lithium electricity....

This is how I see the world for navigation. Its nuance, its subtlety, is processed as blank geometries, calibration markers and simple surfaces, like an animated cubist painting, where every meaningful inch is calculated so as to be effectively navigated, controlled and managed. You are just a surface that my sensors reflect off....

In the distance we can make out the tracery of markings scored across the surface of the earth. It’s not evidence of some ancient culture or a forgotten relic of the Nazca lines, but the traces of new tribes of remote sensing; the animal tracks of my orbiting eyes above. Satellite-mounted cameras come here to calibrate our lenses. The skin of the earth is a digital test pattern and, like a cave painting, these are the primitive markings of a new culture firmly on the rise.
drones  design_fiction  machine_vision  indexical_landscapes 
12 days ago
How to Become a Famous Media Scholar: The Case of Marshall McLuhan - Los Angeles Review of Books
Like most celebrity ascensions, McLuhan’s was the product of a conscious publicity campaign. Handlers, press agents, and impresarios worked together to make “McLuhan” a household name. He was packaged and promoted like a promising starlet, with multimedia gusto. Understanding Media garnered a few mainstream print reviews upon publication, but McLuhan’s break came in early 1965, when a pair of San Francisco prospectors — one, Gerald Feigen, a physician, the other, Howard Gossage, an ad-agency executive — “discovered” McLuhan and promptly arranged to visit the Canadian in Toronto. Feigen and Gossage were self-fashioned avant-gardists, using profits from their business consulting firm for “genius scouting”; the doctor read Understanding Media and alerted his partner. Together they plotted a full-fledged publicity rollout, starting with cocktail parties in New York City with media and publishing figures. The pair staged a weeklong “McLuhan Festival” that summer, with nightly parties and a rotating cast of ad executives, newspaper editors, mayoral aides, and business leaders in attendance.
Tom Wolfe, not yet famous as a prophet of the New Journalism, was there too, on assignment for the New York Herald Tribune’s Sunday magazine New York. He soon published a feverish profile (“What If He’s Right?”)...

The case of McLuhan raises intriguing questions about the relationship between academic celebrity and intellectual influence. There is good reason to doubt a simple one-to-one correlation: the most famous scholars in their own time are rarely the most respected and consequential in the long run. Indeed, fame may exact a penalty in reputational terms. Public visibility can come off as lightweight pandering. Media savviness and the ability to give good soundbite may signal an unserious mind to scholars who labor over footnotes. Academic celebrities, simply by winning the spotlight, are suspect, judged to be “media whores.” The sociologist Pierre Bourdieu, in On Television, made the point with unusual vigor, comparing media-friendly scholars to a Trojan horse. “Supported by external forces, these agents are accorded an authority they cannot get from their peers,” he wrote. Such scholars are “already, or are about to become, ‘failures,’” which explains their interest in screen time, “however precipitate, premature, and ephemeral.” Akin to collaborationists under the Occupation, they are the vessels by which “the laws of the market” contaminate otherwise autonomous scholarly fields....

McLuhan’s medium-is-the-message formalism has indeed provoked lots of important work in media studies. He’s the fountainhead for the modish “German media theory” that’s gaining fast syllabus traction in the English-speaking academy. The most interesting American media thinker, John Durham Peters, credits McLuhan as an “unmissable destination for media theorists.” In some ways, though, McLuhan was more a product of the media culture than its student. He seduced Esquire and the ad men (and later Wired) because what he had to say resonated with Americans already primed for the good news about technology. That’s no reason to stop reading him: McLuhan’s probes, taken as truth-indifferent provocations, really are good to think with. It’s just that the man — rewarded for closeting his gloom — is more instructive than his books.
media_theory  mcluhan  academic  celebrity 
12 days ago
Searching for Lost Knowledge in the Age of Intelligent Machines - The Atlantic
The corroded device still bore faded inscriptions and it appeared to have the guts of a clock, mechanics that didn’t make any sense. After all, the lump had been found among the wreckage of a ship that sailed the Mediterranean more than 1,000 years before timekeeping gearwork first appeared in Medieval Europe. When the ship went down, no one on the planet was supposed to have had complex scientific instruments—what was this thing?

It came to be known as the Antikythera Mechanism. In the decades that followed, with ever more sophisticated technology to guide them, researchers would begin to understand how the peculiar device once worked. Today, the mechanism is often described as the world’s oldest computer—more precisely, it seemed to be an analog machine for modeling and predicting astronomical and calendrical patterns. Even before it was lost, the device must have been a treasure. When it was new, the mechanism was a turn-crank marvel housed in a rectangular wooden case, like a mantel clock, with two dials on the back. Instead of having two hands to tell the time on the front, the mechanism had seven hands for displaying the movement of celestial bodies—the sun, the moon, Mercury, Venus, Mars, Jupiter, and Saturn. The planets were represented by tiny spheres that could themselves rotate, with the moon painted black and silvery white to depict its phases....

Using machines to find meaning in vast sets of data has been one of the great promises of the computing age since long before the internet was built. In his prescient essay, “As We May Think,” published by The Atlantic in 1945, the influential engineer and inventor Vannevar Bush imagined a future in which machines could handle tasks of logic by consulting large troves of connected data. His essay would prove instrumental in influencing early hypertext—which in turn helped shape the linked infrastructure of the web as we know it.

Bush envisioned sophisticated “selection devices” that would be able to comb through dense information and yield the relevant bits quickly and accurately. At the center of all this was what Bush called the Memex, his idea for a deep indexing system that could consolidate and search mammoth collections of information in various formats—including text, photocells, microfilm, and audio. The Memex, he argued, would be a technological solution to an almost existential problem: The totality of recorded human knowledge was constantly growing, but the tools for consulting this ever-swelling record remained “totally inadequate.” Instead, he looked to the intricate pathways of the human mind to inspire the architecture of a fantastical new system....

Just as Vannevar Bush envisioned, engineers are building computer models of neural networks, machines that mimic the elegance and complexity of human thought. But there are still many challenges ahead. Sourcing is a big one. Even a database built from tens of millions of well-vetted books and articles isn’t comprehensive. And there’s still the question of how the results from these new search engines ought to appear to the person searching. A simple graph that shows a connect-the-dots web of related resources and ideas is one way. A more sophisticated map-like interface is another—“like Google Maps,” Gramatica offers—but you’d still lose scale and context as you zoom in and out.

“In terms of how to visualize it, that is one of the biggest challenges. We need to move away from the list-of-links approach, like the traditional search engine, because otherwise you’re back to the same situation where you need to click, and read, and click, and another window opens, and another window, and another window—and you don’t let your brain see the whole connection.”...

In the case of the Lincoln report, a human researcher happened upon the document. In the future, such serendipity may not be necessary. A machine that scrapes vast catalogues of text for context would be able to comb archived collections at the item level. (Of course, this would require digitization of the physical document, but that’s another issue). “I don’t think machines are going to completely supplant us, but they’re certainly going to augment our ability to discover things,” said Sam Arbesman, a scientist who studies complexity and the future of knowledge. “There are going to be more and more of these human-machine partnerships, especially in the realm of innovation and discovery.”

The structural underpinnings for these sorts of partnerships are already being built at the institutional level. For several years, the Library of Congress has been working with several universities—including Stanford, Cornell, Harvard, Princeton, and Columbia—on a project it calls BIBFRAME, a next-generation cataloguing system that will ultimately replace the current electronic system that most libraries use. The outgoing system, built on MARC records—short for MAchine-Readable Cataloging record—was what replaced physical card catalogues in the 1970s. Today’s electronic records are designed such that you can trace any descriptive element from one record—an author’s name, for example—to other records stored in the same format. But BIBFRAME will go much deeper, producing links that reveal connections about any number of other elements related to a book or resource, including items from the web. The new system is built for the Internet Age, and meant to meet expectations about how people search for information online. “[The existing system] is self contained and library-oriented, and we need to get something that is conversant with the larger information community,” said Beacher Wiggins, the library’s director for acquisitions and bibliographic access. With BIBFRAME, the idea is to use “the same language that the browser community and the internet community uses,” so that the library stays linked to outside resources even as browser technology changes....
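
Concretely, “the same language that the browser community uses” means the web’s linked-data stack (RDF). A minimal sketch with the rdflib library; the BIBFRAME namespace URI is real, but the identifiers and properties below are simplified illustrations, not a faithful BIBFRAME record:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

    g = Graph()
    work = URIRef("http://example.org/works/lincoln-report")  # illustrative ID
    g.add((work, RDF.type, BF.Work))
    g.add((work, BF.title, Literal("Report to President Lincoln")))
    # The linking part: point at a resource another institution describes,
    # so a search that starts here can follow the edge outward.
    g.add((work, BF.relatedTo, URIRef("http://example.org/elsewhere/item/42")))

    print(g.serialize(format="turtle"))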

“The value that I see going forward is the linking part of the data environment,” Wiggins added. “You start searching at one point, but you may be linked to things you didn’t know existed because of how another institution has listed it. This new system will show the relationship there. That’s going to be the piece that makes this transformative. It is the linking that is going to be the transformative.”

The idea for linking information this way can be traced back more than 70 years, all the way to Bush’s Memex. But none of it would be possible without new technology. Machine learning and artificial intelligence will change the way people search, but the search environments themselves will evolve, too. Already, computer scientists are building search functionalities into virtual reality. In other words, the future of human knowledge—how we discover and contextualize what we know—depends almost entirely on tools and digital spaces that are rapidly changing and will continue to change.
instruments  tools  clock  time  temporality  search  computing  knowledge  cataloguing  archives  research  historiography 
12 days ago
Filing Cabinets
The earliest advertisement we have found for a filing cabinet for storing unfolded letters in a vertical position is in the 1900 Library Bureau catalog (p. 113): 1900 Image of Vertical File
According to secondary sources: Perley Morse, Business Machines, 1932, states that the vertical file was invented in 1892 by Dr. Rosenau and exhibited in 1893 at the World's Fair.  Allen Chaffee, How to File Business Papers and Records, 1938, p.4, repeats this. Yates (pp. 56-57) states that "Vertical filing of papers,...which evolved from the vertical file card files used by librarians, was presented to the business world in 1893....In 1892, the Library Bureau devised guides and folders for filing correspondence on edge and had file cases designed for them.  They presented that system at the Chicago World's Fair of 1893, where it won a gold medal....However, the changeover [to vertical files] was not immediate and universal."

Based on our research using primary sources: The Library Bureau, which was a well established supplier of furniture to libraries by 1893, had an exhibit at the 1893 Columbian Exposition in Chicago. The catalog of the exposition states that the Library Bureau exhibited a "Card-case for records of charitable societies." (World's Columbian Exposition Official Catalog, Part VII, Department G, Chicago, 1893, p. 35) Other accounts indicate that the Library Bureau also exhibited other library furniture. However, we have not found evidence in primary sources that the Library Bureau exhibited or won a prize for a vertical file in 1893.

The Library Bureau published an annual illustrated catalog that was over 100 pages long during the 1890s to promote its furniture to prospective customers. No vertical file is advertised in the 1894, 1897, or 1899 catalogs, although these catalogs did advertise card catalogs. For the first time, the Library Bureau's 1900 catalog includes a Vertical Filing Cabinet, which was designed for storing letters. The catalog states: "This practical construction, [was] first used in card catalog cabinets." The catalog states that the company "next manufactured vertical filing cases for invoices and loose sheets, about 5" x 8" inside," and that "a still larger file is now made having inside dimensions 10" x 12" and 22" deep. This file is designed for letters, pamphlets,....." (Library Bureau, Classified Illustrated Catalog of the Library Bureau, Boston, 1900, p. 112, emphasis added) These statements suggest that it was not until 1900 that the Library Bureau marketed vertical files large enough for an unfolded letter to be filed vertically. These vertical filing cabinets apparently used technology that was patented or licensed by the Library Bureau beginning in 1892, but that technology was probably developed for card catalogs.

We reviewed many illustrated catalogs and ads from the 1890s showing filing cabinets made by various manufacturers. We also reviewed numerous photos of office interiors from the 1890s. None of these catalogs, ads, or photos showed or mentioned vertical filing cabinets. After extensive searching, the earliest evidence we have found of a vertical filing cabinet being marketed is the 1900 Library Bureau catalog cited in the preceding paragraph. Other companies began to advertise vertical filing cabinets in 1901 (see below). A large number of companies were advertising vertical files in 1903. Yates reports that, according to a report by a government commission, by 1911 "vertical flat filing [had] practically supplanted all other systems" in the large companies it investigated. See also Flanzraich.
storage  filing  filing_cabinet  library_bureau 
13 days ago
IBM and the 1964 World’s Fair | Computer History Museum
Just a few weeks ago, we received an interesting donation to the Museum: a commemorative punched card from the IBM Pavilion at the 1964 World’s Fair. It’s a standard IBM punched card—a piece of card stock with holes (usually) punched into it to represent information. This one, however, was given to visitors to the IBM Pavilion who took part in an exhibit there on handwriting recognition and databases, or what an IBM brochure describing the fair called Optical Scanning and Information Retrieval. I’m just going to keep things easy and call it IBM’s This Date in History exhibit.
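
For a sense of how those holes represent information: in the classic IBM card code, each of the card’s 80 columns stores one character as punches in some of its 12 rows, labeled 12, 11, 0, 1 through 9. A small sketch covering digits and letters only (other symbols used further row combinations):

    def punches(ch):
        # Rows punched for one character in the classic IBM card code.
        if ch == " ":
            return []
        if ch.isdigit():
            return [int(ch)]  # digits: a single punch in rows 0-9
        n = ord(ch.upper()) - ord("A")
        if 0 <= n <= 8:       # A-I: zone punch 12 plus digit 1-9
            return [12, n + 1]
        if 9 <= n <= 17:      # J-R: zone punch 11 plus digit 1-9
            return [11, n - 8]
        if 18 <= n <= 25:     # S-Z: zone punch 0 plus digit 2-9
            return [0, n - 16]
        raise ValueError(f"not handled in this sketch: {ch!r}")

    for ch in "IBM 1964":
        print(ch, punches(ch))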
IBM  worlds_fair  punch_cards 
13 days ago
Room One Thousand — Little Boxes: High-Tech and the Silicon Valley
Before Silicon Valley existed, another “little box,” the garage, already occupied a central role in the development of the electronic industry in the area. Although historians often cite the Stanford Industrial Park as the key physical site in the development of high-tech industry, a good case can be made that the lowly garage played an equally central role. In 1938, Bill Hewlett and David Packard rented a house in Palo Alto. Hewlett moved into the 12 x 18 foot garage, which they also used as a workshop. There, they invented and produced audio oscillators, the first product of what later became the electronics industry. Now a museum, the preserved shack is listed on the National Register of Historic Places with a plaque reading “Birthplace of Silicon Valley.”[16] After this, the new Hewlett Packard Company moved to a shop front in Palo Alto and then built their own warehouse-style building adjacent to the railroad tracks there. Only much later, in 1957, did they separate research and management from production, building one of the first modernist complexes in the new Stanford Industrial Park. As the flagship tenant in the development, their buildings, designed by architects Clark, Stromquist and Clark and landscaped by Thomas Church, set an example for later Silicon Valley campuses: with their striking design and employee amenities, rare at the time, including gardens, cafeterias, fountains and a “worker’s playground” with a horseshoe pit, badminton and volleyball courts....

Observers, attempting to account for this, have traced Silicon Valley’s unique achievements to many causal factors: its history of well-funded military research, Stanford University’s strong links with tech industries, the Valley’s mild climate, and the availability of risk-tolerant venture capital.[22] Annalee Saxenian argues that the Valley’s success lies in its dense communication networks, where continuous personal interaction between engineers led them to change firms or start new firms, a type of intellectual and economic mobility that has produced the continuous flow of startups that guarantee the Valley’s continued economic vitality and growth.[23]

Less attention has been given to the role of Silicon Valley’s suburban landscape in fostering this culture. In fact, urban scholars have cited its “haphazard planning” and “car dependency” as detriments to its development.[24] However, it might also be argued that the flexible, network-based structure of Silicon Valley life and work found its physical analog in flexible, connected suburban space, with its freeway network and cheap and easily adaptable buildings. Over the decades, the little boxes that Malvina Reynolds imagined would produce identical residents have accommodated groups as varied as engineers and their families, working class homeowners, low-wage production workers, groups of unrelated high-tech employees, and Chinese and Indian immigrant families. ...

Silicon Valley demonstrates that a dispersed landscape predicated on mobility and the continuous construction of banal and repetitive building types is easily adaptable to growth and change. Since both are essential elements of innovation, it is not surprising that innovation flourishes here.
suburbia  silicon_valley  media_architecture  media_workplace 
14 days ago
Matt Mullican at Micheline Szwajcer (Contemporary Art Daily)
Since the 1970s, US artist Matt Mullican has been interested in models for explaining the world. He has developed a complex system of symbols consisting of various pictograms and colors as a means of tackling the question of the structure of the world, and with his system he aims to portray in symbols every aspect of the human condition in different combinations.
Every color has a specific symbolic value attached to it. For example, green stands for material, blue for the everyday world, yellow for ideas, white and black for language and red for the subjective. The model of perception that Mullican calls the theory of the five worlds serves him as a system of order for his method of working as an artist. It illustrates the relationship between the world and its representation. The artist is particularly interested in how we charge symbols and systems of symbols with meaning.
archive_art  information  epistemology  symbols  classification  organization 
15 days ago
A People's Archive of Sinking and Melting
Technosols comprise a new reference soil group (RSG) and combine soils whose properties and pedogenesis are dominated by their technical origin. They contain a significant amount of artefacts (something in the soil recognizably made or extracted from the earth by humans), or are sealed by technic hard rock (material created by humans, having properties unlike natural rock). They include soils from wastes (landfills, sludge, cinders, mine spoils and ashes), pavements with their underlying unconsolidated materials, soils with geomembranes and constructed soils in human-made materials. Technosols are often referred to as urban or mine soils. They are recognized in the new Russian soil classification system as Technogenic Superficial Formations. IUSS Working Group WRB. 2006
library_art  libraries  geology  agriculture  anthropocene 
15 days ago
Book Review: Denis Wood reviews “The Power of Maps” – But not his “The Power of Maps” | Making Maps: DIY Cartography
While there’s a philosophical inclination to insist that these projects are “demand driven” – by the locals – it’s plain enough that they’re instigated by the agencies funding the work and so they’re in pursuit of agency goals....

Further along in the further-readings is CTA’s “Training kit on participatory spatial information management and communication” (2010). This consists of fifteen modules, most of which contain four units, none of which takes less than an hour, which is to say we’re talking about a commitment of more than 60 hours, and that’s without all the stuff they ask you to download and read or watch. Working through this training kit demands a serious chunk of time and energy. But, then, the whole thing does. The least of it is the construction and interpretation of the model, since once that’s done the whole thing has to be turned into a GIS, and maps have to be made. Maps, as my Power of Maps made plain, will be the tools used to implement any action, which since these are more or less all government projects, pretty much goes without saying. The 3D models are really about securing buy-in, consensus, on the part of the locals...

I’ve critiqued participatory mapping before, in the keynote, “Public? Participation? Geographic? Information? Systems?” that I gave to the 2005 URISA Conference on PPGIS in Cleveland. The title should make the nature of my complaint clear enough. At the time I was unaware of Bill Cooke and Uma Kothari’s Participation: the New Tyranny? (Zed Books, London 2001) which examines many of my complaints in piercing detail; and I certainly didn’t know of Cooke’s “Rules of thumb for participatory change agents” (from Samuel Hickey and Giles Mohan’s Participation – From Tyranny to Transformation?, Zed Books, London, 2004), the first of which is, “Don’t work for the World Bank,” which naturally enough turns out to have funded “Mapping in Madagascar – from skepticism to ownership.” In fact, most of the projects laid out in The Power of Maps violate most of Cooke’s rules. One of these, “Data belong to those from whom they were taken,” includes “The use of photographs of participants in presentations and publications without their consent, informed or otherwise.” Again, nearly every one of the better than four dozen photos in The Power of Maps consists of photos of the people, the kids, the adults, the “elders” and, far more rarely, the government and NGO folk involved. Can you imagine the photographer of the image on the cover scurrying around to solicit the permission of each of the 23 caught? (Why bother? They’re mostly kids.) Under the same rubric Cooke mentions the use of material gathered in one capacity, as a participatory change agent, in another, for example as an academic in a journal, again without permission. Further he notes the public disclosure of information, in conferences or faculty staff rooms, again without permission, and contrasts this with the censure that would clobber people working with First World clients (therapists, for example)....

The worst of it is the startling lack of evidence that all this cardboard and plaster and paint and yarn has paid off in significant benefits for the locals, who end up no more than exposing their local knowledge to outsiders whose ultimate goal is buy-in from the locals.
mapping  participatory_mapping  public_process 
15 days ago
Why Google is giving up on its dream to bring super-fast broadband to everyone - Vox
He rejected Crawford’s federal approach as “an absurd waste of government resources.”

To the extent that there’s any problem with US internet service, he said, it’s with connectivity in remote, rural areas.

The focus of policy should be on “making sure our nation’s poor have access to the internet at acceptable speeds,” he said. For the most part, that means incentivizing the major broadband providers to expand and upgrade their existing networks, which are often much slower than 1 gigabit.

Brake points to Vector DSL and DOCSIS 3.1, new standards that allow telephone and cable networks, respectively, to carry more data than before. He also highlighted the forthcoming fifth generation of mobile broadband (5G), which will have speeds about 10 times those of 4G devices.

With that kind of steady progress, he argues, it would be foolish to spend billions of dollars on fiber networks that would deliver far more bandwidth than most households know what to do with....

Google Fiber’s rollout has been painstaking and slow, and the project has so far failed to meet its growth goals.

From 2014 onward, the company has required wannabe “Fiber cities” to promise they’ll expedite Fiber’s installation in various ways. These include providing detailed information on and access to existing infrastructure, streamlined construction permitting, and other concessions. Getting cities to buy into the whole program has proved difficult. AT&T is continuing to expand its fiber optic network, called Giga Power. Verizon invested heavily in its own fiber optic service, called FiOS, over the past decade. But in recent years, new FiOS investments have practically ground to a halt.

All three firms have recently signaled plans to shift away from laying fiber lines directly into subscribers’ homes. The new initiatives revolve around covering the so-called “last mile” using 5G wireless, which is easier to install but runs at slower speeds.
infrastructure  fiber_optics  google 
15 days ago
Audiotactility & the Medieval Soundscape of Parchment | Sounding Out!
Although the field of audiotactile integration has been somewhat dormant in the biological sciences since Paul von Schiller suggested back in 1932 that sounds, especially patterned noises, could affect tactile perception of roughness, recently some researchers have conducted experiments that test audiotactile qualities of materials. Several have suggested that these results might be synthetic—that is, that the sounds modulate the haptic perception of the material being touched. For the most part, there seem to be connections between the perceptions of the sounds involved in touch and the perceptions of the stiffness of material. However, one study demonstrates that synchronized movements and sounds can affect the perception of the subject’s own skin.  Suffice it to say, then, that sounds and texture and material quality are linked, both physically and perceptually...

Although humans rarely display deliberate awareness of audiotactile interaction, both auditory and haptic stimulation share similar temporal and psychological patterns in human consciousness. This connection would perhaps have been even more true in the Middle Ages than it is now, since the context of parchment and manuscript production and consumption was more immediately personal than paper production and reading is today.

...how parchment masters could make a parchment “sing,” and how they, through this sound, knew whether or not the skin had reached its full potential....

Age, breed, size, and animal health can all contribute to the audiotactile qualities of a skin. However, there are some general guidelines. Sheepskins, for instance, are stretchier than goat skins, so their “ringing” can be muffled. Goatskins, which are thinner and stiffer, make a higher pitched “ring.” Calfskins are larger and easier than the others to get clean, and thus often make the cleanest “ring” and can do so more quickly than the others....

How did those pages feel to a medieval reader? Did they feel rough or smooth? Did the reader feel a frisson of excitement? Animal skin, such as parchment, carried with it the essence of the life of the animal, thus imbuing the images painted onto it with some semblance of life force, as suggested by Thomas Aquinas in his Question 8 (Summa Theologica) regarding the potential for divinity placed within material objects. To a certain extent, then, touching an image of Christ was akin to touching a proxy of his body, allowing a powerful and individual haptic experience of faith. But what about the sounds made when these images became the subject of interaction? Was the medieval reader aware of touching the page, touching Christ’s wounds, even more because he or she would hear the interaction?
material_texts  tactility  haptics  sound  sensory_history  manuscripts  parchment 
15 days ago
History for an Empty Future - e-flux Architecture - e-flux
A history of architects’ names would be less a collection of biographies than an anthology of traces left by existences; traces that became more articulated in their design as those existences were shaped by an increasing number of increasingly denaturing systems of production, structures of power and abstract epistemologies.2 Palladio may have been widely known by a specially designed name, but architects did not commonly sign things until well into the eighteenth century. By then, a name alone was insufficient for the complex task of authentication required by a culture organized around the circulation, collection and exchange of images. Palladio’s name printed in a book was enough to spread Palladianism, but Piranesi needed both a name and an identifying signature to function as avatar and keep him attached to his drawings as they dispersed, image by image, across Europe.3 As industrial modernization further alienated the architect-as-person from his productions, compensatory architectural signatures proliferated until even buildings themselves were signed....

The advent of a geological era designed by humans, today commonly known as the Anthropocene, was therefore raising the question of what kind of signatures, what signs of life, would be useful for a time after the human—for a historical age without human posterity? Those becoming architects as these questions were taking shape are the architects associated with postmodernity and its anxieties about history. These architects constitute the last generation for whom the use of their own name as avatar, linked to their biographies and personal experience, appeared to be a natural selection rather than an act of design. This generation was scrupulously attentive to how they would enter the historical record, constructing elaborate and explicitly designed genealogies of architecture into which they inserted their names. Above all they self-archived, almost continuously, and while their obsessively constant choices about what to keep and what to expunge from the design of their future memory produced an almost comprehensive record of their existences, their efforts nevertheless contain gaps; evidence of moments when they appear to have been distracted from posterity by the exigencies of immediate events. And in these unintended lacunae, self-designs not only without designers, but more importantly without designated or even imagined recipients, begin to appear.

Peter Eisenman and Robert Venturi are two architects who were particularly consumed by self-archiving in the 1960s, although they followed apparently opposed design methods in this enterprise. Eisenman allowed relatively little material to find its way into the archive of House 1, for example, which is evidence of the great deal of material he elected to repress.6 He included only drawings, largely by his own hand, and no paperwork, thereby eliminating any trace of constraints on him and exposing his desire to appear absolutely autonomous, which is to say self-regulated and self-designed. His protocol recalls that of an art collector or museum curator, carefully picking and choosing things according to criteria designed to appear subjective and hence able to produce a record of a human existence identified as such by its pure and autonomous subjectivity. Many of these drawings are signed. Venturi, on the other hand, kept and included everything: every scrap of paper, specification set and phone memo.7 His protocol was clinical and scholarly, designed to appear objective, and his archive contains far more signatures on typed letters, contracts and other mechanically produced documents than on drawings. Where Eisenman’s archive is holographic, like a handwritten last will and testament, with his name appearing as an intrinsically authenticating autograph, Venturi’s archive is an accumulation of copies, typesets, and transcripts; the sort of documents that require notaries to authenticate their signatures. The former imagines its salience in terms of its capacity to record the unfolding of a personal history—what the field calls a “design process”—addressed to other persons with histories, while the latter imagines its salience in terms of its capacity to record the traces of functions deposited in an impersonal mountain of paperwork addressed to other operatives.
media_architecture  architectural_history  authorship  archives  intellectual_property 
15 days ago
The Great A.I. Awakening - The New York Times
There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages.... There was no reason you couldn’t try to mimic this structure in electronic form, and in 1943 it was shown that arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.... Google Brain was the first major commercial institution to invest in the possibilities embodied by this way of thinking about A.I....

The first layer of the network learns to identify the very basic visual trope of an “edge,” meaning a nothing (an off-pixel) followed by a something (an on-pixel) or vice versa. Each successive layer of the network looks for a pattern in the previous layer. A pattern of edges might be a circle or a rectangle. A pattern of circles or rectangles might be a face. And so on. This more or less parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face....
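
That first-layer “edge” is simple enough to hand-build. A minimal sketch in Python/NumPy — the toy image and the 3×3 kernel are illustrative assumptions, not anything from the article or from Google’s systems:

```python
import numpy as np

# Toy 6x6 "image": a bright square on a dark background.
img = np.zeros((6, 6))
img[1:5, 1:5] = 1.0

# A simple horizontal-edge kernel: it responds where a row of
# off-pixels is followed by a row of on-pixels, or vice versa.
kernel = np.array([[-1.0, -1.0, -1.0],
                   [ 0.0,  0.0,  0.0],
                   [ 1.0,  1.0,  1.0]])

def convolve2d(image, k):
    """Valid-mode 2D convolution, written out explicitly."""
    kh, kw = k.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
    return out

edges = convolve2d(img, kernel)
print(edges)  # large magnitudes mark the top and bottom edges of the square
```

In a trained network the kernel values are learned rather than hand-set, but the mechanics of the first layer are the same.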

A lot of our ambient fears about A.I. rest on the idea that they’re just vacuuming up knowledge like a sociopathic prodigy in a library, and that an artificial intelligence constructed to make paper clips might someday decide to treat humans like ants or lettuce. This just isn’t how they work. All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place...

Now imagine that instead of hard-wiring the machine with a set of rules for classification stored in one location of the computer’s memory, you try the same thing on a neural network. There is no special place that can hold the definition of “cat.” There is just a giant blob of interconnected switches, like forks in a path. On one side of the blob, you present the inputs (the pictures); on the other side, you present the corresponding outputs (the labels). Then you just tell it to work out for itself, via the individual calibration of all of these interconnected switches, whatever path the data should take so that the inputs are mapped to the correct outputs. The training is the process by which a labyrinthine series of elaborate tunnels are excavated through the blob, tunnels that connect any given input to its proper output....
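
Concretely, that tunnel-excavation is gradient descent on the network’s connection weights. A minimal sketch of the idea on made-up data, assuming a toy two-layer network (nothing here is Google’s actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up labeled data: inputs on one side of the blob, labels on the other.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# The "blob of interconnected switches": two small weight matrices.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward: push each input through the switches.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward: nudge every switch so inputs map to the correct outputs.
    grad_out = (out - y) * out * (1 - out) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h

    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print("training accuracy:", ((out > 0.5) == y).mean())
```

No rule for the classification is stored anywhere; after training, the “definition” exists only as the calibrated values spread across W1 and W2.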

The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals. If you have only Joe, Frank and Mary, you can maybe use them only to differentiate among a cat, a dog and a defibrillator. If you have millions of different voters that can associate in billions of different ways, you can learn to classify data with incredible granularity. Your trained voter assembly will be able to look at an unlabeled picture and identify it more or less accurately.... The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes. You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena....
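
The “more voters, keener signal” claim is just the arithmetic of majority voting, which is easy to simulate. A minimal sketch, assuming each voter is a barely-better-than-chance binary classifier (an illustration of the metaphor, not of any real network):

```python
import random

random.seed(1)

def voter(truth):
    # Each voter is only slightly better than a coin flip: 55% accurate.
    return truth if random.random() < 0.55 else 1 - truth

def ensemble_accuracy(n_voters, trials=2000):
    correct = 0
    for _ in range(trials):
        truth = random.choice([0, 1])
        votes = sum(voter(truth) for _ in range(n_voters))
        majority = 1 if votes > n_voters / 2 else 0
        correct += (majority == truth)
    return correct / trials

for n in (1, 11, 101, 1001):
    print(n, ensemble_accuracy(n))
# Accuracy climbs toward 1.0 as the number of weak voters grows.
```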

If your data had a picture of a man and a woman in suits that someone had labeled “woman with her boss,” that relationship would be encoded into all future pattern recognition. Labeled data is thus fallible the way that human labelers are fallible. If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible....

What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself... Most machine learning to that point had been limited by the quantities of labeled data. The cat paper showed that machines could also deal with raw unlabeled data, perhaps even data of which humans had no established foreknowledge. This seemed like a major advance not only in cat-recognition studies but also in overall artificial intelligence...

A neural network, however, was a black box. It divined patterns, but the patterns it identified didn’t always make intuitive sense to a human observer....

When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers....
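
In machine-learning terms that map is a word embedding, and “rough vicinity” is usually measured with cosine similarity. A minimal sketch with hand-made four-dimensional vectors — the words come from the article, but the numbers are invented for illustration (real systems learn hundreds of dimensions from text):

```python
import numpy as np

# Invented toy embeddings; each dimension loosely encodes one kind of
# relationship (animal-ness, pet-ness, body-part-ness, internet-meme-ness).
vectors = {
    "cat":           np.array([0.9, 0.8, 0.1, 0.7]),
    "dog":           np.array([0.9, 0.9, 0.1, 0.4]),
    "tail":          np.array([0.6, 0.2, 0.9, 0.1]),
    "supercilious":  np.array([0.1, 0.6, 0.0, 0.3]),
    "defibrillator": np.array([0.0, 0.0, 0.1, 0.0]),
}

def cosine(a, b):
    # 1.0 means pointing the same way; near 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("dog", "tail", "supercilious", "defibrillator"):
    print(f"cat ~ {word}: {cosine(vectors['cat'], vectors[word]):.2f}")

# "cat" can sit near "dog" and near "tail" at the same time only because
# each relationship lives along its own dimension.
```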

Unlike Searle, they don’t assume that “consciousness” is some special, numinously glowing mental attribute — what the philosopher Gilbert Ryle called the “ghost in the machine.” They just believe instead that the complex assortment of skills we call “consciousness” has randomly emerged from the coordinated activity of many different simple mechanisms. The implication is that our facility with what we consider the higher registers of thought is no different in kind from what we’re tempted to perceive as the lower registers. Logical reasoning, on this account, is seen as a lucky adaptation; so is the ability to throw and catch a ball. Artificial intelligence is not about building a mind; it’s about the improvement of tools to solve problems....

Radiologists are extensively trained and extremely well paid, and we think of their skill as one of professional insight — the highest register of thought. In the past year alone, researchers have shown not only that neural networks can find tumors in medical images much earlier than their human counterparts but also that machines can even make such diagnoses from the texts of pathology reports. What radiologists do turns out to be something much closer to predictive pattern-matching than logical analysis. They’re not telling you what caused the cancer; they’re just telling you it’s there.
learning  machine_learning  intelligence  artificial_intelligence  neural_nets  computing_history 
15 days ago
Why Can’t the U.S. Decolonize Its Design Education? | Eye on Design
She cites a few key figures in the U.S. and UK who have made an impact in developing graphic design curricula that not only “decolonize” but “demystify” the complexity of these issues. Professor Elizabeth Resnick at Massachusetts College of Art and Design has curated international political poster exhibitions that explore different cultural traditions in graphic design, while AIGA Medalist Lucille Tenazas’ approach to curriculum at the New School “has been directly influenced by her cross-cultural experiences as a designer from the Philippines.”

In the UK, Triggs cites “the work of Aisha Richards, director of Shades of Noir, founded at Central Saint Martins, University of the Arts London, which focuses on themes that encourage active discussion on race, religion, masculinity, and so forth, but within a supportive environment.” There are also conferences like the Design Research Society’s, held in Brighton this year, that have proved to be fertile grounds for the discussion of decolonization in education and within the greater design community.

When asked why the U.S. has such a limited scope in its design programs, she replied, “I’d turn around and ask what has been conducive within UK design education where there is a seemingly broader appetite for an integrated approach. In my opinion, this has, in part, come out of a strong foundation for research and research degrees at Master’s in research and MPhil/PhD programmes in art and design.”

“An increase in the interdisciplinarity of the design field, and social factors and in human-centered research, is producing PhDs who are seamlessly crossing design with geography, anthropology, humanities, and social sciences, where these subjects are already engaged with ‘decolonization.’”

For the U.S., whose decolonized design programs are few and far between, a transdisciplinary approach to design might be the best way for educators to build on established groundwork and start a conversation about design that acknowledges many voices instead of a select few. Because unless our country’s indigenous design history is recognized as foundational to contemporary design education, that conversation will remain one-sided, and incomplete.
decolonialization  design_history 
15 days ago
Making Feminist Points | feministkilljoys
But so many of my feminist killjoy experiences within the academy relate to the politics of citation: I would describe citation as a rather successful reproductive technology, a way of reproducing the world around certain bodies.

These citational structures can form what we call disciplines. I was once asked to contribute to a sociology course, for example, and found that all the core readings were by male writers. I pointed this out and the course convener implied that “that” was simply a reflection of the history of the discipline. Well: this is a very selective history! The reproduction of a discipline can be the reproduction of these techniques of selection, ways of making certain bodies and thematics core to the discipline, and others not even part.

I have noticed as well that these citational practices can occur even when the topic is one that feminists have written extensively about. ...

Even when feminists cite each other, there is still a tendency to frame our own work in relation to a male intellectual tradition. And there is certainly an expectation that you will recognise your place through giving your allegiance or love to this or that male theorist....

I have noticed when giving talks or hearing other female academics giving talks how often the first question is ‘how does what you are saying relate to such and such a male theorist?’ as a way of slotting you into an established male intellectual genealogy.
academia  writing  UMS  citation  feminism 
15 days ago
An Ancient City Emerges in a Remote Rain Forest - The New Yorker
The revelation of an ancient city in a valley in the Mosquitia mountains, of Honduras, one of the last scientifically unexplored regions on Earth, was a different story. This was the first time a large archaeological site had been discovered in a purely speculative search using a technology called lidar, or “light detection and ranging,” which can map terrain through the thickest jungle foliage, an event I chronicled in a story for the magazine in 2013. As a result, this discovery revealed something vanishingly rare: a city in an absolutely intact, undisturbed, pristine state, buried in a rain forest so remote and untouched that the animals there appeared never to have seen people before....

Only through technology did we know our location in the ruins. The chief archaeologist on the expedition, Chris Fisher, carried a sophisticated Trimble G.P.S. displaying a digital lidar map of the city with the trees removed, and our location pinpointed on it. As we moved through the jungle, Chris would check the Trimble and say, “There’s a big mound thirty feet ahead,” but we could see nothing but leaves until we practically walked into it.
archaeology  urban_history  sensors  lidar  machine_vision 
15 days ago
​The age of humanism is ending | Opinion | Analysis | M&G
Abetted by technological and military might, finance capital has achieved its hegemony over the world by annexing the core of human desires and, in the process, by turning itself into the first global secular theology. Fusing the attributes of a technology and a religion, it relied on uncontested dogmas modern forms of capitalism had reluctantly shared with democracy since the post-war period — individual liberty, market competition and the rule of the commodity and of property, the cult of science, technology and reason.

Each of these articles of faith is under threat. At its core, liberal democracy is not compatible with the inner logic of finance capitalism. The clash between these two ideas and principles is likely to be the most signifying event of the first half of a 21st-century political landscape — a landscape shaped less by the rule of reason than by the general release of passions, emotions and affect.

In this new landscape, knowledge will be defined as knowledge for the market. The market itself will be re-imagined as the primary mechanism for the validation of truth.

As markets themselves are increasingly turning into algorithmic structures and technologies, the only useful knowledge will be algorithmic.

Instead of people with body, history and flesh, statistical inferences will be all that count. Statistics and other big data will mostly be derived from computation. 

As a result of the conflation of knowledge, technology and markets, contempt will be extended to anyone who has nothing to sell.

The humanistic and Enlightenment notion of the rational subject capable of deliberation and choice will be replaced by the consciously deliberating and choosing consumer.
epistemology  statistics  big_data  liberalism  ethics 
15 days ago
Ecce Emendator: The Cost of Knowledge for Scholarly Editors | Vitae
To be sure, scholarly publishing is still a big business. University libraries make enormous outlays of cash to ensure that the faculty of each department have access to the very best and most recent research.

But editors see none of that money. And their labor to support the mechanism is, more often than not, completely unrewarded and unsupported. In the absence of financial assistance, editors invariably turn a hopeful eye to academic deans and departmental chairs for course relief—or even for acknowledgement of their intense workloads. And, just as invariably, they are rebuffed.

Yet deans and chairs are among the primary beneficiaries of scholarly editing, since the publication of peer-reviewed essays serves as the sine qua non for advancement, raises, reappointments, and promotions.

How, then, does the actual—and hidden—process of editing itself get evaluated? Deans, chairs, and provosts have grown so comfortable with seeing peer-reviewed papers listed on CV after CV. Lost in this evaluation process is the editor who has enabled the very structure of peer review that drives the engine of academia.

Let me use my own case as an example. In a recent performance evaluation, my service as an editor of a major journal in the cultural studies of science was noted briefly, alongside my work on the department’s web and assessment committees. Yet in that year—in addition to my personal scholarly output—I was responsible for the appearance of over 480 pages of scholarly research.

Had I edited a volume of that size, that work surely would have been acknowledged as a significant career accomplishment. And yet journal editors repeat that accomplishment year in and year out. For me, that was the equivalent of more than 5,000 pages of edited scholarship over a 10-year period.

This enormous productivity occurs, for editors in general, with minimal support, little or no course relief, and less recognition than the task warrants. The long-established practice of disdaining the job of editing—while coveting peer-reviewed essays amongst faculty—is not a sustainable model for labor in the workplace. And yet it endures.
academia  tenure  journals  editing 
15 days ago
Exploring ‘Immaterials’: Mediating Design’s Invisible Materials
This article explores the related issues of invisibility and material in interaction design, and argues that there is a need to consider ‘immaterials’ as a frame to explore and mediate invisible technological systems. Contemporary visions of technological development often focus on invisibility and ‘seamlessness’ in interface technologies, while the methods of building knowledge about designing with these technologies or issues of agency and control over these invisible interfaces are overlooked. I approach this in two related ways. First, I investigate the context of invisible interfaces and the issues of immateriality in computing, and argue for a renewed investigation of materials in interaction design. Second, I present an exploratory design research enquiry in which an invisible interface technology called Radio Frequency Identification (RFID) was discursively revealed. Drawing on these foundations, I show how design approaches can create new material knowledge by making technical exploration apparent through visualisation, photography, animation, and filmmaking. Overall, this enquiry illustrates a communicative, mediational design research practice that I call discursive design, one that constructs language, new narratives, and communicative material that may translate between complex technical subjects and broader audiences and discourses.
RFID  sensors  interfaces  invisibility  interaction_design  materiality 
17 days ago
Hell's Kitchen in the 1860s Was a Sanitary Nightmare - CityLab
But in the 1860s, the Manhattan neighborhood was a beastly wonderland of stenches, bloody parades, and diseases from which to horribly perish. Among its meatpacking-focused highlights were slaughterhouses, gut-cleaning and fat-boiling outfits, towering manure heaps, and stables devoted to the production of “swill milk”—the squeezings of frequently diseased cows that were consumed by the poor, to their detriment.

The area’s stinking history was recently highlighted by the New York Public Library’s Map Division, which posted this map of “Bone Boiling and Swill-Milk Nuisances” from 1865’s Report of the Council of Hygiene and Public Health of the Citizens' Association of New York Upon the Sanitary Condition of the City.
urban_history  sensory_history  olfaction  smell  labor  industry 
17 days ago
From Tape Drives to Memory Orbs, the Data Formats of Star Wars Suck (Spoilers) | Motherboard
Upon reviewing the Star Wars canon of movies (no animated films or shows, and no Expanded Universe content, which now exists in a purgatory of maybe-canon), it’s become clear to me that the galaxy is crippled by an abundance of disk formats, with all of the accompanying interoperability issues that we see on our own planet. Every time the Rebel Alliance changes bases, they must be lugging around a spaceship full of drives, both new and obsolete, to read every possible format.
star_wars  archives  preservation  format_studies 
19 days ago
Theory of City Form | Architecture | MIT OpenCourseWare
This course covers theories about the form that settlements should take and attempts a distinction between descriptive and normative theory by examining examples of various theories of city form over time. Case studies will highlight the origins of the modern city and theories about its emerging form, including the transformation of the nineteenth-century city and its organization. Through examples and historical context, current issues of city form in relation to city-making, social structure, and physical design will also be discussed and analyzed.
urban_history  urban_form 
20 days ago
Reimagining cities from the internet up – Sidewalk Talk – Medium
If you compare pictures of cities from 1870 to 1940, it’s like night and day. If you make the same comparison from 1940 to today, hardly anything has changed. Thus it’s not surprising that, despite the rise of computers and the internet, growth has slowed and productivity increases are so low.

So our mission is to accelerate the process of urban innovation, and over the past year we’ve been exploring ways to do just that.

Larry Page wrote at the time of our formation that it was critical “to start from first principles and get a big-picture view of the many factors that affect city life.” So we started by conducting a detailed thought experiment: What would a city look like if you started from scratch in the internet era — if you built a city “from the internet up?” What I mean by that is a place where ubiquitous connectivity is truly built into the foundation of the city, and where people use the data that’s generated to enhance quality of life....

In the process, we wrestled together with the technologist-urbanist divide. Our technologists pushed the teams to think big, challenge conventional assumptions about how things work, and leapfrog slow change. Our urbanists reminded us of the importance of data privacy, the complexity of land use, the greatness of diverse communities and vibrant streets, and the many other externalities that are ever-present in dense environments.

We also studied every prior or current effort to integrate technology into new cities or urban districts. All too often, such efforts took a top-down approach — forgetting that cities aren’t primarily about tech-infused buildings or shiny new tools, but the people and communities whose character makes the place so unique. We recognized that you can never truly plan a city. Instead you can lay the foundations and let people create on top of it....

In that sense, we drew inspiration from great platforms like the web, which, thanks to open, flexible foundations, has enabled creation from people around the world. In a city built from the internet up, we imagined a flexible physical layer (such as street grids, open utility channels, and upgradeable digital infrastructure) with adaptable software (such as privacy rules, regulations that lay out approaches to city management, and principles of governance) that would empower people to build and change “applications” much faster than is possible in cities today....

Our thought experiment was just the start of an ongoing learning process about the nature of urban life. But we’ve reached some broad views about the type of place you might get if you reimagined a city with ubiquitous connectivity designed into its very foundation. We think you get a place that gives people more of what we love about cities with less of what we don’t. A place that’s adaptable, constantly evolving with changing demands, technologies, and tastes. A place that’s personalized for our needs and desires. A place that’s shareable in a million new ways. A place that’s more transparent, with greater trust among neighbors and greater faith in government. A place that feels like a city but functions like a local community....

And that’s why we’re first creating a series of labs to work in close partnership with local communities to develop tools that meet their challenges.

Led by entrepreneurs-in-residence, these labs will consist of hyper-focused, cross-disciplinary teams of policy experts, engineers, product managers, and designers — a full range of urbanists and technologists. They’ll be empowered to advance an idea into a functional prototype that can be tested in the real world, drawing on the Sidewalk team for business development, talent acquisition, communications, and administrative support. Our hope is that many of them will eventually be spun into new companies that create useful tools, products, and services for cities. ...

Sometimes their efforts will develop through pilot projects designed with city agencies or in partnership with organizations, like Sidewalk’s current effort with Transportation for America to tackle mobility challenges. Other times we might hold competitions, recognizing the success of contests like the U.S. DOT Smart City Challenge. The aim is to keep these labs open: engaging the public, sharing what we’ve learned, and refining our ideas....

Model Lab will focus on the challenges faced by communities as they attempt to build consensus on affordability, sustainability, and transportation needs. It will explore the role of new modeling tools along with online collaboration and communication....

A large-scale district holds great potential to serve as a living laboratory for urban technology — a place to explore coordinated solutions, showcase innovations, and establish models for others to follow. Sidewalk is having conversations with community leaders about what truly integrated urban solutions might entail, and we’ve already fielded inquiries from communities around the world interested in exploring such a partnership.
sidewalk_labs  urban_planning  smart_cities  infrastructure  labs  entrepreneurship  methodology  zones 
22 days ago
The Strange History of Microfilm, Which Will Be With Us for Centuries | Atlas Obscura
That tool is microfiche, the plasticky film used to archive old print content, and it has a surprisingly diverse history—one that starts with a guy named John Benjamin Dancer.

In 1839, Dancer, whose father owned an optical goods firm, combined his family’s chosen trade with the then-new daguerreotype process of photography, and started tinkering.

Playing around, he figured out a way to shrink pictures of large objects by a ratio of 160 to 1—and as a result, created the first piece of microfilm. (To clarify terms, “microfilm” is usually distributed in roll form, like you would pull out of a 35mm camera, while “microfiche” is flat.)

Dancer’s experiments also led to an early example of photomicrography, the process of expanding an image of something small to a large size, when he created a six-inch daguerreotype of a flea....

Dagron would create tiny microfilmed photographs of documents, then put them inside tiny tubes attached to the carrier pigeon’s wing. Since the images were visible with the use of a magic lantern—an early form of film projector—this allowed for the discreet distribution of messages to and from the battlefield....

More than three decades later, libraries began to catch on, thanks, in part, to a couple of Belgians who, in 1906, made the first argument that microfilm could be used to help save space.

Information scientist Paul Otlet and his colleague Robert Goldschmidt’s paper Sur Une Forme Nouvelle Du Livre: Le Livre Microphotographique did not immediately set the microfiche world ablaze, even after the duo showed off a Steve Jobs-style demo of the technique at the American Library Institute’s annual meeting in 1913.

But by the 1930s, publications such as The New York Times and libraries such as those at Harvard University began using the format as a way to preserve old newspapers. Quickly, the technology became common in libraries everywhere....

These days, of course, the internet has quickly usurped microfilm and microfiche, but content-wise, there are some cases where microfiche arguably does a better job. One of those is classic comic books, for three major reasons:

Low-quality source material. As you may or may not know, comic books were not originally published using the highest quality of paper or ink, and as a result, have not aged well. Microfiche that’s decades old, on the other hand, holds up pretty darn well.

High cost of original copies. Old comic books are incredibly valuable, and as a result are out of financial reach for most people. And that includes libraries as well. The library at Michigan State University has a comic book collection with more than 80,000 entries. But it is no longer purchasing original copies due to “the fragility and great expense of most of these items.” Instead, it’s buying microfilm, which can be recreated at will.

General snobbishness. The New York Public Library has a wide collection of comic books on microfilm, but the reason much of that collection has been archived in that form wasn’t out of a desire to protect it, but because comics were once deemed unfit for a library.

...it’s designed to last for hundreds of years, far longer than any hard drive or CD-ROM ever will.
media_archaeology  microfilm  preservation  archives 
22 days ago
Library Excavations and the Love of Print with Marc Fischer – Sixty Inches From Center
You’ll be hard-pressed to find a lover of libraries, archives, and printed matter more devoted than Marc Fischer. He’s known for a long practice of discovering, creating, and distributing books and ephemera, making him a regular at libraries and post offices throughout the city. It’s practices like Fischer’s, which pay close attention to how, why, and what libraries, museums, and archives collect, that have served as inspiration for the work that we at Sixty do, as well as our focus and approach. In 2007, he founded Public Collectors, a project that uses publishing and exhibitions to give glimpses into people’s personal collections–ones that usually go unseen. Prior to that, he and Brett Bloom founded Temporary Services, a project started in 1998 which also holds under its umbrella Half Letter Press, whose publications I’ve come across in Los Angeles, New York, London, and many bookstores in between.

But his commitment to printed matter in a time of digital dominance isn’t exactly what has earned these projects the notice they’ve received over the years. It’s the strength of the content and Fischer’s ability to shine a light on things we often don’t pay much attention to, never knew existed, or often take for granted.

This is certainly the case with Public Collectors’ latest publications, Library Excavations, an ongoing series of booklets created from content found on the shelves, in the stacks, or in the archives of the Chicago Public Library. The first four foreground topics that lie just below the surface of things that confront and concern us every day–incarceration, the music industry, racial biases and profiling–but presented in a way that minimizes a heavy-handed or swayed contextualization, allowing the content to speak, pretty loudly, for itself. Even with these four publications that alone have so much to say, Fischer took some time to explain his process for selecting the content of each book and why he finds such value in libraries and repositories....

Libraries are a lot more democratic than museums, however. They are free, they attract a more diverse range of people, and like democracy, they can be a bit messy–particularly as these buildings become a refuge for teenagers that lack other safe spaces in their neighborhoods after school, or people living on the streets or otherwise existing on the margins and looking for access to basic services like bathrooms, water, and electricity. Libraries are also on the frontlines of protecting free speech, and while library holdings could always be more diverse and radical, a good library like the Harold Washington Library Center contains a broad range of published perspectives...

It is unbelievable how much material many libraries have that no one is looking at. I constantly discover things I did not know existed by browsing in libraries or asking to see something in reference that I have never heard of. The world of books and magazines and printed ephemera is endless and there’s always something new to see.
library_art  print  zines  libraries 
24 days ago
The World’s Largest Hedge Fund Is Building an Algorithmic Model From its Employees’ Brains - WSJ
Deep inside Bridgewater Associates LP, the world’s largest hedge-fund firm, software engineers are at work on a secret project that founder Ray Dalio has sometimes called “The Book of the Future.”

The goal is technology that would automate most of the firm’s management. It would represent a culmination of Mr. Dalio’s life work to build Bridgewater into an altar to radical openness—and a place that can endure without him.

At Bridgewater, most meetings are recorded, employees are expected to criticize one another continually, people are subject to frequent probes of their weaknesses, and personal performance is assessed on a host of data points, all under Mr. Dalio’s gaze.

Bridgewater’s new technology would enshrine his unorthodox management approach in a software system. It could dole out GPS-style directions for how staff members should spend every aspect of their days, down to whether an employee should make a particular phone call....

Mr. Dalio also believes humans work like machines, a word that appears 84 times in the Principles. The problem, he has often said, is that people are prevented from achieving their best performance by emotional interference. It is something he thinks can be overcome through systematic practice.

That applies to managing, too. Successful managers “design a ‘machine’ consisting of the right people doing the right things to get what they want,” he wrote in the Principles....

Data are incorporated from a phalanx of personality tests that Mr. Dalio requires of his employees. In one, managers undergo written exams to determine their “stratum,” an unconventional score for conceptual skills developed by the late Canadian-born psychoanalyst Elliott Jaques....

At the core of the technology project now under way is a walled-off group called the Systematized Intelligence Lab, headed by David Ferrucci, who led development of the artificial-intelligence system Watson at International Business Machines Corp. before joining Bridgewater in 2013.

Though outsiders expected Mr. Ferrucci would use his talents to help find hidden signals in the financial markets, his job has focused more narrowly on analyzing the torrent of data the firm gathers about its employees. The data include ratings employees give each other throughout the work day, called “dots.”

The Systematized Intelligence Lab is involved in several iPad applications that are part of employees’ everyday lives, among them the “Dot Collector.” It allows employees to rate each other on dozens of attributes and to hold snap polls on issues during meetings, including asking blunt questions such as whether a current conversation is a waste of time.

The data blend with others to produce “Baseball Cards” that show people’s strengths and weaknesses in various categories, such as “touching the nerve,” a prized attribute....

Those are initial uses of PriOS, the management software Mr. Dalio is developing. Future uses would include the ability to scan open positions at the company and have PriOS sort through the staff to find people with particular talents and strengths to fill jobs.

In other instances, employees at loggerheads over decisions wouldn’t have to hash out each debate aloud. They would key their opinions into PriOS, and the software would rank their perspectives, consult with Mr. Dalio’s Principles, and spit out the best way to proceed.
personality  testing  management  algorithms  optimization  artificial_intelligence 
25 days ago
Continuous Paper: MLA
When scholars consider electronic literature, the screen is often portrayed as an essential aspect of all creative and communicative computing — a fixture, perhaps even a basis, for new media. The screen is relatively new on the scene, however. Early interaction with computers happened largely on paper: on paper tape, on punchcards, and on print terminals and teletypewriters, with their scroll-like supplies of continuous paper for printing output and input both.

By looking back to early new media and examining the role of paper (in both punning senses) we can correct the "screen essentialist" assumption about computing and understand better the materiality of the computer text. While our understanding of "materiality" may not be limited to the physical substance on which the text appears, that substance is certainly part of a work's material nature, so it makes sense to comment on that substance.

There were important screen-based systems early on — Spacewar, the first modern video game, developed at MIT in 1962; Ivan Sutherland's Sketchpad, also developed at MIT in 1962; Doug Englebart's NLS (oNLine System), developed at SRI and shown in the "mother of all demos" in 1968; and Grail, developed at the RAND Corporation in 1969 — but these were the high-budget exceptions to the rule. Most computer users, including those who were developing early electronic literature, did not have access to screens until at least the mid-1970s. I'll describe two early computer programs that were developed and experienced using an ink-and-paper interface:
electronic_literature  writing  media_literature  paper  materiality 
25 days ago
At Cortlandt Street Subway Station, Art Woven From Words - The New York Times
The Cortlandt Street station on the No. 1 line, which was wiped off the subway map on Sept. 11, 2001, will be much more than a local stop in Lower Manhattan when it reopens in 2018.

As a gateway to destinations of worldwide significance — the World Trade Center and the National September 11 Memorial and Museum — it will be weaving together past and present, present and potential, underground and surface, commuter trains and subway service, deep-rooted memory and momentary impatience.

Weaving. Not threads or reeds. But words.

Weaving is the symbolism behind the $1 million art project being designed for the Cortlandt Street station by Ann Hamilton, who was chosen by the Metropolitan Transportation Authority Arts and Design program. Ms. Hamilton, 58, a professor of art at Ohio State University in Columbus, creates large-scale multimedia installations.

Her Cortlandt Street project was approved on Wednesday by the authority board.

The construction of the station, which is to cost about $101 million, is expected to begin in mid-May.

In Ms. Hamilton’s concept, which is still evolving, texts would fill about 70 percent of the station’s walls in the form of an elaborate concordance, something like a crossword puzzle.

Text fragments reading horizontally would probably come from documents of international significance, like the Universal Declaration of Human Rights and the United Nations Declaration on the Rights of Indigenous Peoples.

At intervals, certain words from the horizontal texts would align to form vertical spines. Those words — like “human” and “justice” — would be common to passages from national documents like the Declaration of Independence, the Constitution and the Declaration of Sentiments, adopted in 1848 in Seneca Falls, N.Y., which held that “all men and women are created equal.”
text_art  transportation  subways  ann_hamilton 
25 days ago
David Theo Goldberg on Wallcraft: The Politics of Walling - Theory, Culture & Society
Historically, political walls were constructed around cities or their privileged inner core of political power to protect inhabitants or rulers from physical or political threat, from outsiders and (potential) troublemakers, disease and thieves. Political walls as fortifications, then, have tended to be constitutively linked to the life and risks of the city as/and state.

Walls have served also as solidifications against the inside turned out, against pollution of the body politic from within, against the paranoia of “self”-debasement, and the intrusion of the everyday, its bothers, conflicts and hazards. They stand as hedges against the uncertainties, indecipherabilities, and ultimately unknowability of the constitutive outside. They close off the “non-belonging,” the demarcating reminder that the other side of the wall conveys opacity, illegibility, and menace, physical or ideological, to those properly within the wall’s boundaries. The relative opacity of the outside reinforces insecurity and anxiety on the inside, licensing more or less limitless technologies of securitization. But they secure too, in the extreme, against departure of the disaffected within, against a brain or broader workforce drain.

Fortification of the polity through apparatuses of walling–what Eyal Weizman calls “wallfare” (Weizman 2012)–against threat and uncertainty tended to last from antiquity until the nineteenth century. In the seventeenth and eighteenth centuries, as Mintzker elaborates, cities were in fact explicitly conceived and defined in terms of their walled circumscription. Cities were “razed” not by their complete destruction but by pulling down their fortified walls. Settlements without walls were no longer considered urban, reduced to rural villages by nothing more than removal of their fortifications. The lack of boundary walls erased the demarcation between town and countryside, lived space and commerce shading into field. Razing the wall emasculated the city, sapping its power and making it as vulnerable to scorching as the rural....

This “modernizing” political defortification of the urban, Mintzker further shows, was pursued by the absolutist state in solidifying its power both across the landscape under its internal control and against the threats of potential external enemies. As absolutizing states sought to de-wall the cities within, they fortified national boundaries, walling and moating guard-post cities at the state borders, clearing the land beyond to defend against invasion from without. City walls came down within the state in the name of nationalizing coherence, unity, and administrative reach, and were erected around cities marking the boundary limits of state reach and power. ...

By the nineteenth century, city-circumferencing walls had tended to fall into disrepair at worst, remnant monuments to the past at best. Seen as disruptions of the expanding culture of technologically enhanced commerce and barriers to national unification within state borders, those cities that remained walled in nineteenth century Europe tended to be “defortified,” the walls either dismantled or willfully ignored in the city’s expansive commercial openness and spatio-demographic spread. National boundaries came to be cartographically marked less by the occasional walled city than by naturalizing landscapes of rivers, mountains, and seas that in principle appeared more obvious markers of division and in any case—at least until the proliferation of more mobile military technologies–easier to defend against (potential) invasion.

The challenge of constructing endless borderline walls of solid materials became moot after 1873 when readily usable, more mobile and malleable barbed wire became marketable, first for enclosing cattle and then for borders and warfare. Wiring’s prior porosity precluded its use in “national walling.” ... After World War II, solid political walls return to vogue, linked first to the perceived threat of communism, then to plugging the holes of unwanted migrations and threatening movement from wars, postcolonial struggles, and the proliferating political economy of privatization. Less surrounding than partitioning, political walls after World War II are forcefully wedged between polities, designed to cut off intercourse between peoples bent on challenging each other. ...

Political walls as apparatuses of securitization thus shift from rampart technologies of defense in warfare to political divider, from circum-fortifications, as barricades against invasion, to politicized partitions and privatizing insurance against criminalized intrusion. The shift, in short, is from keeping at bay the exterior enemy to estranging and externalizing the familiar and one-time neighbor. ...

Political walls thus materialize ideas about nationhood and sovereignty as much as they are pragmatic interventions in a political field. But they may also be metaphorical realizations of the material. When the stone, wood, brick, cement, barbed wire, glass, and steel from which walls are usually built actualize self-determination over territory, they materialize the political. Here, political walls are material manifestations of existing conflicts. They are cementing embodiments of existing conflicts even as they are concrete interventions in them.

Equally, though, political walls may be imaginary projections of political or legal conditions, enacting directed restrictions on targeted population groups, as in “Fortress Europe” or the “Iron Curtain.” Here, the fortification is not manifest in stone but as a mix of enforced legality and symbolic power....

In any case, political walls embed and embody symbolic representations of the ideological investments underpinning established conflicts prompting the walls’ production. Political walls thus fit into a political geography of checkpoints, access roads and highways, legalities of land clearance, an ecology of ideology, developing technologies and construction materials, and the legacies of historical walling. Walls route the passage of people, goods, and traffic, giving shape to legislation and regulation as well as definition to the reach of control and subjection....

The infrastructural ecology in which political walls are embedded, including the labor power necessary to maintain the wall’s force field structurally and politically, is embedded and embodied in the wall too. The watchtower as physical structure, surveillance technology, work site, and expression of subjugating power serves not simply as an addendum but as a constitutive feature of political wall making....

The wall itself issues conviction: who belongs and who does not, the character of the polity, its extension and delimitation. In short, who across the political landscape are to be sacrificed for the sake of stating and sustaining power. Political walls shape and cement community, fortifying not just territoriality and social extension but commanding the very idea of the polity and its culture of inhabitation....

To satisfy their instrumental purpose, political walls invariably demand supplementation: barbed wire, spikes, electrocution, moats, glacis (cleared spaces), policing and patrols, surveillance and searchlights, manpower and firepower. This suggests that as political orders come to be managed through the technologies of and supplementing walls, the wall requires constant maintenance. Political walls are intended as regimes of “humanitarian management” (Weizman 2012, pp. 81ff) of populations. They massage the flows of capital, financial and human, of goods and services, of information and political messaging. Yet, at the same time, walls themselves require round-the-clock management both materially and politically....

In their expansive surface-making, however, the possibility of a counter-logic exists, both potentially and actualized. Walls’ extensive exteriority not only conveys foreboding and forbidding, commanding imperatives and imposing segregation. It also issues an enticing call to engage, a surface to address those on the outer side of the wall, whether a medium for market expansion or political expression....

Militarizing discipline, marketing and message-making meet in the wall. Apparatuses of enclosure thus likewise make for screens of disclosure. Building designers and renovators are now factoring these commercial possibilities into the design features of buildings’ exteriors. Projection screens for news relays and commodity marketing are being built into and onto public-facing walls (as they are too into well-used building elevators).

So walls are also, at least potentially, surfaces of resistance, surfaces for resistance, surfaces on which resistant expression can be announced, made public, sometimes subtly and sometimes loudly. As constructions of repression, confinement, and delimitation, political walls ironically also provide the means for a call to arms, to resist, to imagine alternative ways of being than that represented by the repressing wall....

The inward turning or circumscribing also always has an outer “skin”. Walls’ exteriority offers an expansive canvas for commercial and critical message-making, for making walls both prolifically profitable and potentially self-conscious and self-critical. Writing on the wall projects a key technology of privatizing immediately into and onto public screen.
walls  architecture  cultural_technique  logistics  maintenance  labor  borders  graffiti  protest  categorization  classification 
4 weeks ago
Urban Giants on Vimeo
Between 1928 and 1932, Western Union and AT&T Long Lines built two of the most advanced telecommunications buildings in the world, at 60 Hudson Street and 32 Avenue of the Americas in Lower Manhattan. Nearly a century later, they remain among the world’s finest Art Deco towers—and cornerstones of global communication. “Urban Giants” is a 9-minute filmic portrait of their birth and ongoing life, combining never-before-seen construction footage, archival photographs and films, interviews with architectural and technology historians, and stunning contemporary cinematography.
media_archaeology  media_city  telecommunication  telco_hotels  data_centers  telegraph  video  infrastructure  media_history 
4 weeks ago
Aggregate – The Demise and Afterlife of Artifacts
Destruction of architecture and art objects is an ancient practice. From Troy and Tenochtitlan to Dresden and Munich and on to Bamiyan and Palmyra today, the obliteration of historic cities and heritage sites has taken place throughout history and across cultures...

only a handful of twentieth-century military conflicts sought the destruction of significant buildings for the sake of destroying them alone.4 In most cases, the main objective was to kill those who occupied the particular target site or building, or to eliminate the function the building served, such as the manufacture or storage of strategic material. Sometimes the destruction of certain monuments, such as national memorials and government headquarters, was meant to weaken the morale of the “enemy,” but that was the exception rather than the general aim of bombing. In contrast, the militant group known as the Islamic State in Iraq and Syria (ISIS) today aims to erase certain buildings and artifacts based on their specific meaning according to the militants’ own obscurantist interpretation. In other words, for ISIS, the ravaging of irreplaceable antiquities in Syria and Iraq is dictated by an understanding of their deviant referential significance, much like the relentless slaughter of “undesirable” people (such as those deemed unbelievers, members of ethnic minorities, and homosexuals) is doctrinally justified....

In addition to these recent calls, there have been many historic instances of active iconoclasm in the Middle East. The foremost is the one ascribed to the Prophet Muhammad himself, who reportedly ordered the idols in the ancient Kaaba smashed, which has helped attribute the deliberate destructions that followed in Islamic history to a widespread and religiously sanctioned iconoclastic urge. Thus, modern acts of demolition are often presented as stemming from an impulse to return to the example of the Prophet—a belief that is often ascribed to the Salafi ideology (which includes the much less prevalent and militant Jihadism).9
However, a deeper look into each act of deliberate destruction indicates that it is much more complex than a pure imitation of the Prophet or other precedents of supposed paradigmatic iconoclasm.....

Flood observes: “Although iconoclasm is often stigmatized as an act stemming from ignorance, this was a gesture that was particularly well informed about its own historical precedents.”12 He goes on to shed light on an even more significant cause, elucidating how iconoclasm was above all rooted in contemporary conflicts between Hindus and Muslims in India. ...

Echoing Flood’s sentiments, historian Elliott Colla maintains that most of the responses to ISIS’s destructions fail to adequately contextualize them. “There is nothing uniquely ‘Islamic’ about the ISIS attacks on pagan statues or antiquities sites,” he writes. “Just as there are long histories of vandalism and iconoclasm in the Arab and Muslim worlds, there are even older ones in the West, as the origins of the term iconoclasm should remind us.”15 In fact, if we go by historical evidence, the Islamic lands seem more tolerant of other cultures’ remains than many Christian territories. As historian of Islamic Art Oleg Grabar asserts, however ironic it might sound to some, the medieval Muslim world would have actually served as a haven for incarnation iconodules, or those who supported icons and their veneration, such as St. John of Damascus. ...

The region that ISIS controls or threatens, known as Bilad-al-Sham or the Levant (modern-day Syria, Lebanon, Jordan, Israel, and the Palestinian Territories) and Northern Iraq (Ancient Mesopotamia), is thus particularly significant because it has layered material evidence, from Islamic and non-Islamic traditions. Zainab Bahrani, a scholar of the ancient Near East, writes, “ISIS isn’t just focused on the pre-Islamic past; they’ve also destroyed so many Muslim shrines and mosques… we focus more on their destruction of pre-Islamic sites here in the States and in Western Europe, but they’ve actually destroyed a lot of Islamic and Christian and Yazidi and Sufi temples.”...

not all destructions stem from ignorance, ideological stances, or a shortage of resources, financial or otherwise. In truth, many progressive and modernist views have negatively affected the cultural heritage of the Middle East. Colonial urban renewal—for example, the destruction of Algiers’ historic core in the early years of French occupation—counts as destruction, as do the urban renewal schemes of the 1950s and 1960s that were carried out by newly independent governments with the help of European architects and planners.26 Indeed, the creation of many nation-states coincided with a simultaneous celebration of some monuments and a deliberate obliteration of others. ...

Other examples include historic sites that were damaged in wars waged by Western powers. ...

Undeniably, as Colla reminds us, museums and archeology were themselves mainly Western imports and Western concerns, which were then embraced by the region’s autocrats and non-autocrats alike, such as Ataturk, Saddam Hussein, Anwar Sadat, and the Shah of Iran, as well as the Western-educated class and the intelligentsia.31 But these concerns do not seem to have penetrated beyond that circle to the rest of the people. “The result of this history of colonial and despotic rule in the region,” Colla writes, “is that, despite the efforts of generations of well-meaning educators, indifference toward antiquities reigns,” not only because interest in antiquities is a Western import, but mainly because certain sites, artifacts, and entire historic periods did not make it into the national memory as it was being constructed in the colonial and post-colonial period.3...

Following a similar line of reasoning, certain destroyed or threatened sites, monuments, and artifacts receive more attention than others due to the interest of the tourist industry and museums in the West today. Thus, while many ancient and pre-Islamic artifacts and monuments of the Middle East are listed as world heritage sites, others, equally important but less touristically desirable, are not, and hence they do not receive much global attention. Moreover, while the media turns its full attention to the harm done to the world heritage sites such as Palmyra, it remains oblivious to the demolished spaces within which people carry out their everyday existence and the things they hold dear. In this regard, contributor Esra Akcan aptly asks, “Isn’t it a contradiction to mourn the destruction of monuments of cultural heritage but not the destruction of Palestinian villages?” ...

When it comes to the reproduction of lost artifacts, three-dimensional (3D) printing technology, rather than two-dimensional images, should be credited for opening up new possibilities.
war  destruction  cultural_heritage  ISIS  preservation  digital_archaeology  archaeology 
4 weeks ago
Digital Objects and Metadata Schemes - Journal #78 December 2016 - e-flux
ontologies can be simply described as metadata schemes, which define and hence give meaning to data. Beware: the term “ontology” here is different from how it is randomly used in the humanities today. I describe this evolution of metadata schemes as a genesis of digital objects, and we can see that with the ontologies of the semantic web, descriptions of data are more refined, and the objectness of these entities becomes very clear. I remember already in 2010, during a conference on the semantic web, an engineer said that we were no longer dealing with mere data, but things, in the sense that data had become things. And if we pay attention to what this means, we see that it is not simply about how to do categorization—though categorization remains a crucial question and practice. It is also that categorization becomes productive. It produces objects in their own right, like Kant’s concepts, and these objects are both real and material. In this sense we can talk about the onto-genesis of digital objects....
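To make the point about categorization becoming productive concrete: below is a minimal sketch in Python using the rdflib library. The ex: vocabulary, its class names, and the choice of Palmyra as subject are hypothetical, invented purely for illustration; no actual semantic-web ontology is being quoted. What it shows is the move the passage describes: a bare string, once a metadata scheme assigns it a URI, a class, and properties, becomes a typed, addressable "thing" rather than mere data.

```python
# A toy "ontology" in the semantic-web sense: a metadata scheme that turns a
# bare string into a typed, addressable object with relations to other objects.
# The vocabulary (http://example.org/vocab/) and its classes are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("ex", EX)

# Raw datum: the string "Palmyra". Once the scheme gives it a URI, a class,
# and properties, it is no longer mere data but a digital object.
site = EX["Palmyra"]
g.add((site, RDF.type, EX.HeritageSite))
g.add((site, EX.label, Literal("Palmyra")))
g.add((site, EX.locatedIn, EX["Syria"]))

print(g.serialize(format="turtle"))
```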

As an outsider to the main international standards organization for the World Wide Web, the W3C (World Wide Web Consortium), I have witnessed a move away from the semantic web towards a much more political aim of “re-decentralizing” the web, particularly in the post-Snowden period. Tim Berners-Lee was the original inventor of the web, back in 1991. His proposal for a new way to organize knowledge on the web, outlined in his 2001 article “The Semantic Web,” failed because of its inability to understand language (as Bernard Stiegler and others claimed). My interpretation would be that the naive multi-stakeholder approach got stuck in the monopolistic power politics of the stacks—Google, Facebook, Apple, and Microsoft—which demonstrated that they were uninterested in the formalistic, scientific rearrangement of protocols. In the end, the scientists were pushed aside....

The semantic web was intended to be a “world-building” project, and this is the reason Tim Berners-Lee called for “philosophical engineers,” who would not only reflect on the world but build the world—an echo of Marx’s thesis on Feuerbach. The semantic web aims for a world of automation. However, a world is more than automation; it also has politics, which the semantic web doesn’t take into account. I don’t think this is because the semantic web doesn’t understand language—and we have to admit that machines don’t deal with language in the way we do. ...

Contrary to what you have said, I am rather sure that Google, Facebook, Apple, and Microsoft are all interested in “the formalistic, scientific rearrangement of protocols”; however, they all want their own protocols, and so they are reluctant to all use the same standards. We have to recognize that there is an institutional politics between the W3C and its business members. I think someone who looked more deeply into the history of the W3C would have better insight on this. It is true that since the Snowden affair, the W3C has launched the Magna Carta project and the campaign “Web We Want.” However, since its launch it doesn’t appear to me that there has been much progress.
The other reason for the “failure” that we have described—and Stiegler has been claiming this for years—is that the semantic web did not allow for a “social web,” since its ultimate aim was the automation and standardization of data schemes. This is a different issue than the “cyber-libertarian” project of Julian Assange. Rather, it is a question of social organization and the organization of the social.
ontology  classification  semantic_web  stack 
4 weeks ago
Plastiglomerate - Journal #78 December 2016 - e-flux
What is a beach actually? It is marginalia, a footnote to the essay that is the ocean. Beaches are many things and can range from rocky outcrops to lush vegetation. But the sandy beach of popular imagination is made up of sediment, of particles coming from eroded coral reefs in the ocean, sediment from the sea floor, eroded sections of the continental shelf, or weathered and eroded rocks from nearby cliffs.2 In Hawai’i, volcanic basalt sometimes contributes to the mix, creating black beaches of small-to-tiny particles that are eroded by the constant, lapping wave action of the ocean. Beaches are far from sedentary. They are in constant motion, as wind and water wear away at rocks, coral, shells, and other matter. They also stretch across time as certain minerals, such as quartz and feldspar, are chemically stable and strong enough to last well through erosion, often forming the base of beaches millennia old.3 When plastics are released into the ocean, they join this process, being broken down into smaller and smaller parts and adding to the sand mixture on almost all coastal beaches. Note: an archive of pure sand is an impossibility.

Kamilo Beach, Hawai’i is a node where the ocean gets rid of foreign substances. The beach has long been known as a way station: stories are told that pre-contact, native Hawai’ians used the beach to harvest logs that had drifted into Kamilo from the Pacific Northwest, and that shipwrecked bodies often turned up there.4 Currently, Kamilo is a terminal point in the circulation of garbage. The beach and adjacent coastline are covered in plastic: as much as 90 percent of the garbage accumulated in the area is plastic. So much garbage collects here that Kamilo Beach can be found on Atlas Obscura’s compendium of bizarre and obscure places to visit, where it is described as “constantly covered in trash like some sort of tropical New York City gutter.”5 It is a site of immense efforts at cleanup organized by the Hawaii Wildlife Fund, a group that must constantly contend with the ocean’s supply of new materials.
***
In 2012, geologist Patricia Corcoran and sculptor Kelly Jazvac travelled to Kamilo Beach, following a tip from oceanographer Charles Moore that the beach was covered in a plastic-sand conglomerate. Moore suspected nearby volcanoes were to blame. In fact, the plastic and beach detritus had been combined into a single substance by bonfires. Human action on the beach had created what Corcoran and Jazvac named “plastiglomerate,” a sand-and-plastic conglomerate. Molten plastic had also in-filled many of the vesicles in the volcanic rock, becoming part of the land that would eventually be eroded back into sand.
The term “plastiglomerate” refers most specifically to “an indurated, multi-composite material made hard by agglutination of rock and molten plastic. This material is subdivided into an in situ type, in which plastic is adhered to rock outcrops, and a clastic type, in which combinations of basalt, coral, shells, and local woody debris are cemented with grains of sand in a plastic matrix.”6 More poetically, plastiglomerate indexically unites the human with the currents of water; with the breaking down, over millennia, of stone into sand and fossils into oil; with the quick distillation of that oil into fuel; and with the refining of that fuel into polycarbons—into plastic, into garbage. From the primordial muck, to the ocean, to the beach, and back to land, plastiglomerate is an uncanny material marker. It shows the ontological inseparability of all matter, from the micro to the macro....

The naming and dating of the Anthropocene, an as-yet formally unrecognized and heavily debated term for a geologic epoch evidencing human impact on the globe, relies “on whether humans have changed the Earth system sufficiently to produce a stratigraphic signature in sediments and ice that is distinct from that of the Holocene epoch.”7 While it is incontrovertible that humans have impacted the planet, the strata to measure that impact in the global geological record remains controversial. Is the signature change a layer of plastic sediment from the mid-twentieth century’s “Great Acceleration” of population growth? Does it begin with the Industrial Revolution’s massive deposits of CO2 into the atmosphere? Or perhaps it is lithospheric, with evidence found in the rise of agriculture some twelve thousand years ago? Maybe the start date of the Anthropocene can be traced to a single day, that being the first nuclear test—the Trinity test—in 1945, which deposited an easily measured layer of artificial radioactivity into the global soil.8 The term “Anthropocene” remains stable/unstable, “not-yet-official but increasingly indispensable,” writes Donna Haraway; near “mandatory” in the humanities, arts, and sciences, if not elsewhere.9 Whichever (if any) start date is chosen, plastiglomerate—a substance that is neither industrially manufactured nor geologically created—seems a fraught but nonetheless incontrovertible marker of the anthropogenic impact on the world; it is evidence of human presence written directly into the rock....

Noted for its convenience and durability, plastic emerged in part as a promise to displace other products that relied on animal remains and natural resources: bone, tortoiseshell, ivory, baleen and whale oil, feathers, fur, leather, cork, and rubber. “As petroleum came to the relief of the whale,” stated one pamphlet advertising celluloid in the 1870s, so “has celluloid given the elephant, the tortoise, and the coral insect a respite in their native haunts; and it will no longer be necessary to ransack the earth in pursuit of substances which are constantly growing scarcer.”...

Though plastic was invented just after the turn of the twentieth century, mass production of its synthetic organic polymers only began in the 1950s. Bakelite®, Styrofoam®, and Nylon® gave way to thermoplastic polymers, which could be molded and melted and remolded.11 Roland Barthes starts his meditation on plastic in Mythologies by noting, “Despite having names of Greek shepherds (Polystyrene, Polyvinyl, Polyethylene), plastic … is in essence the stuff of alchemy.” Plastic is the “transmutation of matter,” the transformation of primordial sludge into the modern, malleable, and convenient. Every fragment of plastic contains the geologic memory of the planet: “at one end, raw, telluric matter, at the other, the finished, human object.”...

Plastic soon shed its utopian allure, becoming hard evidence for the three c’s—the triple threat of capitalism, colonialism, and consumerism—as well as a kind of shorthand for all that was inauthentic and objectionable about postwar everyday life. Plastic was just the latest evidence of bio-cultural cynicism. As earlier forms of extraction—such as the exploitation of rubber from trees and animals for their products—became unfeasible, the continued expansion of the three c’s was made possible through new forms of extraction, such as resource mining and oil-field development....

The combination of rock sediment and plastic creates a charismatic object, a near luminous granite, pockmarked with color. ...

Plastiglomerate clearly demonstrates the permanence of the disposable.41 It is evidence of death that cannot decay, or that decays so slowly as to have removed itself from a natural lifecycle. It is akin to a remnant, a relic, though one imbued with very little affect. As a charismatic object, it is a useful metaphor, poetic and aesthetic—a way through which science and culture can be brought together to demonstrate human impact on the land. Thus, to understand plastiglomerate as a geological marker is to see it as unchanging. Plastiglomerate speaks to the obduracy of colonialism and capitalism. The melted veins of plastic that actually become the rock speak to how difficult it is to undo unequal relations of destruction.
materiality  anthropocene  geology  rocks  chemistry  matter  waste  garbage  plastic  temporality  colonialism 
4 weeks ago
Permanent Collection - Journal #78 December 2016 - e-flux
I’m deep underground, inside the Ōtsuka Museum of Art. Built into a hillside at Naruto, a small coastal town in southeast Japan, the museum has more than a thousand iconic works on permanent display. There’s da Vinci, Bosch, Dürer, Velázquez, Caravaggio, Delacroix, Turner, Renoir, Cézanne, van Gogh, Picasso, Dalí, Rothko—all the Western canon’s greatest hits. Even Michelangelo’s Sistine Chapel frescos are here, lining the walls of a custom-built hall.
To “acquire” the works in this collection, a technical team prints photographs of them, in full scale, onto ceramic plates. They then fire the plates at 1,300 degrees centigrade and follow with some hand-painted touch-ups. According to the museum’s marketing material, these painted-photographed-printed-baked-painted pictures will then survive for several millennia. “While the original masterpieces cannot escape the damaging effects of today’s pollution, earthquakes and fire,” reads a statement from the museum director Ichiro Ōtsuka, “the ceramic reproductions can maintain their color and shape for over two thousand years.”....

The novelty of touching the art soon wears away, because every surface is so neutralized. The artworks start to feel like one big piece of worn-out sandpaper—and the surface of time itself is flattened into a mythic, homogeneous continuity. This is what art worthy of preservation looked like to the Ōtsuka team at the end of the twentieth century, and—if everything goes according to plan—nothing is ever going to change.
In the 1990s, while the Ōtsuka Museum was amassing its collection of everlasting copies, Jean Baudrillard was decrying what he called “the Xerox degree of culture,” where “Nothing disappears, nothing must disappear.”2 With the Lascaux caves as his recurring example, Baudrillard questioned our increasing proclivity for preservation-by-substitution, where things that would otherwise be allowed to pass are forced into artificial longevity, via their simulacra....

The latest acquisition for the permanent collection is their first copy of a work of art that does not exist: a painting of sunflowers in a vase, by Vincent van Gogh, which was destroyed in Japan in 1945. Along with everything around it, the painting was turned to smoke and ash during a US air raid over Ashiya on August 5–6—around the same time as the first atomic bomb exploded over Hiroshima.
But according to the brightly colored ceramic plate now on show at Naruto—which was rendered from photographs that predate the picture’s incineration—World War II never happened....

Of course, the more expansive any attempt at a total comprehensive overview is, the more its inherent incompleteness will show through. At Ōtsuka the feeling is one of overwhelming excess—it’s the largest museum in Japan and seeing everything means walking for almost four kilometers—as well as alarming omission. For instance, there are hundreds and hundreds of works, but the female artists who have been invited into this grand narrative can be counted on one hand. ....

This is a version of art history with no sculpture, no video art, no performance or installation art, no ready-mades—only flat photographically reproduced paintings and some other things that are made to look like flat photographically reproduced paintings. A selection of medieval tapestries and Byzantine mosaics are included, as photographs fired onto ceramic boards—their textures completely flattened out. Stranger still are some Ancient Greek vases which have been photographed from all sides and printed as two-dimensional rectilinear planes, with shadows from the handles included as part of the image surface, indicating their former three-dimensionality. But although everything here depends on photographic technology, this is a history of art in which photographs have never featured as artworks in themselves. The camera is simply a vehicle that transfers images from surface to surface; it does not make its own images....

But if this is really about increased accessibility, we might wonder why the artworks that are selected for reproduction are already some of the most widely reproduced and accessible images of all time. ...

Writing in the 1940s, Malraux observed that the photographic document can liberate the object from its context and hierarchical positioning, as well as from its physical volume and prescribed dimensions.4 But unlike Malraux’s “museum without walls”—and unlike Taschen books or Google Art Project—the Ōtsuka team returns volume, weight, and location-specificity to the mechanically reproduced work of art. They turn dematerialized images back into singular, heavy objects with fixed dimensions and spatial positions, so the images don’t travel to us—we have to travel to them....

There is a broader issue here, which is about finding ways to look at artworks without taming their dynamic and durational capacities. When art historians seek to pin down works of art to a single date of authorial inception, the temporal multiplicity of the work is denied. Likewise, when conservators imagine returning a work to the condition of the “artist’s original intentions,” they fight against the ongoing durations of art objects—objects which always accumulate marks of their historical and material realities....

Such objects go to the museum when they are ready to withdraw from life. In Adorno’s words, “They owe their preservation more to historical respect than to the needs of the present.”5 But is there not also potential for strategies of reactivation within the museum-mausoleum? Can’t we try to think about ways of setting its contents in motion, in accordance with the needs of the present?
art  preservation  materiality 
4 weeks ago
Intelligence and Autonomy | Data & Society | An AI Pattern Language
In An AI Pattern Language, we present a taxonomy of social challenges that emerged from interviews with a range of practitioners working in the intelligent systems and AI industry. In the book, we describe these challenges and articulate an array of patterns that practitioners have developed in response. You can find a preview of the patterns on this page, and you'll find more context, information, and analysis in the full text.

The inspirational frame (and title) for this project has been Christopher Alexander's unique collection of architectural theory, A Pattern Language (1977). For Alexander, the central problem is the built environment. While our goal here is not as grand as the city planner's, we took inspiration from the values of equity and mutual responsibility, as well as the accessible form, found in A Pattern Language. Like Alexander's patterns, our document attempts to develop a common language of problems and potential solutions that appear in different contexts and at different scales of intervention.
artificial_intelligence  data_ethics 
4 weeks ago
How Do You Map the Character of a City? A New Tool Offers Solutions - The Urban Edge
Over the past few years, members of the National Trust’s Preservation Green Lab team have developed a concept they call a community’s “Character Score.” They are now using the recently launched Atlas of ReUrbanism to map that score to the streets of 50 major cities across America.

Character Score

Character Score is determined by combining three elements of any community's built environment: the median age of its buildings, the diversity of their ages, and its granularity, a measure of the density of buildings and lots. Areas with older buildings, a greater mix of old and new buildings, and smaller structures have higher Character Scores. The cities in the Atlas are broken down into 200-meter-square grids, each of which receives a score. Areas with the highest scores are often hubs of activity, usually located near the historic cores of towns, and often among the most economically productive in the city.
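The excerpt names the three ingredients but not the Atlas's actual formula, so the sketch below is only one plausible reading: equal weights, min-max normalization across a city's grid cells, and the inverse of median lot size as a stand-in for granularity are all assumptions, and the function names and sample data are invented for illustration.

```python
# A minimal sketch of a Character-Score-style calculation, assuming equal
# weighting of the three published ingredients; the Atlas's real formula
# and normalization are not specified in the excerpt.
from statistics import median, pstdev

def raw_components(building_ages, lot_areas_sqm):
    """Raw ingredients for one 200 m x 200 m grid cell."""
    return (
        median(building_ages),        # older building stock scores higher
        pstdev(building_ages),        # a wider mix of ages scores higher
        1.0 / median(lot_areas_sqm),  # smaller lots = finer grain
    )

def character_scores(cells):
    """Min-max normalize each component across all cells, then average.

    `cells` maps a cell id to (building_ages, lot_areas_sqm).
    Returns a 0-1 score per cell.
    """
    raw = {cid: raw_components(ages, lots) for cid, (ages, lots) in cells.items()}
    scores = {}
    for i in range(3):
        lo = min(r[i] for r in raw.values())
        hi = max(r[i] for r in raw.values())
        span = (hi - lo) or 1.0  # avoid dividing by zero if all cells match
        for cid, r in raw.items():
            scores[cid] = scores.get(cid, 0.0) + (r[i] - lo) / span / 3
    return scores

# Hypothetical example: an old, fine-grained core vs. a newer suburb.
cells = {
    "downtown": ([95, 80, 110, 12, 60], [300, 250, 400]),
    "suburb":   ([22, 25, 20, 24],      [900, 1100, 950]),
}
print(character_scores(cells))  # downtown should score higher
```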
mapping  smart_cities  data_visualization  quantification  methodology  preservation 
4 weeks ago
At a Tokyo Museum, the Design Is in the Details - The New York Times
The process of a design anatomy begins when Mr. Satoh’s research team secures a manufacturer’s agreement allowing them to analyze all of the available information on a product and its history, including inspections of laboratories, factories and warehouses, and interviews with people involved in all stages of its development and distribution. Even the most humdrum elements are included, because Mr. Satoh believes that every stage of a product’s evolution has meaning.

Each design anatomy begins with branding and marketing, on the grounds that they are the first aspects of a product that most consumers encounter. The researchers then work backward through manufacturing and development, to sourcing raw materials, before presenting the results.
design_studies  genealogy  methodology  discourse 
4 weeks ago
LAURIE FRICK
We’re halfway through the decade when humans shift from mysterious beings to big-data algorithms, where everything about us will be known. Rather than worry, I envision a time when personal data is a unique glimpse into our hidden personality. Patterns of behavior will become patterned artworks and the mass of data will predict our lives.
data_visualization  tactility  haptics 
4 weeks ago
2017 01 CRA Supermarket of the Future Milan - Google Drive
Working with supermarket chain COOP Italia, Ratti wanted to explore whether introducing digital information into food stores would affect the way that people interact with and select food.

The result is the Future Food District – a pavilion that functions as a real supermarket where visitors can purchase items. The difference is that the 1,500 products on display are positioned beneath digital mirrors that present information about the origins, ingredients and manufacturing of the foods.
food  merchandising  augmented_reality  data_visualization  media_space  provenance 
4 weeks ago
Future Food District | Carlo Ratti Associati
The Future Food District (FFD), a 7000 sq. m. thematic pavilion that explores how digital technology can change the way that people interact with food, will be unveiled at tomorrow’s opening of Expo Milano 2015 “Feeding the Planet, Energy for Life”. Designed by Italian design firm Carlo Ratti Associati, together with supermarket chain COOP Italia, the pavilion — lying at the heart of the exhibition grounds — explores how data could change the way that we interact with the food that we eat, informing us about its origins and characteristics and promoting more informed consumption habits.
 
“Every product has a precise story to tell,” says Carlo Ratti, founding partner of Carlo Ratti Associati and a professor at the Massachusetts Institute of Technology. “Today, this information reaches the consumer in a fragmented way. But in the near future, we will be able to discover everything there is to know about the apple we are looking at: the tree it grew on, the CO2 it produced, the chemical treatments it received, and its journey to the supermarket shelf.”
 
The Pavilion at Expo 2015 is a real supermarket, where people can interact with – and buy – products. Its interior will resemble a sloping warehouse, with over 1,500 products displayed on large interactive tables. As people browse, information about each product will appear on suspended mirrors augmented with digital content. “It will be like seamless augmented reality, without Google Glasses or any other cumbersome interface, where people can meet and exchange products and ideas,” said Andrea Galanti, project leader at Carlo Ratti Associati. “In a way, it is like a return to the old marketplace, where producers and consumers of food saw each other and had actual interactions.”
 
“We were inspired by Mr. Palomar from Italo Calvino’s book of the same name, who enters a fromagerie in Paris and thinks that he’s in a museum,” adds Ratti. “ ‘Behind every cheese there is a pasture of a different green under a different sky. Mr. Palomar feels as he does in the Louvre, seeing behind every object the presence of the civilization that has given it form.’ We believe that tomorrow’s markets will make us feel a bit like Mr. Palomar. Every product will have a story to tell.” This enhanced knowledge of products can, in turn, create new social links among people. “Think about leveraging the sharing economy and peer-to-peer dynamics to create a free exchange area where everyone can be both a producer and a consumer – almost an AirBNB of home-made products,” explains Giovanni de Niederhausern, COO of Carlo Ratti Associati.

The outside of the pavilion features the world’s largest plotter. The plotter, made of mechanical arms that move along two axes, draws on the facade using spray paint of different colors, transforming it into a dynamic data visualization fed by visitor-generated content. Again, information flows help reconfigure space. The Plaza outside the FFD supermarket will also showcase new ways of producing food, such as vertical hydroponic systems for growing vegetables, and algae and insect harvesting.
augmented_reality  food  provenance  merchandising 
4 weeks ago
A TSA Checkpoint for your Internet? - Public Knowledge
ruling that data sent over the Internet counted as “importation of articles” into the United States.

The case dealt with patents over Invisalign braces, which are made by scanning a person’s teeth, using a computer to generate several models for the plastic retainers, and then manufacturing the retainers. Obviously, if the plastic retainers themselves were being imported into the United States, then it would make sense for the ITC to hear this case. But here, only the data files of the models were being sent into the United States, and all the manufacturing was done inside the country. The ITC reasoned that those digital files were imported articles, and so were subject to the same customs process as plastic retainers or rubber ducks would be.

Usually a decision by a court or agency is meant to answer a dispute, but this decision just raises more questions. Is every digital data transmission into the United States an “importation”? What about telephone calls? Broadcast television? Satellite radio?
copyright  borders  intellectual_property  customs  infrastructure 
5 weeks ago
Back to the City | Mediapolis
I’m suspicious though about whether emphasising media or mediation as a kind of separate element which is “added” to the understanding of the city in the service of discovering something entirely “new” will get us very far. Lest we end up in a blind alley where mediation (whether as representation or technology in use) is contrasted to any “real”/”actual” city (whatever that might be), we might better keep in mind that there hardly ever was a moment in urban history that wasn’t mediated. Monumental constructions in ancient cities symbolised gateways to deities; the organisation of medieval towns expressed internal hierarchies, with the wall and the gates acting as both physical and mental thresholds; the renaissance mobilised the ornament and stained glass as a spatialised narrative; modern architecture declared its service to allegedly universal “human needs”; while postmodern moving image surfaces dramatized the communicative dimension (intrinsic to all forms) of built space, by showing images of other places (those beyond physical reach, such as live news) as well as being (relatively) open to interpretation....

As Morley points out, in some areas of media studies more than in others, sociological relevance tends to be drawn primarily from technological innovation. In studies of mediated cities, this tendency has led to the assumption that media matter only when they are “new” (or, currently, digital).1 In addition to this curious ahistoricism, the symbolic and technological have been given more relevance than anything else in assessing the ongoing transformation of urban living.
media_city  media_archaeology  urban_history 
5 weeks ago
Augmented Reality, Hologram-like Images Enter the Workplace - WSJ
The future of data visualization is unfolding on the factory floor of AGCO Corp., a manufacturer of agricultural equipment. Factory workers in Jackson, Minn., don augmented-reality glasses that display diagrams and images of instructions to help them conduct quality checks on tractors and chemical sprayers. Logging quality checks is up to 20% faster with the use of Google Glass, said Peggy Gulick, director of business-process improvement.

Next year, the company, based in Duluth, Ga., will experiment with computer-generated hologram-like images, using the three-dimensional images to help guide workers through the process of welding 30-foot booms to chemical sprayers.

The use of augmented reality, which superimposes digital content including hologram-like images onto a user’s view of the real world, is in the earliest stages of commercial development. But researchers at the Massachusetts Institute of Technology say improvements in the performance of AR equipment, like the Microsoft HoloLens, and expected reductions in its cost, will help drive the technology into the mainstream, specifically in the supply chain.

MIT is working to hasten and improve the process by constructing a multimillion-dollar Visual Analytics Lab where corporations and researchers there can experiment with computer-generated hologram-like images and interactive touch-screen walls embedded with layers of supply-chain data that is often obscured. That information could range from customer and product information to population, socioeconomic data and real-time traffic, weather and social-media data.
augmented_reality  data_visualization  logistics  supply_chain 
5 weeks ago
The Library of Congress Is Putting Its Map Collection on the Map | Smart News | Smithsonian
Recently, the Library of Congress (LOC) signed a memorandum of understanding stating that the world’s largest library will start sharing parts of its digital collection with the Digital Public Library of America (DPLA). As part of an effort to make these documents easily available from a central location, the LOC will begin by uploading 5,000 maps from three collections covering the Revolutionary War, the Civil War, and panoramic maps, Meier reports.
maps  archives 
5 weeks ago
Podcast | Yale Program in the History of the Book
Peter Stallybrass, Annenberg Professor in the Humanities and Professor of English, Department of English, University of Pennsylvania
What Is a Letter?

Joseph Howley, Assistant Professor of Classics, Columbia University
Inheritance, innovation, or inevitability?
The ancient table of contents at the dawn of print
book_history  textual_form  material_texts  embodiment 
5 weeks ago