The press release is the foremost tool of disruption

4 min read

I'd never heard of Audrey Watters before today; after reading this brilliant dissection of ed-tech futures, I hope to hear a great deal more from her in future.

Here’s my “take home” point: if you repeat this fantasy [of education-sector disruption through technology], these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

Cf. a favourite riff from a few years ago: "investor story-time".

But there's more good stuff:

It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.


“The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

This is the core trick of the huckstery end of futurology (which is, regrettably, the thicker, more visible and well-funded end); it is also, and not at all incidentally, the core trick of marketing and politics. "What I tell you three times is true."

And here's the glorious rabble-rousing closer:

... I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.



The Fortune at the Edge of the Network [Venkatesh Rao, annotated]

Fresh Venkatesh Rao newsletter instalment that does a pretty good job of teasing out the implications of taking a tektological look at infrastructure through the lens of network theory... so good a job, in fact, that I'm going to grab and notate the whole thing, because he's managed to encapsulate a bunch of points I've been struggling to phrase clearly.


1/ “The last mile” is a phrase used by engineers to talk about the last (“leaf”-like) segments of large networks with approximate center-to-edge topologies.

2/ In all sorts of network logistics (transport, telegraph, telephone etc), historically the "last mile" has been the bane of infrastructure. It’s where the messiest practical issues live.

3/  Right-of-way/eminent domain issues are politically/legally more complex (10 miles of cable laying in the countryside is easier than 1 block in a major city)

4/ Physical issues are more complex as well (water pipes, package deliveries, and fiber optics have different needs but often share pathways for geometry reasons).

[The above covers the basics, though it's far from basic -- see Keller Easterling's Organization Space.]

5/ Last-mile regimes need not look like “paths” at all: waterways, spectrum rights, line-of-sight (view obstruction in real estate, glide paths for airplane landing approaches, building shadows) 

6/ In the future, drone landing/takeoff logistics, Pokemon Go type AR-conflict rights, bikes vs self-driving cars, will present novel, subtle last-mile issues.

7/ Generally though, the bottleneck is increasingly moving from literal last mile to literal last inch. Phone-to-ear, UPS-truck parking spot to porch, NFC/bluetooth, cafe power outlets.

[In my own taxonomy, this means the bottleneck has moved to the interface layer.]

8/ In raw flow volume terms, the last mile probably accounts for the bulk of actual miles traveled by anything on a network due to sheer number of endpoints.

[Note this is the exact opposite of the way in which money tends to be allocated to network development and maintenance.]
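[Rao's point 8 is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are entirely made up for illustration -- a single 500-mile trunk feeding 100,000 one-mile leaf segments -- not numbers from the newsletter:]

```python
# Toy hub-and-spoke check on point 8: short leaf segments vastly
# outnumber the long shared trunk, so the "last mile" dominates
# total network mileage. All numbers are illustrative assumptions,
# not figures from Rao's newsletter.

backbone_miles = 500      # one long shared trunk (assumed)
endpoints = 100_000       # number of leaf nodes (assumed)
last_mile = 1.0           # miles per leaf segment (assumed)

trunk_total = backbone_miles
edge_total = endpoints * last_mile

edge_share = edge_total / (trunk_total + edge_total)
print(f"Leaf segments: {edge_share:.1%} of total network miles")
# Leaf segments: 99.5% of total network miles
```

[Even with a generously long trunk, the edge accounts for nearly all the mileage -- which is the mismatch with funding priorities noted above.]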

9/ The last mile is typically the last to go hi-tech. Containerization still stops and turns into break-bulk at city limits. Fiber optics still turns into local-loop copper (DSL) in many places.

10/ As the red !!! show in the cartoon, issues get trickier in last-block to last-inch land. It's still physically and legally complex, but that isn't the hardest part anymore.

11/ Two forces make the last block especially hard: increased demand and inequality. The case of physical packages illustrates this well.

12/ Increased demand is obvious: postal systems/FedEx etc weren't built with this much small-package flow in mind. Neither were front porches or mailboxes.

13/ Inequality is less obvious: in an unequal society there is more incentive for low-level theft and pilfering, easiest at the last block.

[Less obvious to those of us used to taking a systems perspective, perhaps; the incentive factor demonstrates just how obvious it is to those who live at the ragged edges of networks.]

14/ Anecdotally, theft from porches etc. has risen: more temptation, more people in an economic condition where they can be tempted. But careful how you interpret this. 

15/ As Anatole France sardonically observed, “The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.”

16/ Concierge services for accepting packages are now increasingly a necessity in bigger cities in middle class apartment buildings. More people are getting personal packages delivered at workplaces.

[Note that this may be a convenience issue as much as a security issue, at least in the UK context... I'd happily take the risk on the occasional pilfered package if it meant I never had to arrange another red-card redelivery, but YMMV, obvs.]

17/ You also increasingly have both large, low-value packages (e.g. cat litter) that are awkward for small locker-based systems or stairwells, and small jewelry-level value packages (iPhones)

18/ Buildings change slowly, especially in old cities with civic gridlock. It will take decades for new buildings to reflect last-block needs. Follow the writing of Kim-Mai Cutler for this action in San Francisco.

[So now we shift from (relatively) simple material logistics and on to service and data logistics...]

19/ Similar issues occur in other networks. Consider net metering models for solar power, charging needs of electric vehicles, shopping cart services, 1-hour delivery, meal-kit businesses, etc.

20/ There are now fights over charging in charging stations, homeowners are setting up informal charging services on lawns. Blue Apron customers pile up ice packs.

21/ Even more subtleties at the informational level: Airbnb etc. require more sophisticated security for the last block: key transfers, digital locks etc. Your wallet needs RFID scanner protection.

22/ And as more and more value in flow (VIF) is in the last block at any given time, incentives for conflict and crime increase.

23/ "Stealing" cable or electricity required some sophistication, "stealing" wifi was much easier…for a while. The opportunity space will increase at all levels of difficulty.

[Ubiquity of infrastructures plus proliferation of multi-system interfaces divided by privatisation/unbundling/splintering of 'utilities'... when markets encounter habituation, ugliness happens.]

24/ The Dyn DDoS attack relied heavily on IoT devices, particularly insecure surveillance cameras. The “attack surface” as security people call it, will only increase.

[Every new interface device is potentially an interface to any other networked interface. Chips with everything, as the headlines used to go.]

25/ ATM card fraud now uses very sophisticated last-inch tech: molded plastic fake keypads, fake stripe readers on top of real ones, tiny cameras. I recently had an ATM card compromised that way.

26/ The last block/inch also has a non-criminal economy developing: from unlocking smart-contract rental cars to power outlets in cafes that charge for a charge.

[Criminal economies are a signal of opportunity; this is just as true at the edge of the network as it is at the centre.]

27/ A lot is low-value/high volume so online micropayments arguments ("just make it free"/"not worth financializing") apply. But not all.

[Note that in this case it can be obfuscatory to focus overmuch on the material technology involved; what's interesting about these cases is how the technology gets folded into a service offer. Ownership and control over the interface layer is the opportunity recognised by criminal and non-criminal economic actors alike.]

28/ Frederik Pohl once said “the job of the sci-fi writer is to predict not the automobile but the traffic jam.” Traffic jams are usually at the leaves of infrastructure trees.

[Smart guy, Pohl. Good writer, too.]

29/ Literal traffic jams happen most near/in city downtowns. As s/w eats any network-provisioned service, traffic jams move further down into capillaries.

[s/w = software, I think?]

30/ I like the holographic principle as a metaphor for thinking about the effects of s/w-eats-a-network: more of the valuable information within a volume of space can live on its surface.

[OK, so this is where Rao's metaphor and one of my own come so close together that they almost bump noses: the infrastructural metasystem is also the metamedium, the medium of all media; hence all media is infrastructurally mediated; hence the metasystem is the veil upon which the Spectacle is projected. Logic of the Spectacle, cf. Debord: "that which is good appears, and that which appears is good"; extended by McKenzie Wark via William Gibson, "that which is secret is better [...] the secret is to the spectacle as art once was to culture. The secret is not the truth of the spectacle, it is the aesthetic form of the spectacle." So when "s/w-eats-a-network", what's really happening is that software is wrapping the deep function of the network up in a glossy package which takes Clarke's Third Law as its primary design principle.]

31/ For a network, the “volume” is the part behind the endpoints, which usually converges on one or more back-end centers. The “surface” is the set of all endpoints.

[This metaphor is really, really useful to me.]

32/ As a result, there is a LOT of economic value in the last block to last inch zone. C. K. Prahalad’s famous fortune at the bottom of the pyramid idea generalizes to “edge of any network.”

33/ In future, if current progress in brain implants continues, there may be an even bigger fortune in the “negative 1 inch” that goes into your head (disclosure: company mentioned in that article, Kernel, is a client).

[That's a pretty big 'if', IMO. But Rao knows his wider audience well, I suspect.]

34/ A general topological theory of why this happens is that a more informationally powerful technology induces a higher-resolution network structure.

35/ World-eating new technologies extend the resolution of basic infrastructure networks: tens of miles for trains/planes, miles for cars, blocks for electricity, inches for wireless.


36/ A network core can be defined as the low-resolution backbone where economics allows aggregation leverage, and low transaction costs for huge financial flows.

37/ This is anything you can call a “cloud” in some sense: a datacenter, a large dam, a power plant, a major interstate highway, a rail depot. I wrote about this idea in my Aeon essay American Cloud.

[Personal aside: Rao's American Cloud essay was part of the inspiration for m'colleague Adam Rakunas's second novel, Like A Boss.]

38/ At the edge otoh technology stops being organized by economics, and starts being organized by social norms at its resolution limit set by transaction costs: the price of an in-app purchase for example.

39/ So sociologically, the last mile/block/inch is where the market stops and what I call an economics of pricelessness, based on values and norms, starts to kick in.


40/ When large-scale disruption happens due to a major technology like s/w, social-norms space gets systematically pushed back by market space.

[Cf. Uber, Airbnb etc etc.]

41/ The ultimate reason is physics: this is the tendency towards "plenty of room at the bottom" (Feynman). As the market occupies that room, sociology (and in the future, psychology) yields to economics.

42/ The transient is ugly because while you're shifting regimes, you’re converting social capital into financial capital, hurting social-capital-rich types (think priests) and enriching platform builders (think unicorn CEOs).

43/ The urban manifestation of these dynamics is gentrification: technology extending the power of markets into our community lives at increasing resolution.

44/ But if you think this process is almost over, think again. It's just beginning. You could say iOS and Android represent gentrified and slum-like digital neighborhoods in the last inch.

[There's a side-spur argument to be made about FOSS and open systems in general, here; as Rao is suggesting, FOSS can't remove these tendencies from networks, but can make it easier for people to have some control over their interfaces.]

45/ You know the old saying, "your freedom of action ends where my nose begins”? This is about to get pretty literal. There is a power struggle right by your nose/ear.

46/ But it isn’t between free individuals and an enslaving techno-capitalist cloud. You never were that free an inch from your face. You were merely the captive of non-economic forces.

47/ At worst the struggle is between the tyranny of markets and the tyranny of unchosen neighbors. The tyranny of money and the tyranny of taboos.

[Scylla and Charybdis, eat your heart out.]

48/ At best though, what we have here is technology liberating you from the tyranny of neighbors. And which view is true for you is more within your control than you think.

49/ If you see technology as potential for increased agency, you can learn to rule the last mile like a gritty cyberpunk novel protagonist, even if you don’t own a billionaire platform.

50/ If you see technology as increasing agency only for privileged others, it will become a self-fulfilling prophecy and you will end up on the losing side of this process.

51/ You will also be on the losing side if you don’t recognize that tyranny of neighbors (“hell is other people”) is a factor, a dynamic the dystopian show Black Mirror explores well.

52/ In the Black Mirror future, technology does not contend with the power of communities. It becomes allied with it to suppress individual freedom even more.

[As the title of the series makes clear: it is merely reflecting society back at itself. Brooker repeatedly makes the point that he's not writing about technology, but that technology has become a handy way to enable plot points that would have been impossible just a decade ago (though the same phenomenon has killed off older plots, e.g. the missed phonecall). The (largely good-natured) joshing that BM has become "what if phones, but too much?" misses the point; BM's not about the phones, it's about the too much, and that's not a function of the phones.]

53/ If you think this is unlikely in the real world, think again: entire countries like France seem to be exploring that direction of evolution.

[UK, also.]

54/ This is not to absolve infrastructure titans and CEOs of big platform companies from all responsibility, or to abandon everybody to their own devices (heh!)

[No, but their position effectively denies us the possibility of taking that responsibility for ourselves; networks perform optimally as organisational monopolies, and as such are fundamentally incompatible with private ownership.]

55/ My buddy Tristan Harris has good thoughts on ethics in design for technology builders. I don’t always agree with the details of his thinking, but he’s right that with last-inch power comes great responsibility.

56/ If you’ve already decided “infrastructure creep” is bad, you’ll use dystopian metaphors like “tentacles of capitalism” or “eye of Sauron” or “the participatory panopticon” (for Black Mirror version).

57/ I personally tend to think of technology as ideology agnostic: this would happen even if we had a different ideology than neoliberal clickbaitism driving it. 

[We part ways a bit here: I'm with Kranzberg regarding the agnosticism or neutrality of technology, not least because technology is people and practices as well as material things, and people and practices are never ideologically neutral. However, I agree that a lot of the functions Rao is talking about here are endemic characteristics of networks in general, and would as such tend to occur even under different regulatory or socioeconomic regimes... but would they occur to the same extent, or at the same rate? I'm not sure, but I think it's a good question.]

58/ My preferred metaphor is the fingers/eyes of technology itself, considered as a whole (what Kevin Kelly calls the ‘technium’). 

[Ugh, Kevin Kelly. Swap all of this guff out for Haraway's cyborg metaphor, which does all the same work without trying to pretend that people and the technologies they use in their daily lives are analytically separable in any useful or believable way.]

59/ The “eyes” (or senses more generally) are getting incredibly precise in what they can see. I think of last-inch/click-tracking level “seeing” as “retina logistics” by analogy with Mac displays.

60/ The “fingers” of technology are getting increasingly delicate and precise as well. If the last-mile actuation capacity of the cloud was a sledgehammer, we’re at needlepoint now. Did your phone ding when this email arrived?

61/ This is scary to a majority, exhilarating to a minority, and as is the case for all big technology shifts, an existential crisis to those who don’t break smart.

62/ And consistent with the political/ideological position I generally adopt in breaking smart writings: overall, increasing sensing/actuation resolution of infrastructure is a good thing.

63/ The more fine-grained the presence of technology in our lives, the more generative potential there is for humans to level-up to new, more powerful modes of being.

[Generative potential is a double-edged sword.]

64/ Whether powerful technology existing an inch from your face is good or bad depends on how good you are at using it from that locus.

[True enough. Cropping off the last few points, which are mostly marketing, but the last one's worth saving for the first sentence in particular:]

70/ There is a nonzero-sum fortune to be created at the edge of the network...

[Yes... yes, there is. But it's slipping away, moment by moment.]



In which I find Amitav Ghosh's missing monocle, and return it to him that he might see more clearly

5 min read

Poor old Amitav Ghosh is wondering where all the fiction about climate change might be... when in fact it's right under his nose, and he simply chooses to disregard it as being insufficiently deserving of the label "literature".

Right in the first paragraph, he answers his question and immediately discards the answer:

... it could even be said that fiction that deals with climate change is almost by definition not of the kind that is taken seriously: the mere mention of the subject is often enough to relegate a novel or a short story to the genre of science fiction. It is as though in the literary imagination climate change were somehow akin to extraterrestrials or interplanetary travel.

If for "literary imagination" we substitute "bourgeois imagination", that last sentence is no surprise at all -- because this is about genre, which is a proxy for class.

And when Ghosh surveys the few examples of supposedly literary fiction that have dealt with climate change, look what happens:

When I try to think of writers whose imaginative work has communicated a more specific sense of the accelerating changes in our environment, I find myself at a loss; of literary novelists writing in English only a handful of names come to mind: Margaret Atwood, Kurt Vonnegut Jr, Barbara Kingsolver, Doris Lessing, Cormac McCarthy, Ian McEwan and T Coraghessan Boyle.

Now, I'll concede that most of them have preferred generic labels other than science fiction for their works at one time or another, but it's very hard to make the case that Atwood, Vonnegut and Lessing haven't written works that slip very easily into the sf folksonomy, while McCarthy has written a very successful dystopia. So that's half of Ghosh's successes demonstrably working in the speculative fiction tradition... but they can't be speculative fiction, because they're too good for that trash. They've won awards and stuff -- awards that aren't rocket-shaped. Ipso facto, no?

To his credit, Ghosh gets pretty close to the technical distinction in narrative strategy that demarcates the dichotomy he's observing, via one of Moretti's more interesting theory-nuggets:

This is achieved through the insertion of what Franco Moretti, the literary theorist, calls “fillers”. According to Moretti, “fillers function very much like the good manners so important in Austen: they are both mechanisms designed to keep the ‘narrativity’ of life under control – to give a regularity, a ‘style’ to existence”. It is through this mechanism that worlds are conjured up, through everyday details, which function “as the opposite of narrative”.

It is thus that the novel takes its modern form, through “the relocation of the unheard-of toward the background ... while the everyday moves into the foreground”. As Moretti puts it, “fillers are an attempt at rationalising the novelistic universe: turning it into a world of few surprises, fewer adventures, and no miracles at all”.

I offer that the absence of Moretti's fillers -- often but not always replaced with anti-fillers designed to re-enchant the novelistic universe, and make of the universe a character in its own right -- is a way to describe one of the more fundamental strategies of speculative fictions, where it is preferable to have a world with more surprises, more adventures, and more than the occasional deus ex machina. Moretti's fillers are basically the opposite of worldbuilding; they remove complexity, rather than adding it.

And here we see the true root of the problem, the reason no one who identifies as a writer of "serious" "literary" fiction can handle climate change in their work -- look at Ghosh's language, here, and tell me he doesn't feel the class pressure of genre (my bold):

To introduce such happenings into a novel is in fact to court eviction from the mansion in which serious fiction has long been in residence; it is to risk banishment to the humbler dwellings that surround the manor house – those generic out-houses that were once known by names such as the gothic, the romance or the melodrama, and have now come to be called fantasy, horror and science fiction.

It's clearly not that "the novel" as a form can't handle climate change: science fiction novels routinely invert the obstacles set out in Ghosh's piece in order to do their work. It's that to upset those particular obstacles is to break the rules of Literature Club, to go slumming it with the plebes of genre fiction: literary fiction can't write about climate change, or about any other topic that requires an understanding of the storyworld as a dynamic and complex system, because -- as a self-consciously bourgeois genre in its own right -- it cannot commit the sin of portraying a world where the bourgeois certainties no longer pertain, wherein hazard and adventure and unexpected events are revealed to be not merely routine, but to be the New Normal.

Take it from a squatter in the generic out-houses, Amitav old son: there's only one way you'll ever get literary fiction that deals with climate change -- and that's by acknowledging, however grudgingly, that not only was science fiction capable of being literature all along, but that science fiction began by asking the question whose suppression is the truest trope of the literary: what if the world were more important than the actions of individuals?


Move slow and fix things

3 min read

There's been a slew of recent good pieces coming from Aeon's partnership with The Maintainers, and this one by Patrick McCray is a doozy. Read the whole thing; I'm mostly putting these quotes here for my own ease of access, rather than trying to distill the essay.

Efficiency, therefore, is not some timeless universal value but something grounded deeply in particular historical circumstances. At various times, efficiency was a way of quantifying machine performance – think: steam engines – and an accounting principle coupled to the new applied sciences of mechanics and thermodynamics. It was also about conservation and stability. By the early 20th century – the apogee of Taylorism – experts argued that increases in efficiency would realise the full potential of individuals and industries. Dynamism and conservatism worked together in the pursuit of ever-greater efficiency.

But a broad look at the history of technology plainly shows that other values often take precedence over efficiency, even in the modern era. It would, for example, offer several advantages in efficiency if, instead of every apartment or home having its own kitchen, multiple families shared a communal kitchen, and indeed in some parts of the world they do. But in the prevalent ideology of domesticity, every family or even single person must have their own kitchen, and so it is.

Nor, despite what Silicon Valley-based techno-libertarians might argue, does technological change automatically translate to increased efficiency. Sometimes, efficiency – like the lone eccentric innovator – is not wanted. In the 1960s, for instance, the US military encouraged metal-working firms, via its contracting process, to adopt expensive numerically controlled machine tools. The lavish funding the Department of Defense devoted to promoting the technology didn’t automatically yield clear economic advantages. However, the new machines – ones that smaller firms were hard-pressed to adopt – increased centralisation of the metalworking industry and, arguably, diminished economic competition. Meanwhile, on the shop floor, the new manufacturing innovations gave supervisors greater oversight over production. At one large manufacturing company, numerical control was referred to as a ‘management system’, not a new tool for cutting metal. Imperatives besides efficiency drove technological change.

Bonus snippet:

Our prevailing focus on the shock of the technological new often obscures or distorts how we see the old and the preexisting. It’s common to hear how the 19th-century telegraph was the equivalent of today’s internet. In fact, there’s a bestseller about it, The Victorian Internet (1998) by Tom Standage. Except this isn’t true. Sending telegrams 100 years ago was too expensive for most people. For decades, the telegraph was a pricey, elite technology. However, what was innovative for the majority of people c1900 was cheap postage.


The arena of acceleration

5 min read

Fairly chewy piece here by Aaron Vansintjan, in which he does a little comparison of accelerationism and degrowth. Utopian visions (and the appropriate delimiting thereof) appear to be an important axis of difference... and both introduce the notion of desire into their theories of change, albeit in very different manifestations.

... it seems that a key uniting principle between accelerationism and degrowth is their promotion of utopian ideas. This might come as a surprise to those unfamiliar with the degrowth literature—recently, a whole book was dedicated to attacking the degrowth hypothesis as anti-modern and a form of “austerity ecology”. However, the fact is that degrowth thinkers have put a lot of thought into how to go beyond primitivist flight from the modern and envision a future that is low-carbon, democratic, and just. Despite the negative connotations that may come with a word like ‘degrowth’, there have been many positive, forward-looking proposals within the movement. Key concepts here include “desire”—that is, the emphasis that a just transition should not be forced but should come from people’s own political will; “commoning”—in which wealth is managed collectively rather than privatized; the support of innovative policies such as basic and maximum income as well as ecological tax reform; the resuscitation of Paul Lafargue’s demand for ‘the right to be lazy’; the embracement of ’imaginaries’ inspired by ‘nowtopias’—actually existing livelihood experiments that point to different possible futures.

The same is true for the accelerationists. Indeed, the launching point of Srnicek and Williams’ book is that much of leftist activism in the past decades has forsaken the imaginative, creative utopias which characterized left struggles of the past. Indeed, progressive activism, to them, has largely been limited to what they call “folk politics”—an activist ideology that is small in its ambit, focuses on immediate, temporary actions rather than long-term organizing, focuses on trying to create prefigurative perfect ‘micro-worlds’ rather than achieving wide-ranging system change. This, they argue, is symptomatic of the wider political moment, in which a neoliberal consensus has foreclosed any ability to think up alternative policies and worlds. And so they propose a vision of the future that is both modern and conscious of current economic trends. Like the degrowth movement, they propose that the dominant pro-work ideology must be dismantled, but unlike degrowth, they take this in another direction: proposing a world where people don’t have to submit to drudgery but can instead pursue their own interests by letting machines do all the work—in other words, “fully automated luxury communism.”

What unites the two is a counter-hegemonic strategy that sets up alternative imaginaries and ethics, that challenges the neoliberal moment by insisting that other worlds are possible and, indeed, desirable.

Fast-forward to some concluding remarks:

Perhaps this is the key ideological difference: accelerationists make such an extreme modernist gesture that they refuse the need to limit their utopia—there are only possibilities. In contrast, degrowth is predicated on politicizing limits that, until now, have been left to the private sphere. This might involve saying, in the words of one Wall Street employee, “I would prefer not to” to some technologies.


Through [Paul] Virilio’s eyes, the history of Europe’s long emergence out of feudalism into 20th century modernity was one of increasing metabolism of bodies and technologies. Each successive regime meant a recalibration of this speed, accelerating it, managing it. For Virilio, political systems—be they totalitarian, communist, capitalist, or republican—emerged both as a response to changes to this shift in speed and as a way to manage human-technologic co-existence.

What’s important for this discussion is that Virilio does not separate the two types of speed: changing social relations also meant changing metabolic rates—they are the same, and must be theorized simultaneously.

Doing so could be useful for both degrowth and accelerationism. While degrowth does not have a succinct analysis of how to respond to today’s shifting socio-technical regimes—accelerationism’s strong point—accelerationism in turn under-theorizes the increased material and energetic flows resulting from this shifting of gears. Put another way, efficiency alone cannot limit its disastrous effects. As degrowth theorists have underlined, environmental limits must be politicized; control over technology must therefore be democratized; metabolic rates must be decelerated if Earth is to remain livable.

It strikes me that what both accelerationism and degrowth lack, and what Virilio was implicitly arguing in favour of, is a better theory of infrastructure, given that infrastructure is the medium of metabolism, the arena of acceleration.

But then I would say that, wouldn't I? ;)


Dispatches from the Last Mile

2 min read

Plucking out a few important and (hah!) connected points from a Jan Chipchase splurge on travelling through "Kyrgyzstan, Tajikistan’s GBAO [Gorno-Badakhshan Autonomous Region] and China’s western provinces":

12: The premium for buying gasoline in a remote village in the GBAO is 20% more than the nearest town. Gasoline is harder to come by, and more valuable than connectivity.

42: After the Urumqi riots in 2009 the Chinese government cut off internet connectivity to Xinjiang province for a full year. Today connectivity is so prevalent and integrated into every aspect of Xinjiang society that cutting it off would hurt the state’s ability to control the population more than hinder its opposition. There are many parts to the current state strategy to limit subversion, the most visible of which is access to the means of travel. For example, every gas station between Kashi and Urumqi has barbed wire barriers at its gates, and someone checking IDs.

60: The difference between 2.5G and 3G? In the words of a smartphone-wielding GBAO teenager on the day 3G data was switched on in her town, “I can breathe”.

Incredibly rapid habituation to networked and wireless IT among hinterland populations; technoscientific "seeing like a state" governance paradigm continues apace; controlling movement of physical materials in space easier and more effective than controlling movement of information in context of governing hinterlands. (Look out for that latter one in Brexit Britain; the weak signals have been there for a long, long time.)


Your humble servant: UI design, narrative point-of-view and the corporate voice

5 min read

I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.


By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises the presence within the relationship of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well, in that this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).

This fits well with the emergent program among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.


On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate third-person narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic: in its implication of unity (or at least consensus), and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.


In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.


All Problems Can Be Illuminated; Not All Problems Can Be Solved [Ursula Franklin]

While producing wonderful artifacts and mind-blowing techniques, prescriptive technologies create a world in which it’s normal to do what we’re told, and to do so without the ability to control and shape the process or the outcome. They also require a command and control structure. A class of experts—the architects, the planners—and others who follow the plans and execute the tasks. This structure creates a “culture of compliance . . . ever more conditioned to accept orthodoxy as normal and to accept that there is only one way of doing ‘it.’” A view through Franklin’s lens reveals that, as a “byproduct” of what we call progress, we have created societies easily ruled and monitored—and accustomed to following orders whose ends they don’t question.


Innovation dynamics in the metasystemic stack

2 min read

Joi Ito expresses some misgivings (far milder than my own) about "the Bitcoin community", and along the way provides this gem of a case-study:

One of the key benefits of the Internet was that the open protocols allowed innovation and competition at EVERY layer with each layer properly sandwiched between standards developed by the community. This drove costs down and innovation up. By the time we got around to building the mobile web, we lost sight (or control) of our principles and let the mobile operators build the network. That's why on the fixed-line Internet you don't worry about data costs, but when you travel over a national border, a "normal" Internet experience on mobile will probably cost more than your rent. Mobile Internet "feels" like the Internet, but it's an ugly and distorted copy of it with monopoly-like systems at many layers. This is exactly what happens when we let the application layer drag the architecture along in a kludgy and unprincipled way.

Historically, the application layer of a network system pretty much always drags the architectural layer along, because the application (or interface) layer is governed by commercial incentives to innovate; those commercial incentives may result in improved functionality, but they are just as likely (if not depressingly more so) to result in the appearance of improved functionality (which is a very different thing, and sometimes the exact opposite).

This isn't to say that the architectural (or infrastructural) layer has no influence in the other direction, of course, but infrastructure is by necessity a very slow game: big-ticket projects on the largest of geographical scales. The interface layer is inevitably more nimble, more able to iterate quickly; when the interface layer in question is pretty much pure software (as in the example of the blockchain), that is even more the case, because the opportunity cost of iteration and testing is so low, and the potential rewards so ridiculously high. (However, the infrastructural layer is far from innocent, as the battles over Net Neutrality indicated very clearly.)

As Ito indicates, and historical evidence supports, open protocols and shared standards between sociotechnical systems lower costs and open up the field for innovation to *all* players in the stack, not just to the interface developers.

That alone should tell you exactly why Silicon Valley dropped the Open Web.


Leading with an apology: some thoughts on innovation in communications

5 min read

Something I'm finding interesting about the New Newsletter Movement (which isn't really a movement, but is surely a definite phenomenon in a certain slice of the internets) is the normalisation of the Extended But Friendly Unsubscribe Disclaimer, wherein profuse preemptive apologies are made for the possible cluttering of inboxes, and the ease of avoiding such is highlighted. It's not surprising -- on the contrary, it serves to highlight that the move to newsletters was driven at least in part by a sense that there is an excess of push-notification demands on people's attention, and that we all know they're no fun any more (even if we're still occasionally unwilling to say so).

Email is a fairly pushy medium too, of course (which is why it's such a popular topic for those work/life balance articles), but it seems to me to have two main merits in the context of the current communications retrenchment: firstly, there are a lot more third-party tools and techniques for managing email as multiple flows and categories of comms (including, crucially, easy blocking and blacklisting); secondly, no one can envisage being able to give up email forever, so the inbox is both a comfortable and secure place in which to set up one's ultimate data redoubt. Hence newsletters: they're a one-to-many subscriber-based push medium, much like socnets, but -- crucially -- the interface through which both the sender and the receiver mediate and adjust their experience of communicating via newsletters, namely the inbox, does not belong to the company providing the transmission service. 

Sure, that interface may well belong to someone other than the end-user -- most likely G**gle or another webmail provider -- but the point is that the route between sender and receiver has a whole bunch of waypoints, seams between one system or platform and another where one or another of the communicants can step in and control their experience. With FarceBork or Twitter, that communicative channel -- the interface apps, the core protocol and its design principles -- is all in-house, all the time, a perfect vertical: it works this way, that's the only way it works, take it or leave it. (Note that it takes either network effects or addiction mechanisms, or possibly both, to build the sort of product where you can be so totalitarian about functionality; note further that network effects are easier to achieve in closed and/or monopoly networks.) So the newsletter is a point of compromise: a one-to-many-push model which retains plenty of control at both the author and reader ends.

And so we have a situation where one of the most common features of the use of a particular opt-in medium is a disclaimer about how easy it is to avoid further messages from the same source. I find this of some considerable interest -- not least because rather than being a technical innovation, it's actually a reversion to older technologies which have been rearticulated through a new set of social protocols and values.

That said, it's a little odd that we've jumped all the way back to email, skipping over the supposedly-failed utopia that was the Open Web (or whatever we're now calling it in hindsight): y'know, blogs, aggregators, pingbacks, RSS, all that jazz. I do hear some lamenting for the Open Web, but it tends to be couched in a way that suggests there's no going back, and that the socnets pushed all that out of the way for good. And while that may be true in commercial terms, it's not at all true in technical terms; I can't speak to the change in running overheads, especially for anyone running anything more than the website equivalent of a lemonade stand, but all that infrastructure is still there, still just as useable as it was when we got bored of it. Hosting is cheaper and more stable than it was a decade ago; protocols like RSS and pingbacks and webmentions only stop being useful when no one uses them.
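That claim about the protocols still being usable is easy to check for RSS, at least: it's plain XML, consumable with nothing more than a standard-library parser. Here's a minimal sketch in Python -- the feed content and URLs are invented for illustration, not taken from any real blog:

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical RSS 2.0 feed, inlined for illustration;
# a real aggregator would fetch this over HTTP from a blog's feed URL.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>https://example.com/first</link>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second</link>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_feed(FEED):
    print(title, "->", link)
```

An aggregator is essentially this plus periodic HTTP polling and a read/unread database -- which is rather the point: the hard part was never the technology.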

So why didn't we go back to blogging? After all, the genres of writing in newsletters are very similar to those which were commonplace on blogs, it's a one-to-many-pull medium (so no accidental inbox invasions), and the pertinent protocols are just sat there, waiting to be written into software and used again.

But it's a lot more effort to run even a small blog than to run a newsletter (you effectively outsource all the work besides the writing to your newsletter provider, for whom it's less a matter of work and more a matter of maintaining automated capacity), and you still have to go "somewhere else" (whether directly to the site, or to an RSS aggregator) to catch up with the news from others. Newsletters are just easier, in other words -- sufficiently easy that the inherent deficiencies of the medium don't seem too much of a chore to manage, for sender or receiver.

Whether that remains the case for newsletter authors with very large audiences, I have no idea -- and how long it will remain the case is just as open a question, as is the question of where we'll move our discourse to next. However, it's pretty clear that the newsletter phenomenon thumbs its nose at the standard models of innovation, wherein we transition to new technologies on the basis of their novelty and/or technological advantages. This is good news, because it means that we're perfectly capable of rearticulating the technological base of the things we do in response to changing social meanings and values -- and perhaps it even suggests that those meanings and values are more influential than the supposed determinism of the technological stack itself.

We can but hope, I guess.