Hanjin: optimisation is the enemy of resilience

3 min read

So a big shipping company went bankrupt -- why should you care? Because it's a sign of serious trouble in the global infrastructural metasystem also known as "the supply chain":

With little or no inventory of essential goods and raw materials retailers and manufacturers are subject to disruptions all along their supply chains which reach around the globe. A breakdown at any step can quickly bring activity to a halt on the factory floor or on the sales floor.

Just-in-time is very efficient financially (until, of course, it isn't). Little money is tied up in inventories or the space to warehouse them. But just-in-time is not very resilient. It used to be that businesses stockpiled goods and critical resources to ensure against disruptions. But the advent of computerized tracking combined with more efficient shipping practices worked to end the stockpiling of inventories.

[...]

The Hanjin bankruptcy also calls into question the wisdom of allowing so much freight--7.8 percent of all trans-Pacific U.S. freight--to be handled by one carrier. And yet large size and just-in-time systems create what economists like to call economies of scale. Goods and services are provided more cheaply.

But such systems are not resilient. Resilience often requires redundancy and that spells inefficiency in today's business climate.

This problem is endemic to the majority of infrastructures, if not all of them. Optimisation is the enemy of resilience -- and, indeed, can end up being counterproductive. All complex systems end up with a certain amount of loss to noise and friction, and it is often possible to iterate much of that lossiness away by tweaking the system, adding feedback loops, that sort of thing. But there's a problem not unlike the EROEI problem in energy extraction, in that once the major problems are fixed, the minor problems that remain become ever more subtle and difficult to work on, and you eventually reach a tipping-point where you're expending as many resources on trying to squelch the noise as you expect to recover by squelching it (which takes you into Red Queen's Race territory, wherein you're running as fast as you can simply to stay in place).

This is compounded by an approach to systems management that indulges in what Haraway identified as the God's-eye view -- it is impossible to truly understand any system to which you perceive yourself as being somehow external or superior.

But mostly it's a bottom-line thing: businesses like Hanjin compete on capacity, as pointed out above, which means that profit margins are very, very thin (a fact obscured by the sheer number of transactions), and the arbitrage systems on which the market is based keep a downward pressure on price (to the extent that it is often possible to find shipping capacity available at negative prices -- capacity which the carrier will effectively compensate you for using). The Hanjin bankruptcy may mean we've reached the point where the profit margin of running a sizable shipping company is at parity with the inescapable losses from noise in the system: they effectively cancel each other out, and the organisation runs at a net loss.

What happens when there's no money to be made in moving matter around?

 

Infrastructure as community

1 min read

Quoth Adam Rothstein:

... I think this is one thing we are doing when we look at infrastructure--we are looking for the beginning of a way out. We are looking for a way to build a new community, by finding new infrastructural tools that would describe a different sort of community, not just one more, sequentially next faction centered around a few particular goals. Our societies are more than capable of replicating themselves. What we are looking for, perhaps, is a way to change the direction of our own production, not by demanding that change from people oppressed by production, but by looking for the designs within the infrastructure of that production itself.

When we talk about seizing the means of production, we ought not to be simply talking about the production of commodities, but about the production of our communities. And when we say seize, we ought to mean the whole thing, from our bodies to the pipes that connect our bodies to each other. And we ought to do it together.

 

'Innovation' must die / that infrastructure might live

3 min read

Via Deb Chachra, an excellent essay by Andrew Russell on the overlooked issue of the age: maintaining the infrastructural metasystem we've got (as opposed to fantasising about the infrastructure we'd build if physics and socioeconomics didn't matter).

I commend the whole piece to you, assuming you're even vaguely interested in my own field of research; it speaks the great policy-unspeakable of infrastructure, namely the fragility of the legacy systems upon which the cutting edge is always-already functionally dependent, and the thinning and effacement of the (often low-paid, low-rights) labour that keeps it running.

I'm going to pick out one of its subthemes for closer inspection, however, as it echoes an argument which has been emerging from my own research: that somewhere along the line, we came to the damaging conclusion that 'innovation' is best defined as 'something that technology entrepreneurs (might) do (provided they're appropriately incentivised)'. Take it away, Russell:

...it is crucial to understand that technology is not innovation. Innovation is only a small piece of what happens with technology. This preoccupation with novelty is unfortunate because it fails to account for technologies in widespread use, and it obscures how many of the things around us are quite old. In his book, Shock of the Old (2007), the historian David Edgerton examines technology-in-use. He finds that common objects, like the electric fan and many parts of the automobile, have been virtually unchanged for a century or more. When we take this broader perspective, we can tell different stories with drastically different geographical, chronological, and sociological emphases. The stalest innovation stories focus on well-to-do white guys sitting in garages in a small region of California, but human beings in the Global South live with technologies too. Which ones? Where do they come from? How are they produced, used, repaired? Yes, novel objects preoccupy the privileged, and can generate huge profits. But the most remarkable tales of cunning, effort, and care that people direct toward technologies exist far beyond the same old anecdotes about invention and innovation.

Innovation is people doing things. Seriously, that's it. Sure, they may end up doing those things in ways that are enabled by technologies and infrastructures, and some of those technologies and infrastructures may indeed have emerged first and foremost from entrepreneurial activity rather than collective sociopolitical action (though, uh, probably not as many as you'd like to think?)... but people innovate all the time in places where infrastructures and/or the appropriate interfaces through which to exploit them are absent or beyond their reach. Superflux are relentless in their advocacy of jugaad, and with good reason: it's how the majority of human challenges have been solved, and likely always will be. No MBA required.

But back to Russell for a final sharp poke at the semantic bubble of 'innovation':

... emphasising maintenance involves moving from buzzwords to values, and from means to ends. In formal economic terms, ‘innovation’ involves the diffusion of new things and practices. The term is completely agnostic about whether these things and practices are good. Crack cocaine, for example, was a highly innovative product in the 1980s, which involved a great deal of entrepreneurship (called ‘dealing’) and generated lots of revenue. Innovation! Entrepreneurship! Perhaps this point is cynical, but it draws our attention to a perverse reality: contemporary discourse treats innovation as a positive value in itself, when it is not.

*pop*

 

On the seductive obduracy of infrastructure fictions

7 min read

If there's one good thing to come out of the current race-for-the-gutter in Western political discourse, it's that we're starting to talk about rhetoric and narrative with a sense of urgency. Better late than never, eh?

Here's a bit from a Graun piece on Trump, Brexit et al:

The fourth force at work is related to our understanding of how persuasive language works. Over the course of the 20th century, empirical advances were made in the way words are used to sell goods and services. They were then systematically applied to political messaging, and the impressionistic rhetoric of promotion increasingly came to replace the rhetoric of traditional step-by-step political argument. The effect has been to give political language some of the brevity, intensity and urgency we associate with the best marketing, but to strip it of explanatory and argumentative power.

"The impressionistic rhetoric of promotion"; make a note of that phrase. Note also that advertising and marketing -- those colourful Mad Men! -- were industries that emerged very directly from the propaganda machineries of the second world war, on both sides. (It wasn't just Nazi rocket scientists who found new gigs on the other side of the Atlantic.)

The political aspect is ugly enough, but there's an extent to which that particular nastiness is at least a known quantity, even if it's only responded to with a sort of nihilistic mistrust rather than vigorous critique: to say that politicians purvey bullshit is such a truism that even the cynical tend to act as if embarrassed that you saw fit to raise the point at all. Of course politics is performed like marketing now; what did you expect?

However, the corollary of that observation -- that marketing is performed like politics -- is a somewhat harder sell (if you'll excuse the deliberate pun). But it's no less true for that: as I've argued elsewhere, political narratives and the narratives of advertising both fall under the metacategory of narratives of futurity:

... “futures” are speculative depictions of possibilities yet to be realised, as are “designs” [...] in this, they belong to a broader category of works that includes product prototypes, political manifestos, investment portfolio growth forecasts, nation-state (or corporate) budget plans, technology brand ad spots, science fiction stories, science fiction movies, computerised predictive system-models, New Year’s resolutions, and many other narrative forms. While they may differ wildly as regards their medium, their reach, and their telos, all of these forms involve speculative and subjective depictions of possibilities yet to be realised; as such, labelling this metacategory as “narratives of futurity” avoids further diluting the (already vague) label “futures”, while simultaneously positioning “futures” among a spectrum of other narrative forms which use similar techniques and strategies to a variety of ends.

To avoid further self-citation, that paper goes on to outline some basic components of the rhetorics of futurity: the techniques through which narratives of futurity are shaped in order to achieve certain effects. These can be observed in political narratives and in advertising... but they can be (and should be!) observed in the popular technoscientific discourse, whether in the form of formal "futures scenarios", or the less formal pronouncements of Silicon Valley's heroic CEO class.

So it's of great relief to me that people are starting to do so. Here's a bit on the fintech industry's revival of the "cashless society" dream, for example:

This is the utopia presented by the growing digital payments industry, which wishes to turn the perpetual mirage of cashless society into a self-fulfilling prophecy. Indeed, a key trick to promoting your interests is to speak of them as obvious inevitabilities that are already under way. It makes others feel silly for not recognising the apparently obvious change.

To create a trend you should also present it as something that other people demand. A sentence like "All over the world, people are switching to digital payments" is not there to describe what other people want. It's there to tell you what you should want by making you feel out of sync with them.

To make a "future" happen, in other words, one should aim to convince one's audience that a) it already is happening, and that b) they're missing out.

(Those who share my misfortune in having read a number of novels by arch-libertarian fantasist Terry Goodkind may recognise this as a variation on the 'Wizard's First Rule' -- a topic which I keep meaning to rant about at greater length.)

But how to give the as-yet-unrealised a sheen of plausibility? Here's another (different) piece at the Graun on technological mythmaking:

... most technological myths mislead us via something so obvious as to be almost unexamined: the presence of human forms at their heart, locked in combat or embrace. The exquisite statue, the bronze warrior, the indestructible cyborg – the drama and pathos of each plays out on a resolutely individual scale. This is how myths work. They make us care by telling us a story about exemplary particularities.

It’s a framing epitomized not only by poems and movies, but also by the narratives of perkily soundtracked adverts. You sit down and switch your laptop on; you slip into your oh-so-smart car; you reach for your phone. “What do you want to do today?” asks the waiting software. “What do you want to know, or buy, or consume?” The second person singular is everywhere. You are empowered, you are enhanced, your mind and body extended in scope and power. Technology is judged by how fast it allows you to dash in pursuit of desire.

(Don't even get me started on the total absence of desire from the popular models of "innovation" or "technological transitions", or whatever we're calling it this week.)

A successful narrative of futurity can be astonishingly obdurate. When I gave my "Infrastructure Fiction" talk to Improving Reality 2013, I was lucky enough to have been gifted a perfect example by no less generous a man than Elon Musk, in the form of his 'transportation alpha concept', Hyperloop. Three years on, and despite countless engineers and architects and planners pointing out the insoluble flaws in the idea, the Hyperloop zombie shambles on... and the damned thing is even raking in investment from people who, if they don't know better themselves, should surely at least be employing some people who do know better.

But why is that a problem? Am I not just pooh-poohing a brilliant visionary who's trying to make a difference to the way we run the world, and those trying to make his dreams a reality?

We just can’t sustain economic growth without improving our infrastructure. Any government that takes the Hyperloop hype that “this is happening now” at face value risks wasting precious resources on an idea that may never become reality – all the while, not spending those resources on technologies, like high-speed rail, that exist and deliver real benefits.

Leaving aside the shibboleth of economic growth for another time, that's the problem right there: narratives of futurity occlude the reality of the lived present. Marketing and adverts seduce; futurity is the plane onto which desire is projected. Meanwhile, the success and acclaim of narrators like Musk add cachet and appeal to their stories; after all, the guy founded Amazon, right? Well, you wouldn't want to miss out on his next great success, now would you?

I think it telling that neither of the groups trying to develop Hyperloop is funded by Musk, who presumably has the sense to get someone to run a CBA before he starts spending money: he critiqued his own story, in other words, and revealed it to be wanting.

But don't for a moment imagine that he and others like him aren't aware of the seductive power of narratives of futurity. They are, in truth, the only thing that Silicon Valley has ever sold.

 

Sf and solutionism / QuantSelf and behaviourism

2 min read

Evidence, if such were needed, that C20th science fiction and the solutionist impulse are two prongs of the same fork:

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor [and populariser of pulp science fiction] Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

(I'm fairly sure there are still a few big names in sf whose approach to writing and life very much resembles Gernsback's Excludo-Helm(TM), if only metaphorically so.)

The above is excerpted from a pretty decent New Statesman joint that makes a clear and explicit comparison between the Quantified Self fad and B F Skinner's operant conditioning; shame they didn't reference any of the people who've been arguing that very point for the past five years or so, but hey, journalism amirites?

 

Lessons from infrastructural history: Angkor Wat edition

1 min read

Perhaps Ozymandias died of thirst?

Evans, however, now believes that environmental factors played a significant part [in the collapse of Angkor Wat]. “Looking at the sedimentary records, there is evidence of catastrophic flooding,” he says. “In the expansion of Angkor, they had devastated all of the forests in the watershed, and we have detected failures in the water system, revealing that various parts of the network simply broke down.” With the entire feudal hierarchy reliant on the successful management of water, a break in the chain could have been enough to prompt a gradual decline.

Optimisation is the enemy of resilience. And if you think that you don't live in a feudal hierarchy reliant on the successful management of water, I recommend that you look at capitalism from a slightly different angle.

 

Your humble servant: UI design, narrative point-of-view and the corporate voice

5 min read

I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.

[...]

By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises the presence within the relationship of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well, in that this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).

This fits well with the emergent program among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.

[...]

On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate third-person narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic: in its implication of unity (or at least consensus), and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.
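To make the point concrete, here's a purely hypothetical sketch -- not drawn from any real product's codebase or style guide, just an illustration under my own assumptions -- of how those point-of-view choices might be centralised in an interface's copy strings:

```typescript
// Hypothetical sketch: point-of-view as a single, explicit copy decision.
// Neither the types nor the strings come from any real product or style guide.
type PointOfView = "first" | "second" | "corporate";

// The possessive each POV puts in front of a menu label:
// "My Files" (product as extension of the user), "Your Files" (product as
// personified assistant), "Our Files" (the people behind the product step in).
const possessive: Record<PointOfView, string> = {
  first: "My",
  second: "Your",
  corporate: "Our",
};

function menuLabel(item: string, pov: PointOfView): string {
  return `${possessive[pov]} ${item}`;
}

// Same feature, three different narrators.
(["first", "second", "corporate"] as const).forEach((pov) => {
  console.log(menuLabel("Files", pov)); // "My Files", "Your Files", "Our Files"
});
```

Trivial as it is, it makes the editorial decision visible: the narrator of the interface is chosen in one place, and everything downstream inherits it.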

#

In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.

 

The end of the codex and the death of Literature

2 min read

Interesting (and appropriately rambling) talk by Will Self, expanding on his recent thesis that a) the technology of the codex is on the way out, and thusly b) so is capital-L literature. I'm not sure I buy it completely, but his argument goes to lots of interesting places, and I recognise a lot in his description of the academy as a sort of care-home for obsolescing art-mediums such as the modernist novel.

(The audience, on the other hand, replete with writers and teachers of writing -- two categories that overlap a great deal, as Self points out -- rejects his description with such venom that it's hard not to characterise their response as classic denial. That said, these are anxious times in the academy, and particularly at the arts and humanities end of it, and being lectured about the demise of your field of expertise by a man still managing to make a living producing that which you study must be a bit galling; in essence, Self does here to literary scholars what Bruce Sterling repeatedly does to technologists and futures types. The difference appears to be that literary scholars know a Cassandra when they hear one.)

Also of interest is Self's characterisation of the difference between literary fiction and genre fiction, perhaps because it is both vaguely canonical and seemingly unexamined: that old tautologous chestnut about literary fiction not being a genre because it doesn't obsess over reader fulfilment and boundary-work. That may be true of literary writers (though Barthes is giving me some side-eye for saying so), but it ignores the way the publishing industry deals with the category, which is almost entirely generic... and that's a curious oversight for someone who predicates their argument about literature's decline on explicitly technological dynamics. Nonetheless, well worth a watch/listen.

 

Narrative strategies in prose and cinema

4 min read

Some interesting and practical material in this interview with Alex Garland regarding the different narrative affordances of prose and cinema:

DBK: I can imagine a more robust form of that argument just being: A book can deal with ideas, a novel can deal with ideas, in a much more robust way than a film can, so express the ideas in a book.

AG: In its best medium.

DBK: In its best medium, right.

AG: And then I’d say, “Well, it probably depends on the idea. And it depends on the way you want to explore the idea.” If you want to explore it in a forensic way, then what you said is probably true, because just in terms of information, you can get much more information into a novel. Rather, you can get explicit information into a novel that allows you, in a concrete way, to see exactly what the sentence is at least attempting to say, within reason. In film, the ideas are more often alluded to. In the film I just worked on, which is an ideas movie, I would say some of the ideas are very explicitly put out there and literally discussed, and others of them are there by illustration or by inference, just maybe simply in the presentation of a thing. Of a robot that looks like a woman, but isn’t a woman, but maybe it is a woman. There’s an idea contained within that. There is, in fact, a brief discussion about it. But, broadly speaking, in a novel, you would be able to have much more full and forensic-type explanations or discussions.

Film relies much more on inference, but that’s its strength, too. I’ve often thought, as someone who has worked in books and film, about what you can do in a film by doing a close-up, or even a mid-shot, of a glance where somebody notices something, and how easy it is to pack massive amounts of information into that glance in terms of what the character has just seen, or what they haven’t seen. And in a book, how you can never quite throw the moment away, and yet contain as much within it as you can with film. The thing I like most about film is probably that thing. It has this terrific way of being able to load moments that it’s also throwing away, and that’s harder in a novel.

DBK: To be contrarian about that, for a second though . . .

AG: Cool. [Laughter]

DBK: In a book you can actually get inside someone’s head and just tell the reader what they’re thinking or inhabit their consciousness.

AG: Absolutely.

DBK: In a film, everything that the character is thinking has to be conveyed through their facial expression or body language.

AG: Or a bit of voiceover, yeah.

[Note how rare a technique the voiceover is in modern cinema. Note also, by comparing the original cinematic release of Blade Runner with the director's cut, the extent to which the addition or removal of a first-person voice-over completely changes the affect of a film.]

DBK: One thing that strikes me a lot about movies is that the character is deceiving other characters in the scene, but they have to be doing it in a way that’s obvious enough that the audience sees through them, whereas, why don’t the characters in the scene see through them?

AG: Well, it’s funny you should say that, because actually in Ex Machina the characters are often simultaneously deceiving the audience and the other characters. One of the conversations with the actors, prior to shooting, was about making sure that we didn’t telegraph in the way that film often does, in exactly the way you said, that you abandon that relationship. Now, that’s problematic in some ways, because it makes character motivation more ambiguous, but in other ways, that’s also a strength. That may be something I’m pulling from novels, I don’t know, but I didn’t think I was. I thought it was a more explicit version of show-don’t-tell. It was taking show-don’t-tell to a sort of extremist degree, or something like that. But interestingly, there are many, many times in Ex Machina where a lot of effort is made to not have a complicit understanding, or an implicit understanding, between the audience and a character.

 

Innovation

Frank Cottrell Boyce:

Innovation doesn’t come from the profit motive.

Innovation comes from those who are happy to embark on a course of action without quite knowing where it will lead, without doing a feasibility study, without fear of failure or too much hope of reward. The engine of innovation is reckless generosity...

This. A thousand times, this.