 

Narratology of fine art

1 min read

Later [portraits] were painted for posterity, offering the evidence of the once living to future generations. Whilst still being painted, they were imagined in the past tense, and the painter, painting, addressed his sitter in the third person -- either singular or plural. He, She, They, as I observed them. This is why so many of them look old, even when they are not.

-- John Berger, from Portraits, 2015

(Compare with advertising, which is a present- or future-tense narrative form; "this is why so many of them look new, even when they are not".)

 

In which I find Amitav Ghosh's missing monocle, and return it to him that he might see more clearly

5 min read

Poor old Amitav Ghosh is wondering where all the fiction about climate change might be... when in fact it's right under his nose, and he simply chooses to disregard it as being insufficiently deserving of the label "literature".

Right in the first paragraph, he answers his question and immediately discards the answer:

... it could even be said that fiction that deals with climate change is almost by definition not of the kind that is taken seriously: the mere mention of the subject is often enough to relegate a novel or a short story to the genre of science fiction. It is as though in the literary imagination climate change were somehow akin to extraterrestrials or interplanetary travel.

If for "literary imagination" we substitute "bourgeois imagination", that last sentence is no surprise at all -- because this is about genre, which is a proxy for class.

And when Ghosh surveys the few examples of supposedly literary fiction that have dealt with climate change, look what happens:

When I try to think of writers whose imaginative work has communicated a more specific sense of the accelerating changes in our environment, I find myself at a loss; of literary novelists writing in English only a handful of names come to mind: Margaret Atwood, Kurt Vonnegut Jr, Barbara Kingsolver, Doris Lessing, Cormac McCarthy, Ian McEwan and T Coraghessan Boyle.

Now, I'll concede that most of them have preferred generic labels other than science fiction for their works at one time or another, but it's very hard to make the case that Atwood, Vonnegut and Lessing haven't written works that slip very easily into the sf folksonomy, while McCarthy has written a very successful dystopia. So that's half of Ghosh's successes demonstrably working in the speculative fiction tradition... but they can't be speculative fiction, because they're too good for that trash. They've won awards and stuff -- awards that aren't rocket-shaped. Ipso facto, no?

To his credit, Ghosh gets pretty close to the technical distinction in narrative strategy that demarcates the dichotomy he's observing, via one of Moretti's more interesting theory-nuggets:

This is achieved through the insertion of what Franco Moretti, the literary theorist, calls “fillers”. According to Moretti, “fillers function very much like the good manners so important in Austen: they are both mechanisms designed to keep the ‘narrativity’ of life under control – to give a regularity, a ‘style’ to existence”. It is through this mechanism that worlds are conjured up, through everyday details, which function “as the opposite of narrative”.

It is thus that the novel takes its modern form, through “the relocation of the unheard-of toward the background ... while the everyday moves into the foreground”. As Moretti puts it, “fillers are an attempt at rationalising the novelistic universe: turning it into a world of few surprises, fewer adventures, and no miracles at all”.

I offer that the absence of Moretti's fillers -- often but not always replaced with anti-fillers designed to re-enchant the novelistic universe, and make of the universe a character in its own right -- is a way to describe one of the more fundamental strategies of speculative fictions, where it is preferable to have a world with more surprises, more adventures, and more than the occasional deus ex machina. Moretti's fillers are basically the opposite of worldbuilding; they remove complexity, rather than adding it.

And here we see the true root of the problem, the reason no one who identifies as a writer of "serious" "literary" fiction can handle climate change in their work -- look at Ghosh's language, here, and tell me he doesn't feel the class pressure of genre (my bold):

To introduce such happenings into a novel is in fact to court eviction from the mansion in which serious fiction has long been in residence; it is to risk banishment to the humbler dwellings that surround the manor house – those generic out-houses that were once known by names such as the gothic, the romance or the melodrama, and have now come to be called fantasy, horror and science fiction.

It's clearly not that "the novel" as a form can't handle climate change: science fiction novels routinely invert the obstacles set out in Ghosh's piece in order to do their work. It's that to upset those particular obstacles is to break the rules of Literature Club, to go slumming it with the plebes of genre fiction: literary fiction can't write about climate change, or about any other topic that requires an understanding of the storyworld as a dynamic and complex system, because -- as a self-consciously bourgeois genre in its own right -- it cannot commit the sin of portraying a world where the bourgeois certainties no longer pertain, wherein hazard and adventure and unexpected events are revealed to be not merely routine, but to be the New Normal.

Take it from a squatter in the generic out-houses, Amitav old son: there's only one way you'll ever get literary fiction that deals with climate change -- and that's by acknowledging, however grudgingly, that not only was science fiction capable of being literature all along, but that science fiction began by asking the question whose suppression is the truest trope of the literary: what if the world were more important than the actions of individuals?

 

Freeman, 2016 -- Why Narrative Matters: Philosophy, Method, Theory

The necessity of narrative (and narrative hermeneutics) in 'understanding the human realm' is threefold:

1) Philosophical

Relates to alterity, 'the Otherness within'; cf Freud, we are mysteries to ourselves; viz Ricoeur, 'the hermeneutic dimension of the human situation is insurpassable'.

We cannot know with any certainty how an event or constellation of events works itself out in a life; all we can do is interpret. [...] as we engage in the arduous process of self-understanding, our only recourse is to turn to "signs scattered in the world" -- our hope being that, somehow, they might find a suitable home in story.

2) Methodological

Relates to fidelity; there is 'no more fitting and appropriate vehicle for exploring the otherness of both others and oneself'.

Example: why did author become a scholar of narrative, rather than some other sort of scholar, or indeed something other than a scholar?

--> Deep question: 'How do we become who we are? [...] How deep do the reasons go?'

The narrative unconscious: '... those aspects of our lives bound up with history and culture, the tradition into which we are thrust and which, in its own obscure ways, infiltrates and constitutes being.'

So, personal factors and life-events, certainly, but also 'supra-personal' factors (e.g. 'intellectual climate', traditions).

Point being: there are many reasons why we become what we are, and those reasons, proximal and distal, extended in time, 'can only come together in and through the process of interpretation'.

However, hermeneutic circle -- with its 'mutually constructive relationship' between episode and plot -- means that it's very problematic to talk about objectivity. Hence fidelity:

The "faithfulness" it connotes is not just a matter of interpretive adequacy, but also one of interpretive _care_, of a sort that preserves the otherness of the past as well as the Otherness of those -- including oneself -- whose past it is.

Hermeneutics [...] is a form of constructionism that maintains an effort to speak the _truth_ -- one, indeed, that insists that truth can only emerge in and through the interpretive constructions one fashions.

So, finitude and certainty are not possible... but interpretation and hindsight might combine to produce insight, which is neither a finding nor a making, but a 'finding-through-making'.

Therefore fidelity is 'tied to that kind of respectful beholding that lets the text of the past appear as other -- even if this "other" is none other than oneself.'

3) Theoretical

Relates to 'ex-centricity' -- 'locating those sources of "inspiration" outside the self that condition the stories we tell about ourselves'.

Three dimensions of narrative hermeneutics:

a) Relational dimension: 'our stories are intimately bound up with those of others'.

b) Existential dimension: 'others -- especially, but not exclusively, human others -- provide the "motive fuel" [...] for the stories we tell about ourselves.'

c) Ethical dimension: 'stories we tell [...] are always, to a greater or lesser extent, fuelled by the people and "projects" to whom and which we are most responsible'.

Therefore the combination of narrative hermeneutics with the project of self-understanding 'serves to show that there is _no_ self, no story of the self, apart from the myriad relationships within which they take form'.

'Thinking Otherwise' (--> reframing narrative hermeneutics)

The standard riff is that narrative hermeneutics is a process of meaning-making; meaning-making is clearly necessary, but perhaps not sufficient.

... suggesting that the subject is not only a meaning-maker [...] but is also him- or her-self "made" -- _given_, as Marion (2002) puts it -- constituted by the myriad phenomena, both human and nonhuman, encountered in experience.

If the proximal source of one's narrative is the self, then the distal source is the Other.

... narrative hermeneutics might itself become more Other-directed and "ex-centric", more attuned to the ways in which meanings [...] become inscribed in the movement of subjectivity. [In doing so, the subject] remains the site within which the world is refigured and reimagined. And narrative remains its primary language.

###

Lots of interesting ideas in here. Most pertinent to current interests: a more 'ex-centric' hermeneutics of narrative offers opportunities to look at the role played by non-human others (e.g. institutions, organisations, systems?) in the construction of the self; can such a role in narrative self-construction be identified for new technologies and infrastructures? Where would one look for such material? How would that influence manifest?

 

On the seductive obduracy of infrastructure fictions

7 min read

If there's one good thing to come out of the current race-for-the-gutter in Western political discourse, it's that we're starting to talk about rhetoric and narrative with a sense of urgency. Better late than never, eh?

Here's a bit from a Graun piece on Trump, Brexit et al:

The fourth force at work is related to our understanding of how persuasive language works. Over the course of the 20th century, empirical advances were made in the way words are used to sell goods and services. They were then systematically applied to political messaging, and the impressionistic rhetoric of promotion increasingly came to replace the rhetoric of traditional step-by-step political argument. The effect has been to give political language some of the brevity, intensity and urgency we associate with the best marketing, but to strip it of explanatory and argumentative power.

"The impressionistic rhetoric of promotion"; make a note of that phrase. Note also that advertising and marketing -- those colourful Mad Men! -- were industries that emerged very directly from the propaganda machineries of the second world war, on both sides. (It wasn't just Nazi rocket scientists who found new gigs on the other side of the Atlantic.)

The political aspect is ugly enough, but there's an extent to which that particular nastiness is at least a known quantity, even if it's only responded to with a sort of nihilistic mistrust rather than vigorous critique: to say that politicians purvey bullshit is such a truism that even the cynical tend to act as if embarrassed that you saw fit to raise the point at all. Of course politics is performed like marketing now; what did you expect?

However, the corollary of that observation -- that marketing is performed like politics -- is a somewhat harder sell (if you'll excuse the deliberate pun). But it's no less true for that: as I've argued elsewhere, political narratives and the narratives of advertising both fall under the metacategory of narratives of futurity:

... “futures” are speculative depictions of possibilities yet to be realised, as are “designs” [...] in this, they belong to a broader category of works that includes product prototypes, political manifestos, investment portfolio growth forecasts, nation-state (or corporate) budget plans, technology brand ad spots, science fiction stories, science fiction movies, computerised predictive system-models, New Year’s resolutions, and many other narrative forms. While they may differ wildly as regards their medium, their reach, and their telos, all of these forms involve speculative and subjective depictions of possibilities yet to be realised; as such, labelling this metacategory as “narratives of futurity” avoids further diluting the (already vague) label “futures”, while simultaneously positioning “futures” among a spectrum of other narrative forms which use similar techniques and strategies to a variety of ends.

To avoid further self-citation, that paper goes on to outline some basic components of the rhetorics of futurity: the techniques through which narratives of futurity are shaped in order to achieve certain effects. These can be observed in political narratives and in advertising... but they can be (and should be!) observed in the popular technoscientific discourse, whether in the form of formal "futures scenarios", or the less formal pronouncements of Silicon Valley's heroic CEO class.

So it's a great relief to me that people are starting to do so. Here's a bit on the fintech industry's revival of the "cashless society" dream, for example:

This is the utopia presented by the growing digital payments industry, which wishes to turn the perpetual mirage of cashless society into a self-fulfilling prophecy. Indeed, a key trick to promoting your interests is to speak of them as obvious inevitabilities that are already under way. It makes others feel silly for not recognising the apparently obvious change.

To create a trend you should also present it as something that other people demand. A sentence like "All over the world, people are switching to digital payments" is not there to describe what other people want. It's there to tell you what you should want by making you feel out of sync with them.

To make a "future" happen, in other words, one should aim to convince one's audience that a) it already is happening, and that b) they're missing out.

(Those who share my misfortune in having read a number of novels by arch-libertarian fantasist Terry Goodkind may recognise this as a variation on the 'Wizard's First Rule' -- a topic which I keep meaning to rant about at greater length.)

But how to give the as-yet-unrealised a sheen of plausibility? Here's another (different) piece at the Graun on technological mythmaking:

... most technological myths mislead us via something so obvious as to be almost unexamined: the presence of human forms at their heart, locked in combat or embrace. The exquisite statue, the bronze warrior, the indestructible cyborg – the drama and pathos of each plays out on a resolutely individual scale. This is how myths work. They make us care by telling us a story about exemplary particularities.

It’s a framing epitomized not only by poems and movies, but also by the narratives of perkily soundtracked adverts. You sit down and switch your laptop on; you slip into your oh-so-smart car; you reach for your phone. “What do you want to do today?” asks the waiting software. “What do you want to know, or buy, or consume?” The second person singular is everywhere. You are empowered, you are enhanced, your mind and body extended in scope and power. Technology is judged by how fast it allows you to dash in pursuit of desire.

(Don't even get me started on the total absence of desire from the popular models of "innovation" or "technological transitions", or whatever we're calling it this week.)

A successful narrative of futurity can be astonishingly obdurate. When I gave my "Infrastructure Fiction" talk to Improving Reality 2013, I was lucky enough to have been gifted a perfect example by no less generous a man than Elon Musk, in the form of his 'transportation alpha concept', Hyperloop. Three years on, and despite countless engineers and architects and planners pointing out the insoluble flaws in the idea, the Hyperloop zombie shambles on... and the damned thing is even raking in investment from people who, if they don't know better themselves, should surely at least be employing some people who do know better.

But why is that a problem? Am I not just pooh-poohing a brilliant visionary who's trying to make a difference to the way we run the world, and those trying to make his dreams a reality?

We just can’t sustain economic growth without improving our infrastructure. Any government that takes the Hyperloop hype that “this is happening now” at face value risks wasting precious resources on an idea that may never become reality – all the while, not spending those resources on technologies, like high-speed rail, that exist and deliver real benefits.

Leaving aside the shibboleth of economic growth for another time, that's the problem right there: narratives of futurity occlude the reality of the lived present. Marketing and adverts seduce; futurity is the plane onto which desire is projected. Meanwhile, the success and acclaim of narrators like Musk add cachet and appeal to their stories; after all, the guy founded Amazon, right? Well, you wouldn't want to miss out on his next great success, now would you?

I think it telling that neither of the groups trying to develop Hyperloop is funded by Musk, who presumably has the sense to get someone to run a CBA before he starts spending money: he critiqued his own story, in other words, and revealed it to be wanting.

But don't for a moment imagine that he and others like him aren't aware of the seductive power of narratives of futurity. They are, in truth, the only thing that Silicon Valley has ever sold.

 

Your humble servant: UI design, narrative point-of-view and the corporate voice

5 min read

I've been chuntering on about the application of narrative theory to design for long enough that I'm kind of embarrassed not to have thought of looking for it in something as everyday as the menu labels in UIs... but better late than never, eh?

This guy is interested in how the labels frame the user's experience:

By using “my” in an interface, it implies that the product is an extension of the user. It’s as if the product is labeling things on behalf of the user. “My” feels personal. It feels like you can customize and control it.

By that logic, “my” might be more appropriate when you want to emphasize privacy, personalization, or ownership.

[...]

By using “your” in an interface, it implies that the product is talking with you. It’s almost as if the product is your personal assistant, helping you get something done. “Here’s your music. Here are your orders.”

By that logic, “your” might be more appropriate when you want your product to sound conversational—like it’s walking you through some task. 

As well as personifying the device or app, the second-person POV (where the labels say "your") normalises the presence within the relationship of a narrator who is not the user: it's not just you and your files any more, but you and your files and the implied agency of the personified app. Much has been written already about the way in which the more advanced versions of these personae (Siri, Alexa and friends) have defaults that problematically frame that agency as female, but there's a broader implication as well, in that this personification encourages the conceptualisation of the app not as a tool (which you use to achieve a thing), but as a servant (which you command to achieve a thing on your behalf).

This fits well with the emergent program among tech companies to instrumentalise Clarke's Third Law as a marketing strategy: even a well-made tool lacks the gosh-wow magic of a silicon servant at one's verbal beck and call. And that's a subtly aspirational reframing, a gesture -- largely illusory, but still very powerful -- toward the same distinction to be found between having a well-appointed kitchen and having a chef on retainer, or between having one's own library and having one's own librarian.
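The POV choice under discussion can be sketched as a tiny label map -- a purely hypothetical illustration, with invented names and strings rather than any real product's copy -- where the narrative stance is an explicit design parameter rather than an accident of copywriting:

```typescript
// Which narrator the interface implies: "first" casts the product as an
// extension of the user ("My Files"); "second" casts it as an assistant
// addressing the user ("Your Files"). All names here are illustrative.
type Pov = "first" | "second";

const menuLabels: Record<Pov, { files: string; orders: string }> = {
  first: { files: "My Files", orders: "My Orders" },
  second: { files: "Your Files", orders: "Your Orders" },
};

// Look up a label for the chosen point of view.
function label(pov: Pov, key: "files" | "orders"): string {
  return menuLabels[pov][key];
}
```

The point of making the POV a single switch, rather than scattering "my"/"your" through hard-coded strings, is that it surfaces the framing decision: the narrator is chosen once, deliberately, and applied consistently.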

By using “we,” “our,” or “us,” they’re actually adding a third participant into the mix — the people behind the product. It suggests that there are real human beings doing the work, not just some mindless machine.

[...]

On the other hand, if your product is an automated tool like Google’s search engine, “we” can feel misleading because there aren’t human beings processing your search. In fact, Google’s UI writing guidelines recommend not saying “we” for most things in their interface.

This is where things start getting a bit weird, because outside of hardcore postmodernist work, you don't often get this sort of corporate third-person narrator cropping up in literature. But we're in a weird period regarding corporate identities in general: in some legal and political senses, corporations really are people -- or at least they are acquiring suites of permissible agency that enable them to act and speak on the same level as people. But the corporate voice is inherently problematic: in its implication of unity (or at least consensus), and in its obfuscation of responsibility. The corporate voice isn't quite the passive voice -- y'know, our old friend "mistakes were made" -- but it gets close enough to do useful work of a similar nature.

By way of example, consider the ways in which some religious organisations narrate their culpability (or lack thereof) in abuse scandals: the refusal to name names or deal in specifics, the diffusion of responsibility, the insistence on the organisation's right to manage its internal affairs privately. The corporate voice is not necessarily duplicitous, but through its conflation of an unknown number of voices into a single authoritative narrator, it retains great scope for rhetorical trickery. That said, repeated and high-profile misuses appear to be encouraging a sort of cultural immunity response -- which, I'd argue, is one reason for the ongoing decline of trust in party political organisations, for whom the corporate voice has always been a crucial rhetorical device: who is this "we", exactly? And would that be the same "we" that lied the last time round? The corporate voice relies on a sense of continuity for its authority, but continuity in a networked world means an ever-growing snail-trail of screw-ups and deceits that are harder to hide away or gloss over; the corporate voice may be powerful, but it comes with risks.

As such, I find it noteworthy that Google's style guide seems to want to make a strict delineation between Google-the-org and Google-the-products. To use an industry-appropriate metaphor, that's a narrative firewall designed to prevent bad opinion of the products being reflected directly onto the org, a deniability mechanism: to criticise the algorithm is not to criticise the company.

#

In the golden era of British railways, the rail companies -- old masters of the corporate voice -- insisted on distinctive pseudo-military uniforms for their employees, who were never referred to as employees, but as servants. This distinction served largely to deflect responsibility for accidents away from the organisation and onto the individual or individuals directly involved: one could no more blame the board of directors for an accident caused by one of their shunters, so the argument went, than one could blame the lord of the manor for a murder committed by his groundskeeper.

 

Narrative strategies in prose and cinema

4 min read

Some interesting and practical material in this interview with Alex Garland regarding the different narrative affordances of prose and cinema:

DBK: I can imagine a more robust form of that argument just being: A book can deal with ideas, a novel can deal with ideas, in a much more robust way than a film can, so express the ideas in a book.

AG: In its best medium.

DBK: In its best medium, right.

AG: And then I’d say, “Well, it probably depends on the idea. And it depends on the way you want to explore the idea.” If you want to explore it in a forensic way, then what you said is probably true, because just in terms of information, you can get much more information into a novel. Rather, you can get explicit information into a novel that allows you, in a concrete way, to see exactly what the sentence is at least attempting to say, within reason. In film, the ideas are more often alluded to. In the film I just worked on, which is an ideas movie, I would say some of the ideas are very explicitly put out there and literally discussed, and others of them are there by illustration or by inference, just maybe simply in the presentation of a thing. Of a robot that looks like a woman, but isn’t a woman, but maybe it is a woman. There’s an idea contained within that. There is, in fact, a brief discussion about it. But, broadly speaking, in a novel, you would be able to have much more full and forensic-type explanations or discussions.

Film relies much more on inference, but that’s its strength, too. I’ve often thought, as someone who has worked in books and film, about what you can do in a film by doing a close-up, or even a mid-shot, of a glance where somebody notices something, and how easy it is to pack massive amounts of information into that glance in terms of what the character has just seen, or what they haven’t seen. And in a book, how you can never quite throw the moment away, and yet contain as much within it as you can with film. The thing I like most about film is probably that thing. It has this terrific way of being able to load moments that it’s also throwing away, and that’s harder in a novel.

DBK: To be contrarian about that, for a second though . . .

AG: Cool. [Laughter]

DBK: In a book you can actually get inside someone’s head and just tell the reader what they’re thinking or inhabit their consciousness.

AG: Absolutely.

DBK: In a film, everything that the character is thinking has to be conveyed through their facial expression or body language.

AG: Or a bit of voiceover, yeah.

[Note how rare a technique the voiceover is in modern cinema. Note also, by comparing the original cinematic release of Blade Runner with the director's cut, the extent to which the addition or removal of a first-person voice-over completely changes the affect of a film.]

DBK: One thing that strikes me a lot about movies is that the character is deceiving other characters in the scene, but they have to be doing it in a way that’s obvious enough that the audience sees through them, whereas, why don’t the characters in the scene see through them?

AG: Well, it’s funny you should say that, because actually in Ex Machina the characters are often simultaneously deceiving the audience and the other characters. One of the conversations with the actors, prior to shooting, was about making sure that we didn’t telegraph in the way that film often does, in exactly the way you said, that you abandon that relationship. Now, that’s problematic in some ways, because it makes character motivation more ambiguous, but in other ways, that’s also a strength. That may be something I’m pulling from novels, I don’t know, but I didn’t think I was. I thought it was a more explicit version of show-don’t-tell. It was taking show-don’t-tell to a sort of extremist degree, or something like that. But interestingly, there are many, many times in Ex Machina where a lot of effort is made to not have a complicit understanding, or an implicit understanding, between the audience and a character.

 

The uses of story: narrative strategies for speculative critical designers

1 min read

On 5 July 2016, I spent the day at the London College of Communication as a guest lecturer for a summer school on speculative and critical design. Courtesy of course leaders Tobias Revell and Ben Stopher, here's a video of my lecture.

 

Synthetic space(s)

3 min read

While I will probably always be gutted that someone else has beaten me to writing a history of EVE, I can at least take comfort in the fact that the person who's done it appears to get it -- the game itself is of little interest, it's the utopian economic space-for-action which the game provides that matters:

I met these two guys from the University of Ghent who created a computer model that shows what happens to economic prices in certain parts of EVE, depending on whether or not there are battles going on nearby.

In these areas where a lot of ships are being destroyed, you would expect to see the price of materials skyrocket, because everyone’s trying to build new ships and new fleets. But what they found was that, in areas where a lot of ships are being destroyed, the prices go through the floor, because everyone in that region of space starts liquidating everything. There’s an invading alliance coming, and they’re trying to get their stuff out the door as fast as possible, to make sure their stuff doesn’t get taken or conquered. They said this is similar to what you see in the real world. In pre-war Germany, the price of gold dropped through the floor because everyone was trying to liquidate their belongings and get out of the country. …

EVE is the most real place that we’ve ever created on the Internet. And that is borne out in these war stories. And it’s borne out because these people who—you find this over and over again—who don’t view this as fictional. They don’t view it as a game. They view it as a very real part of their lives, and a very real part of their accomplishments as people.

[...]

Something that I found formed very early on in EVE was the understanding among certain leaders was that people will follow you, even if they don’t believe in what you believe in, simply because you’re giving them something to believe in. You’re giving them a reason to play this game. You’re giving them a narrative to unite behind, and that’s fun. It’s far more fun to crusade against the evil empire than it is to show up and shoot lasers at spaceships.

Now mulling over the possibilities of studying the role of infrastructure in virtual economies... anyone want to pitch in on a grant application?

 

Most meaningful moments

2 min read

Article at the Graun lamenting the lack of 'art punks', and of an identifiable British art movement with convenient label attached; I am clearly more plugged into the art world than I thought, because all the artists I know are subversive in ways that owe a lot to punk ideals, if not necessarily its aesthetics, and none of them get a mention. (Yes, I'm lamenting the lack of coverage of my friends in art-scene-overview journalism; this is because my friends are all objectively brilliant geniuses, and everyone should acknowledge the fact.)

Mostly linking for the sake of this passage, though:

 “Facebook has become this space where the most meaningful moments of one’s life are mixed with ‘corporate narrative’ adverts,” he says. “Personally, I don’t see the difference between them any more. They are all part of the same mush. I think it all has value. Today, with art and commerce constantly feeding off each other, it is a super exciting place to look for ideas.”

It's very hard for me to get a read on exactly how sincere and straight-faced a statement that is, given it's a snippet of an interview transcription... but even if being used as a careful artist's masque, it reveals the inescapable ubiquity of the postmodern condition, not as an abstract social-theoretical idea, but as a lived cultural experience. I don't think it is an ironic position, either, but that doesn't mean it can't still be critical; if there is such a thing as post-postmodernity or altermodernity, it is perhaps defined by its never having known that which preceded postmodernity, by its acknowledgement -- which is a different thing to acceptance -- that it's "turtles all the way down".

If postmodernity was the shattering of metanarratives, altermodernity is the making of mosaics from the broken pieces. Is it any wonder we're so cautious of being cut?

 

Cram (2015): "Becoming Jane: The making and unmaking of Hanford’s nuclear body."

"... building the nuclear body has ultimately meant first defining life [as being, in essence, a young white able-bodied American male], and then defining the conditions in which that life should be considered liveable." (p.802)

In this paper, Cram performs a critical archaeology of the nuclear body: "a statistically calculated human template" (p.798) used to assess the risk of radiogenic illness as a result of exposure to radioactive materials. Cram begins with the Atomic Bomb Survivor Study, through which the US government sought to exploit the "scarce and precious intellectual resource" (in their own words) represented by the hibakushas -- "the exposed ones", the survivors of the bombings of Hiroshima and Nagasaki; she then moves on to discuss Standard Man, later renamed Reference Man, created by the International Commission on Radiological Protection (ICRP) to be the "official body through which such information [as gathered from studying the hibakushas] could be applied and understood" (p.800).

Perhaps unsurprisingly, Reference Man was not only male but young, Caucasian and able-bodied, and assumed to partake in "Western European or North American [...] habit and custom". Realising that not all people exposed to radiation quite fit the template, but unwilling or unable to develop a standardised female model, policy-makers generally utilise a tweaked model in which "they simply give Reference Man breasts, ovaries and a uterus -- creating a hermaphroditic human in order to 'solve' the problem of radioactive gender inequality" (p.801). Racial differences are similarly magicked away through the power of statistics, producing impossible "placeless bodies" -- figures without a ground, characters without a context.

"The notion that Reference Man's hermaphroditic transformation equalizes gender inequality in risk calculation ignores the appropriative character of his statistical sex change." (p.801)

Cram then goes on to discuss the role of the nuclear body in shaping the political and technical aspects of the remediation of the Hanford Nuclear Reservation -- land ceded by a number of Native American tribes in 1855, with treaty rights retained, but which also played host to a significant chunk of the US government's nuclear weapons program, with predictable results. The treaty entitles the indigenous population to "full access" to the land -- but the nuclear body as currently constructed makes physiological and behavioural assumptions which do not match the indigenous population and the lifestyles they wish to engage in. What this means in practice is that the "end point" of the remediation process will be defined at least in part with reference to a model of exposure risk that doesn't tally with the population whose risk is being assessed. As a result, indigenous institutions have developed their own model, based more closely on the sorts of behaviour they consider to be normal for their lifeways, only to be told that their model is "physiologically impossible"... which, while arguably true, is certainly just as true of Reference Man.

Cram's point is that the nuclear body plays an active role in remediation projects such as that of Hanford "by fashioning subjects that can survive in the post-nuclear future. In identifying who can inhabit remediated space, cleanup renegotiates the relationship between safety, security, and the contamination it leaves behind." (pp.806-7) In other words, as part of a remediation process, models such as Reference Man inform not only the environmental standards to which a space will be held, but also the physiological standards and behaviours expected of those whose lifestyles might be safely accommodated by said space.

"... it is this simplicity -- this abstraction from the lived experience of exposure -- that makes the nuclear body politically useful. Nuclear standards must make radiogenic injury generalizable, translating from diverse and often incomplete sources into explicit statements of cause and effect. Indeed, building the nuclear body has required untangling exposure-related illnesses from the social and spatial relations that give them meaning. [...] policies that rely heavily upon biological parameters in determining risk, ignore and thus reproduce the greater structural inequalities of exposure-related illness." (p.802)

So, while the notion of the nuclear body as a historical ontology appears to be novel, this is the sort of paper that anyone familiar with the canonical riffs of STS will recognise: technoscientific standards and statistics erasing difference. (And those familiar with the secrets of east Prussian forestry will recognise yet another manifestation of "seeing like a state".) This is a particularly interesting case due to the paradoxical and mutual physiological impossibilities of both Reference Man and the alternatives proposed by the indigenous peoples themselves; neither side in the process recognises the other's model, even as both seek to refine their models further.

This paper has particular interest to me because there's potential for tektology here: it's not that huge a metaphorical leap to see Reference Man et al as fictional characters, as is highlighted by the brief narrative describing "Jane" in the paper's introduction. The physiology and presumed behaviour patterns of these characters generate a "story" when they're introduced into a storyworld which includes parameters for radiation exposure risk; their experience is completely determined by their constitution (both literal and figurative). This in turn isn't unlike the notion of the "thin character" from modernist literary theory: the thin character isn't quite a stereotype, but is something approaching one, and while that works well for certain narrative forms (episodic forms in particular -- such as the sit-com, where the narrative arc of each episode is of a renormalisation of the characters to their stock state and circumstance), it works really badly for forms where mimesis (which we might describe as a degree of fidelity to reality, or at least to broadly-held conceptions of reality) is a requirement. And more importantly, Reference Man is repeatedly introduced into narratives of futurity, which -- as I have argued elsewhere -- are a metagenre of narrative forms concerned with depicting futures, including not only science fictions, foresight scenarios and design fictions, but also forecasts and models. (A narrative does not have to be verbal: an annual profit forecast graph is as much a narrative of futurity as an H G Wells novel.)

So while I'm not certain what I can immediately do with the notion of the nuclear body (and the broader type of fictionalised subject-construction which it figures), it definitely feels like another tool for talking about the ways in which technoscience shapes narratives of futurity -- not just formally and structurally, but in terms of whose future it is that gets depicted.