The Future (of Reading) is Digital

In my experience, studying digital humanities has been a consistent eye-opener. A few weeks ago, for instance, Tim O’Keefe casually mentioned in his lecture that the much-feted Kindle was on its way out, digital reading practices having been largely subsumed into the iPad’s monolithic reach. Now maybe I’m very old-fashioned, but this was a genuine shock (and a somewhat glum one at that – surprising, given that I have never owned nor wanted to own a Kindle). For the first time it made me feel uncomfortably old; wasn’t it only yesterday that the poor Kindle was being triumphantly hailed as the future of reading, the death of print, and all such associated apocalyptic scenarios?

I am, naturally, aware of the pace of technological change, but the Kindle had seemed like something new and properly game-changing. That I would associate those qualities with longevity perhaps shows me up for the dinosaur I am – and it is laughable that I feel pangs of nostalgia for something that once so deeply unsettled my English literature-bred morals. But I’m being premature. Of course the e-reader is a long way from being dead and buried, particularly when the battle against its decline is being fought by a corporation as powerful as Amazon. And in many ways it is the sea-change in reading practices it brought about that is the true issue (and legacy) at stake here. Whither the future of digital reading?

According to the Electronic Text Centre at the University of Virginia, an ebook is “any full-text electronic resource designed to be read on a screen, in something other than a web browser”. In today’s digital marketplace this has expanded beyond the tablet; as our phones grow in size and screen clarity, they encroach ever further on the domain of reading platforms. But for the most part the ebook is still associated with a standard tablet-sized device, its interface designed either to mimic the physical page or to make on-screen reading as easy on the eye as possible.

So where’s the problem? Surely, for those of us wanting to spare the planet’s trees, the increasing transition to digital reading platforms can only be heralded as progress? But as one astute commentator, Jonah Lehrer, has pointed out, there remains “a nagging problem with the merger of screens and sentences”. In other words, the medium influences how we read. The physical presence of text, and how it is presented, should not simply be taken for granted – much as the early Russian Formalist and Structuralist critics urged that language should not be regarded as a transparent window onto reality. Rather, we should recognise that what we use to gain access to ‘knowledge’ has a fundamental effect on our reading practices and, to some extent, vice versa.

There are parallels here also with the Modernist conception of ‘hard reading’. For authors such as Eliot and Joyce, this involved a deliberate slowing down of perception through the use of obscure language, techniques and allusions. It turned the spotlight on the normally unconscious practice of literary interpretation itself, forcing the reader to come to terms with, among other things, the visual representation of text on a page. In the 21st century, Lehrer claims, we risk losing all sense of difficulty and materiality in the written word, as screen-based text becomes ever smoother and more adaptable to the human eye. He gives the example of his own writing, which he finds he can only edit properly once it exists in physical form. “Why”, he asks, “do I only notice my mistakes after they’re printed on dead trees?”

The implied answer here has some important implications for the modern Humanities department. Our reliance on printed monographs has spanned hundreds of years and, if librarians have anything to do with it, should last a while yet. But an increasing proportion of the printed word is now being consumed in digital format. Journals and periodicals have all but disappeared off the shelves and into the realm of online subscription packages. Academic ebooks have been gaining traction, albeit in smaller numbers than publishers would like to see (I wonder how much this has to do with academics’ presumably sharp resistance to the technology they blame for their dwindling print runs?). But one of the most significant developments in this field is undoubtedly the rise and rise of Google – in both its Google Scholar and Google Books formats. Increasingly, the first port of call for the average undergraduate in urgent need of citations is that ubiquitous search bar. What Google has arguably perfected in the process is ease of use and access. Most recent publications are now fully word-searchable at the tap of a screen, and with databases such as JSTOR now appearing in Google searches, this trend is only set to continue.

So should we lament at the altar of our digital overlords and beg for a return to hard reading, physical search processes, or even just the assured safety of the Kindle? Of course not – although, as some hard-core nihilists (usually found festering at the back of History departments) would be quick to tell you, it wouldn’t be worth your breath anyway. Screen reading is here to stay, and we should deal with it. But nor is the humble paper-and-glue book dead: as one recent Guardian article put it, “the ghost of Gutenberg is still alive out there, kicking and finding a way.”

Hallelujah to that.


The First Step: A Response

 

Addiction is dangerous. It’s a truth we may hold to be self-evident by now, but for all the advances in the science of human behaviour we just haven’t managed to shake it off. Drugs, alcohol, gaming, gambling – all of these are well-known sources of worrying dependence and an almost irresistible desire to return to primal comforts. Yet there persists in society the undercurrent of thought that we somehow have control over our addictive tendencies. The autonomy of the mind, this idea goes, ultimately lies with us, the human subject, and any capable adult really ought to wield the ultimate power over their decisions.

In his blog, digitalhum (https://digitalhum.wordpress.com/2015/11/04/the-first-step/), Stephen Brown has articulated some of the manifest problems with this way of thinking. Today we face possibly the most persuasive and invasive form of addiction yet in the technology we surround ourselves with. Not only has it infiltrated every corner and age of life – honourable mention here goes to the babies and grannies with iPads who briefly took over YouTube – but we have been inculcated with the belief that it is only a good thing.

Such is the inherent power of corporate capitalism today that each unveiling of the newest piece of technological hardware is greeted with more breathless fervour and worship than the malaria vaccine. In the western world at least, the newest gods are the app developers, web entrepreneurs and Google minions who continue to shape the way we consume and think about technology. The digital diary that we had to complete for class was just the surface manifestation of how deeply these men (and they are mainly men) have burrowed their products into our lives, to the extent that many of us otherwise capable young humans would profoundly struggle without them.

It is a sleight of hand of the highest order. ‘Of course we aren’t the architects of modern addiction’, they claim – ‘we simply create devices and code that you can choose what to do with, of your own free will. We simply lay these expensive shiny things before you and give you the choice of whether or not to empty your purse’. This is the value-neutrality paradigm that Stephen discusses. In the great tradition of laissez-faire economics, we are led to believe that the market exerts no force over our free will, and that all decisions, and their implications, rest squarely in the hands of the consumer/user.

And at first glance, this is indeed true – or at least not wrong. First, smartphones and Netflix have no biologically addictive properties: our bodies and nervous systems simply cannot become hooked on them in the way they can on alcohol or nicotine. Second, technology is an inherently anti-social development, something that research has generally tended to posit as a demotivating factor in addiction. (Although we may associate social exclusion with late-stage addiction, it tends to be an outcome rather than an initial motivator.) But as all kinds of recent studies have shown, the psychological dependency on social media and web connectivity is at least as dangerous and insidious as any physical addiction.

This also raises the question of responsibility, something that Stephen remarks upon, and an area that the academic Humanities has gradually come to see as critically important. Technology, we can safely say, has now reached such a point of ubiquity that the focus has shifted, not to whether or not we should be engaging with it, but to how best to deal with its potential and its dangers. This attitude can be seen in two recent articles from the social sciences. In last November’s issue of the journal Computers in Human Behavior, Kathryn Greene and Kate Magsamen-Conrad published a study on ‘Technology addiction’s contribution to mental wellbeing: The positive effect of online social capital’. They investigated how consistent (some would say addictive) use of communications media could have positive benefits for people with low self-esteem or those who generally shy away from social interaction. The conclusion being: hey guys, maybe there is a silver lining to this ever-growing, all-knowing and all-powerful cloud.

In another instance, Business Source Complete recently carried a guide to ‘Conquering Digital Distraction’, which enlists the services of two experts in the area to advise on how best to manage the ‘overload’. The article takes as a given the fact that all of us have to cope in a world that is always switched on, where our streams of information and communication are not only increasing but multiplying in all directions. It is little wonder, then, that addiction becomes a natural by-product of the modern psyche. We are all of us complicit in technology’s designs. But in accepting this fact, and in moving not to limit technology but to incorporate it better into our lives, are we simply playing into the hands of the multinationals who ultimately control the strings? Digital Humanities might say no (its institutional position depends upon it), but I think the scepticism that has sustained academic enquiry for centuries is most urgently required here. Capitulation to the system shouldn’t be our only option, or else institutionalised addiction will become the depressing norm. And as Orwell might have said, we certainly don’t want to live in a world like that.

 

Musings on DH Theory

Digital Humanities is an area (or discipline, or ethos, depending on who you speak to) fraught with anxieties of definition. What is digital humanities? It’s the question that, as a PhD student in the defiantly non-digital humanities, initially drew me to the field. But it’s also one that lingers over most theoretical writing in the area, and one that I’m forced to answer on an almost daily basis by my peers. A considerable body of work has developed around its implications, and it is this literature, as opposed to the question itself, to which I will dedicate the discussion here. A consensus is far from being reached, but the vigour of the ongoing debate points at least to a dynamic and sustained interest in teasing out the question and its many and varied answers.

One could view this intensity of self-interrogation as an integral product of digital humanities itself. The siege mentality implicit in some of its foundational writings (like those of Pannapacker and Svensson) springs from an awareness that this is an area at the vanguard of some of the most rapid and consistent changes in the university’s institutional history. Little wonder, then, that its proponents face suspicion and often downright hostility from those who would question the validity of digital scholarship in the humanities. When addressing what is at stake in its theory and practice, the use of language is therefore revealing. Susan Schreibman is one of the area’s foremost proponents, and has helped lay down some of the foundational texts of the discipline (let’s call it that). What can we read between the lines of her unquestionably thorough and rigorous scholarship? In her introduction to the 2011 issue of Profession, on ‘Evaluating Digital Scholarship’, there is a definite note of positive reinforcement. Consider the line “…to contribute to this continuing dialogue about recognizing and appropriately rewarding new types of scholarly investigation and communication made possible by digital media”. ‘Recognizing’, ‘rewarding’, ‘made possible’ – all of these signal the optimism that propels her research and output. Furthermore, there is a tone of confidence and encouragement that posits this new area of research as being of fundamental and inherent value. It is perhaps exactly what we would expect from an article discussing the digital humanities as a way forward, and beyond, for an academy in a state of crisis.

But this assured tone of implicit confidence in the continued role and worth of DH is not one echoed across all of its writings. As mentioned before, the siege mentality discernible throughout the scholarship frequently gives rise to odd subtexts. In the case of a commentator such as Dan Cohen, there is an air of the PR man about many of his pronouncements. This can sometimes spill over into an almost militant stance on asserting the area’s value – ‘the lady doth protest too much, methinks’, indeed. Is this indicative of a deeper anxiety about the stability of the field within the humanities (famously resistant to any mention of change)? Without getting too Freudian in our readings of the readings, it might well be the case. Even the title of Alan Liu’s definitive 2012 text points to a rather ambivalent feeling about the direction of DH: ‘The state of the digital humanities: A report and a critique’.

So what implications does this strong undercurrent of self-probing and uneasiness have for the actual practice of digital humanities in the 21st century? One of the main responses by experts in the field has been to lay down charters, guidelines and textbook companions in an attempt to render firmer definitions and objectives acceptable to the broader academic community. And it has certainly worked, though perhaps only to a limited extent. Very few researchers in 2015 (particularly those concerned with archives) would deny the usefulness and opportunity afforded by the expanding digital domain and all its manifold potentialities. Indeed, in my case, the entire subject of my doctoral project (investigating the contribution of Irish writers to nineteenth-century British periodicals) would be almost impossible without online indexes such as the Wellesley or the various ProQuest collections. But the limitations of such totalising definitions and goals are also easy to see. The first Companion to Digital Humanities, published in 2004, now looks so laughably antiquated as to be entirely useless to any modern-day student of the field. Of what other academic area could you say the same after a span of just eleven years?

And thus the crux of the issue asserts itself once again. The rate at which technology in all its forms is accelerating and changing defies the expectations of all but a few multinational corporations. We simply cannot know what ramifications the latest development will have for questions such as copyright, data storage or accessibility, to name but a few. Even hardware developments, such as the introduction of tablet computing some years ago, have subtly but undeniably altered the paradigms of the research process. And this trend of unpredictability is only set to continue. So to what end can we still talk about the idea of a theory and definition (signifying stability and universality) behind Digital Humanities? It is a question that will keep those of us invested in the area awake at night for some time to come. Watch this space.