This semester I'm using this page to blog for my Digital Humanities course (DIG 6178). Read at your own risk!

April 16th. Is it Week 14?

I already like the Electronic Literature Organization’s website better than last week’s hoax floppy disk thing-site. Fun fact: I had no idea what the purpose of that website was until class. Clearly a major design flaw. Or else I can’t read, as graduate school daily reminds me.

Re: new media, while I want to say that of course new media is a viable research method and pedagogical tool, I learned at Cs that statements like that are not yet a given. I probably wrote on here once before that digital rhetoric/new media means “computer-y stuff.” And you can see from the curricula here that digital rhetoric/new media means “let’s use a computer to type the things.”

So it’s not surprising that the Electronic Literature Organization has to justify their role by referring to 2nd and 3rd and 18th century textual remediations. Will folks who think that digital rhetoric is computer-y stuff ever think otherwise?

Not important. When was this website last updated with a project? The most recent individual works look like they were uploaded in 2016, though the news/updates are current. What’s cool about this space is that it would provide a home for my weird little parking lot project. As you can see from previous entries, I’ve been wondering how I’m supposed to publish this thing on my heavily constrained google sites interface. Publishing venues like this one are an option.

This is the second time this week that “ecology” has been used to describe an online system of knowledge. As a person also guilty of hopping on the ecologies bandwagon, I’d like to point out that the metaphor of “ecology” should be chosen to describe things in terms of sustainability, thriving, etc. For example, “over time this establishes a history of the critical reception of a given work” (p. 135). Just draw out the idea of a critical reception to mean an environment that supports electronic publication, and then we’ve compared the ELMCIP to an ecology.


Some of my actual digital rhetoric friends on twitter have taught video games as composition, a la Jones (2016). From their updates, those are fascinating classes, though I can imagine they’re a bit difficult to fit into the state’s FYC requirements. Yes, “video games are quintessentially modeling systems,” but FYC requirements generally ask for the production of texts (p. 85).

“Now Katie,” says my strawman from a few weeks ago. “Video games are texts!” But even in an upper-level course, I’m not sure that a semester is enough time to deal with “the basic questions of metadata ontologies” (p. 90). And while some students would get a kick out of the text-based command games like the ones I made in high school, I’m not sure that they would meet the SLOs any better than they do now, which makes the digital aspect somewhat gimmicky.

As a means for research, I’m still holding off slightly on digital projects. Because although it’s really fun to use the internet to figure out how people write and think and act, some of the scholars in my field are saying that it’s wrong to use metadata without permission from those people. So while I might really enjoy designing a video game that’s also a data collector, I’ll wait to see what the official position on that becomes.

Notey note: I understand that your article wasn’t exactly about using video games to teach composition, or about using video games to learn how people think and act. But those things are what I’m interested in.

Week 13. April 9th. Apparently I Needed to Learn In-Line CSS.

Last week was an interesting look into digital humanities from a literature perspective. While everyone’s using digital tools to remediate texts, there was a divergence in the types of texts being remediated. Lit folks tended to find new ways to present already-published or print-only texts, while I (and possibly rhet folks) seemed to remediate my own research.

What (any unexpected form of) presentation makes possible is the ability to break genre conventions. Sometimes genre conventions should be broken, or recombined, as Borsuk (2018) comments. Borsuk’s (2018) “new juxtapositions” are made possible by the opened structure of a recombinant text. In my case, what can be gained when I take my Parking Lot Paper and make it…not a paper? How do I encourage interactivity?

From Borsuk’s (2018) example of the book of poem snippets, my guess is that interactivity is supposed to mean something more than “clicking a lot.” Don’t most readers ragequit after clicking twice, anyway? Perhaps interactivity is supposed to encourage breaking the rules. In my case, “breaking the rules” could mean allowing readers to rewrite the darn policy that caused this mess.

What I’m less interested in is a site that’s a repository, like the Agrippa Files (sorry). There’s too much to choose from here, perhaps in keeping with an archive. Purpose means a lot in the creation of digital projects, and my goal is not to be an archivist here, but an author. An interesting question for tonight: do the people who are remediating already-published texts view themselves as coauthors? collaborators? archivists? Do these distinctions have any impact on purpose or on the design of the artifact?

As a technical communicator, I comment that this site is too well-organized. We don’t need links in-text, a menu on the right hand side (the very last place a western reader will look for it), plus tabs up at the top. A little purpose would go a long way to streamlining reader choice on this site.

I remember Between Page And Screen from earlier in the semester. I suppose by holding the book pages up to a computer’s webcam, there exist light waves? And currently, my webcam cover. This project seems to encourage interactivity through creating a product that can only exist through mediation of computer/book. My project doesn’t do this. Sad Katie.

Stauffer (2016) nicely points out a problem with “just” digitizing archives, instead of thinking of digital humanities as a way to create new textual artifacts. Whose archives are we digitizing, and what knowledges (always knowledges) are lost when we go digital? I have a different problem with digitization: access.

Currently the internet is controlled by ISPs, who charge money for access. Thus our knowledges are also controlled by ISPs, who (in the U.S. at least) do not serve citizens. For me to support total digitization efforts, the internet would need to be guaranteed as a basic human right. Of course this universal access brings to mind all those scary manifestos about the free internet from earlier this semester.

I can (and always could) see their point. For the internet to be a basic human right, it cannot be controlled by any government. That’s not going to happen, so I will settle for better laws that guarantee access to all users. Then digitization might be an option.

Week 12. April 2nd. Grumpy thoughts and html.

I'm glad I learned html. Really, I am. My first contact with it was as a technical writer in 2016, where I was told to transcribe all the MyReviewers community comments into html for publication. Henceforth I was always the person to advocate that we needed no more community comments. (In more recent, merciful times, the interface for creating MyR CCs has changed.)

Still, coding's darkness comes from spending so much time hunched over a computer only to see the output as a bunch of gibberish. And then the sinking realization that I've messed up, again. Thus, not a surprise that the thought of coding makes me want to head for the hills.

While I suppose my purpose here this week is not to comment on why there are so few folks doing coding, let me use my answers to Jones & Fraistat as evidence. For more on my project, refer to my post last week.

1. What is the focus of this edition?

To theorize legal erasure, and trace the concept's origins to problem policies at universities.

2. What is the markup scheme?

That's a good question. I had to google “markup schemes,” which “are standards which have been developed to make markup easier to do.” So then I googled “markup,” which means “[annotation of] a plain-text document's content to give information regarding the structure of the text or instructions for how it is to be displayed,” and I still don't know a) what schemes do what or b) how to find information on what schemes do what. Answer: No idea.
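In the spirit of answering my own question, here's a toy comparison I pieced together. The first snippet is display-oriented HTML; the second is my guess at what a structure-oriented, TEI-style scheme looks like (the element names are in the spirit of the Text Encoding Initiative, not copied from its actual guidelines), with invented content either way:

```html
<!-- Scheme 1: HTML, display-oriented. The tags say "make this a paragraph,
     italicize this bit" without saying what the bit actually is. -->
<p>Policy <i>0-007</i> governs <i>harassment</i> complaints.</p>

<!-- Scheme 2: TEI-style XML, structure-oriented. The tags describe what
     kind of thing each span is, not how it should look on screen. -->
<p>Policy <ref type="policy" n="0-007">0-007</ref> governs
   <term>harassment</term> complaints.</p>
```

If that's roughly right, then "what scheme" is really a question about purpose: do I want my project's text to carry display instructions, or searchable information about its own structure?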

3. What is the electronic apparatus?

Does google sites allow this sort of thing to be hosted? I assume that the end result of this data visualization is a chunk of gibberish, so I guess I just copy-paste it into a blank page? How do I get a blank page? Answer: Possibly my website?

4. How do you update the thing?

Let's not put the cart before the horse. I'd rather focus on having a thing first.

5. Who do you contact for help?

I feel like anyone I contact for help has to pass the Katie Doesn't Know How To Do This But She's Really Quite Intelligent Despite Impostor Syndrome/Graduate School's Attempts to Convince Her Otherwise So Don't Be a Condescending Awful Person Test.

As evinced by the last time I asked someone for help with [redacted, but a tech tool] and they literally copy-pasted the instructions from their previous email, because that's bound to help.

I don't write all these naysayings to be grumpy, but to illustrate how easy it is to walk away despite cries (Hi, Price) to involve specialists and laypeople to meet the needs caused by the increased quantity of available information (p. 146).

Week 11. March 26th. Wireframing

Since our readings this week were practical, I’ll use them to…ahem, continue wireframing my project. Because I’ve totally been wireframing this entire time. Promise. Best student ever.

My major project this semester has been this one on problem policies. Basically, last October the police told me that a creepy person who left a note on my windshield and then waited by my car on a different day to ask me why I hadn’t responded to that note wasn’t a stalker. The police were right in terms of university policy, but wrong in terms of state law. If they had known about state law, they could have let me know to file a criminal complaint.

Unfortunately for the two policepeople who had to deal with me, I was taking a transnational feminism class at the time. Thanks to Saidiya Hartman and Chandra Mohanty, I know exactly why actions that are definitely stalking are legally not stalking. But this irritating week in October got me thinking at scale: what’s the gap between policy and law at other universities? If my case was erased due to the gap, then could articulating these problem policies help explain why sexual harassment is and remains underreported in U.S. universities? Click link above for the rest of the conference paper.

So my project is a form of mapping, in that I’m mapping the difference between policy and law at universities with technical communication programs (they’re my audience, so I picked universities for my sample that they would care about). However, I’m no “humanist [tumbling] wholesale into a mapping exercise with rampant enthusiasm for sticking virtual pins in virtual maps” (Drucker, 2016, pp. 241-242). Besides, I’m not mapping where these programs are. I’m mapping their policies.

My argument is that many policies are designed around what Mohanty calls a singular representation of women that does not correspond to the material realities of groups of women. Policies are then written around that singular representation’s perceived ability to express will and consent, which makes those policies subject to gender (and a bunch of other stuff, but I digress) stereotypes. Yay.

Let’s go on a visualization journey then. I suppose the reader opens on…a policy document, a blurred outline of lorem ipsum. Because before they can get to my data, they are going to join me on a road trip through theory. So a fade-in-fade-out thing shows them what legal erasure is.

Before they get annoyed, the reader gets to see my data, which is still in a policy document form. Instead of actual policies, though, we see descriptions of discursive moves, i.e. “65% of policies in this sample require the victim to express their will for an action to be considered harassment.” We’ll do the three most prurient findings because triads are cool.

But wait, there’s more! My despairing reader can click words in the discursive moves for a theoretical examination of what I’m talking about. Also, I’m not sure anyone wants a theoretical examination, but they’re going to get one.

Clicking on words like “express their will” pulls up an infographic-like literature review that explains how these requirements often reify gendered conceptions of appropriate behavior, conceptions that particularly disenfranchise indigenous women and women of color, as Lugones is good enough to explicate.

Clicking on “be considered harassment” shows how many university policies place requirements for proving harassment on victims (because those policies are under Title IX’s jurisdiction, and for harassment to be a Title IX problem, it has to interfere with a victim’s school or work environment. So, as I found out, an action can be stalking, but stalking isn’t always harassment. It was a bad week in October.)

As for wireframing, I think I’ve just created an infographic, albeit one that can be clicked on. Or a map of problem policies.
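Since I apparently needed to learn in-line CSS anyway, the click-a-phrase idea sketches out in a few lines of HTML. Here's a minimal mockup using the `<details>` element, so no JavaScript is required (the styling is a rough approximation, and all the text is stand-in content from this post):

```html
<!-- One "discursive move" from the findings. Clicking the underlined
     phrase expands it into its theoretical examination. -->
<div>
  65% of policies in this sample require the victim to
  <details style="display:inline">
    <summary style="display:inline; cursor:pointer; text-decoration:underline">
      express their will
    </summary>
    Requirements like this often reify gendered conceptions of appropriate
    behavior, conceptions that particularly disenfranchise indigenous women
    and women of color (Lugones).
  </details>
  for an action to be considered harassment.
</div>
```

On Google Sites, I assume this would have to live in an embedded HTML block, if the interface allows one, which is exactly the heavily constrained publishing problem from earlier entries.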

Week 10. March 19th. An outing!

After a busy week at Cs and ATTW, I continued my conference-like activities at our DH Labs housewarming.

Week 9. March 5th. You're Going to Quote Yeats at Me, Sunshine?

I’m glad I’m not doing a pecha kucha this week because of the ever-growing risk that I’d sound like a tin foil hat person. However, Greenfield sounds so devoid of hope here that my contrarian self feels compelled to offer some notes on human indomitability. Greenfield closes his conclusion by describing four scenarios ahead (Green Plenty, The Widening Gyre [no relation to the poem], Stacks Plus, Perfect Harmony), and only one of them describes a machine-augmented loveliness. Which could have been California.

But in a world under capitalism, Green Plenty probably won’t happen. Instead what is more likely is scenario 3’s reliance on state-markets to regulate being. Here our fearless author pouts that “the gnawing anxieties of precarity have been replaced by something very nearly as unpleasant, and much more permanent: reconciliation to the fact that they have it as good as they ever will” (location 4732). As a precarious, anxious millennial, I’m not sure I appreciate this argument. But sure.

Greenfield has a point, though, about UBIs going toward healthcare and other services that a humane state should provide, even if provided “grudgingly, out of fear” (location 4726). The idea of UBIs for housing and medical care brings me to the concept of a humane state, as our author asks us to rethink both “humanity” and “state”. We’re hovering on the edge of humanity (location 4191). [Engineers program] AI to encroach. Greenfield provides evidence for this assertion in the form of scare-narratives about computers being able to play chess and paint Rembrandts. Art in the style of famous artists sounds like fanfiction--hasn’t hurt the publishing industry too much.

And the thought of a robot taking over my research is soothing at the moment. Greenfield is correct that many of the “tasks” that humans can do can be done by robots. But despite capitalism’s attempt to reduce existence to the completion of tasks, even the “essence of learning” and the “pinnacle of human aspiration” (location 4208), I’d like to discuss “magic,” “ghostly ‘inspiration,’” “soulfulness.”

I’m doing this because I’d rather not engage in example vs. example argumentation, but discuss humanity--apt for a digital humanities course. Also possibly because my life is a “complicated ouroboros of pointlessness dedicated primarily to the manipulation of symbols” (location 4720). Lol.


Not for the first time this semester, I’ve wondered where intuition fits into research. Positionality statements? Methods sections? That one time I used the End of Test Exhaustion Postulate [to fail] on a pre-calc exam? When we discuss epistemology, we ask how we know what we know. In research, our answers are still rational and measured. That’s fine. Our ethos depends on a logical progression of steps that construct knowledge.

Perhaps I’m the only weirdo in the room (as always), but most of the time I don’t know how I know something. Intuition guides so much of life--not streaming media’s inducement of slavering devotion via recommended videos and season binges. I contend that we don’t know why we do things, or how we know things.

I can’t prove this. I can point to the TED talk and the creepy whisper-voiced video for today, both of which mention that we don’t really notice how many ads miss the mark. We remember the one time Youtube actually showed me something that I wanted to watch. (2015, a chocolate tart. Two weeks ago, a trailer for some show with vampires and witches in Oxford’s Bodleian library. Can’t wait.) I can point you to my snarky comment in my notes for today: “the next recession will probably be because they finally figure out that none of this data actually means anything.”

Go back to the Creepy Whisper Voiced Video (CWVV), where some terrified data engineer commented that data isn’t reliable. Perhaps humans are just unpredictable? Agile=/=unpredictable, sorry.

I can’t prove this. Faith luckily isn’t subjected to reason, so I don’t have to prove why I’m not too concerned about AI taking over the world. (Constant media lulling humanity into a state vulnerable to persuasion, yup. But that ship has sailed.) Sure, AI can perform tasks for us, and maybe make too many decisions for us.

But they [evil corporate overlords] still can’t predict why we pause the Netflix binge to go into the kitchen. That’s why they built smart fridges*.

*I don’t know how I know this. Doesn’t mean I’m wrong.

Week 8. February 26th. Workism.

Today I’m interested in employing big data toward equitable ends. I don’t know how to do this, because see rant from a few weeks ago on oppression. But just for today, I will attempt to be a cyborgian philosopher-king (and yes, I can be a king). Please suspend your ire at the naivete of this entire post. Greenfield writes that "such decenterings may not be particularly upsetting to anyone who's ever sat with a set of questions we inherit from feminist, post-structuralist and ecological thought: questions about the death of 'Man,' the agency of nonhuman actors, the consequences of our decisions for the other life we share a planet with, and the extreme unwisdom of trying to articulate some kind of binary distinction about the boundary between humanity and its others in the first place" (n.p.). I will attempt to not be upset then.

Generally research starts by asking questions. Question I have: How can technology dismantle oppression? For the moment let me assume that oppression can be dismantled, and that technology can be the tool. Somewhere Audre Lorde whispers that the master’s tools will never dismantle the master’s house. Forgive me. I’m going to try.

Now what do I mean by oppression? At the moment I’ll use USF System Policy 0-007, which states that "The USF System strives to provide a work and study environment for faculty, staff and students that is free from discrimination and harassment on the basis of race, color, marital status, sex, religion, national origin, disability, age, or genetic information, as provided by law." I’m using 0-007 because I found a bunch of scary signs up in Cooper Hall this morning, and I took them down because I believed they were creating a hostile environment for folks who identify as victims of such discrimination or harassment. Since knowledge is situated (sorry Haraway), why don’t I employ this example as a case study? Doing so focuses my question.

The obvious answer here is to use surveillance technologies to figure out who is doing this and then kick them out of my State. But the master’s tools will never dismantle the master’s house; exile only creates exiles who rise up against the State. We can’t do that. I could also figure out what printer this person used and jam it. How long will their convictions survive against an inextricably jammed printer? But no. Silencing is a tool of the master. Also, burning books never works.

So I am forbidden my epideixis—a denial of my being as a rhetorician. I will turn to feminist research methods, which prioritize listening, inclusion, and action. So Philosopher-King-Cyborg-Katie would need to listen. Can I use surveillance technologies to listen? Ethics (and also Florida 934.03) tell us that we have to be transparent about our listening mechanisms. So, step 1.

Step 1: Put up signs where the creepy signs were asking to meet with individual.

I think that will be the end of this particular conversation, because people who have divergent views aren’t likely to want to meet. Perhaps technology can enable our meeting? Would they be willing to meet with me over some sort of anonymous online chat? This could work.

Feminist research tells us that when we meet with members of marginalized populations (though what if they only think they’re marginalized?), we should ask for their participation in our research. So I’d have to tell this troll that I’m interested in ending oppression via technology. I’m sure that the troll will approve of this endeavor. Once their approval has been obtained, I now have to listen.

Step 2: Listen to the individual.

Philosopher-King-Cyborg-Katie longs for the expediency of surveillance, for the efficiency of exile. Now I’d need to find out why this troll wants to leave scary signs up around Cooper. (And maybe not call my research subjects trolls.) Latour (2007, p. 167) tells us to avoid the signposts of Theory/Context/Capitalized Things, so I’ll stay slow and myopic. I’m here to listen, to find out why.

Now we move into participatory action research, so I can suggest that the troll and I work together to end oppression through technology. Marginalized populations are supposed to explain what they need and then I am supposed to figure out how to connect them with resources.

Step 3: Provide individual with resources.

Oh, lookie. I think I created a monster. But I think I used feminist research practices to do so. Here Philosopher-King-Cyborg-Katie would like to interject that jamming the printer would have been more efficient and effective.

The issue here seems to be scale. Oppression, Technology, Capitalism. In rhetoric: Episteme, Epideixis, Demos. The usual solutions are built at scale—this being the point of research as generalizable knowledge. Dismantling capital-T-theory is going to take processes that cannot be regimented, cannot be theorized.

I’m not sure if algorithms can be designed to resist processes. Greenfield’s chapter 8 suggests that the vast amount of data we’ve collected so far is intended to design systems that respond to every known scenario. That’s not going to work if we want to actually dismantle oppression.

Conclusion (for now): I have a headache. Listening infrastructures.

Week 7. February 19th. Dystopia. And apologies.

The apologies are for the belated entry. Last week my Pecha Kucha focused on dystopian visions of cryptocurrencies, and on why the narrative of dystopia in particular is so popular when we discuss digital technologies’ impacts on humanity. I also asked why our dystopian fictions seem to discard technology.

At risk of sounding like a weird person who rambles about oppression all the time, I hazarded a guess that oppression is written into our infrastructures (and thus our institutions). So of course an infrastructure as pervasive as technology is going to resemble oppression at first. That technology has become linked with capitalism and oppression is no surprise.

Dystopias feature a police state in which citizens are under constant surveillance, and the results of such surveillance are used to punish citizens who disobey the demands of the institution. This is usually when a dystopian novel opens.

As a person whose favorite chapter in all tech comm literature is “Technical Communication as Management System Control,” I’ll comment that we can avoid dystopian novels if we use panopticism to convince everyone to discipline themselves. This also makes me a bad person. (Also a professor once told me that I was reading Foucault incorrectly because I was too focused on individual impacts instead of systemic effects. More apologies.)

Digital technologies are also infrastructural, so they provide a perfect opportunity for dystopias to form. Given that this blog is linked to my academic persona, I will refrain from too many scathing critiques of oppressive infrastructures. I'm a good little worker I promise.

Week 6. February 12th. Congealed Labor.

"Congealed labor" is what most people mean, I think, when they say "black box." Just like "there's technology and stuff" is possibly what most people mean when they say "cyborg," a la last class. Greenfield's chapter 4 irritated me to no end this week. To start poking at his argument, I use the second theory from "Before You Make a Thing."

Ask who benefits most from automation and novelty.

According to Greenfield, the folks who benefit most right now are the ones who know how to make things. I googled those people. They look like the same people who have always oppressed us. Forgive me my skepticism, but I have a bucketload of theory that says those people are always going to (try to) win. Can the subaltern speak? No; we were never meant to survive. Apologies to both Spivak and Lorde for mashing their most quotable quotes together. I did it to emphasize my utter disbelief that the world without scarcity Greenfield highlights will ever materialize.

It's telling, how the biggest concerns Greenfield notes are material, not theoretical. First, the need to distribute the means of fabrication. A space/place concern. Usability concerns, largely reduced to ease-of-access or clear-language. Available materials. Assembly. We can fix all of those. What we can't fix: the way that oppression works. But surely, my strawman says, you didn't read all the way to the chapter's end, where Greenfield discusses the overthrow of capitalism! He even says "our common sense, our values, our very notions of what is and is not possible...we seem to have a particularly hard time with the notion that these intimate qualities of self might rest on anything as bathetic and concrete as the way in which we collectively choose to organize the world's productive capacity" (n.p.).

See, Katie, my strawman says. It's just your fault for being closed-minded. Your fault for being "sustained, mediated and satisfied by mechanisms of market." Your fault for being "[broken] to service and [inured] to our complicity in the unspeakable" (n.p.). See, I did read the chapter. Now let's talk about complicity. Despite strategic use of "we" and "our" here, I didn't see a lot of acknowledging said complicity. Dear strawman, do you really acknowledge your complicity in the oppression of [redacted because this is a public blog linked to my name and academic presence]? Because it seems that your utopian vision of "the chance to live in an environment we've fashioned ourselves, using the tools we ourselves have created...[working] out the shape of the future" looks a lot like the past we eradicated.

So the social and intellectual heavy lifting needs to begin with that, dear strawman. And given our yet largely-unacknowledged complicity in the annihilation of so many people and cultures, I doubt this "maker community" is actually working in my best interests. Oppression isn't in the eye of the beholder, Greenfield, it's woven into our being.

I believe that in ranting thusly, I have "[engaged] directly the power of technology" and specifically tangled it with [redacted because see above redaction]. "Before You Make a Thing" is great! I should have started with this, and will definitely make my students read this. I like this list (despite the profusion of bullet points which don't really need to exist. Just because <ul> is easy doesn't mean it's a good method for organizing tons of information) because it asks us first to consider the ideas that I was so angry at Greenfield for considering last. Perhaps I am only a curmudgeonly rhetorician, but purpose-audience-design is how I view the world. This list is intuitive for me.

A last note, for now: being able to 3D print more legos sounds like a great idea. I always ran out.

Week 5. February 5th. Things?

I read “By His Things You Shall Know Him” last this week and am still trying to figure out where this fits into the other readings. “From Beyond the Coming Age of Networked Matter” explored what happens when humankind eventually learns how to unmake reality, while Greenfield’s “Augmented Reality” chapter showed us how to make reality more real. Perhaps the relationship between this week’s readings is that they all deal with digital technology’s ability to make and unmake reality, or simply to mediate what is real.

Along with Greenfield, I’ll begin with my memories of Pokemon Go. As someone who 1) doesn’t own a smartphone and 2) doesn’t play games and 3) has the physical balance of someone three times her age and therefore needs to look at the ground she’s walking on to make sure she doesn’t fall over, I didn’t play. However, I remember quite clearly encountering one of my friends walking back from the library, phone in one hand, coffee in the other. He waved the phone-hand at me. I waved back. But he wasn’t looking at me, just whatever pokemonsterthing was superimposed over my image. Now, I admit that I am plagued by the tendency to over-assign importance to myself, but I’d like to ask: How can Pokemon Go claim to be an augmented reality when its headspace doesn’t include me? Or other people, a question also asked in this reddit thread.

Greenfield points out that in augmented realities, “there is a very real risk that those who are able to do so will prefer a retreat behind a wall of mediation to the difficult work of being fully present in public. At its zenith, this tendency implies both a dereliction of public space, and an almost total abandonment of any notion of a shared public realm” (n.p.). Taken shallowly, if we consider digital technology the only mediator of public spaces, then here’s the danger of ARs. Given Greenfield’s own definition of AR as a “[way] we already buffer and mediate what we experience as we move through space…[imposing] a certain distance between us and the full manifold of the environment” (n.p.), I’m not sure that we can conceive this AR danger as new or a particular quality of cyberspace.

For example, the last time I squashed a bug in my house, I apologized to the offending insect, but explained to its corpse that the state of nature is war. My inspired interpretation of Hobbes aside, was I not also augmenting my reality by “[projecting] an alternative narrative of [my] actions” (n.p)? Better AR here apparently, due to what Greenfield calls the symmetry between my psychic investment and immediate physical surroundings. Imagine if I had to squash the bug, pull up the Hobbes on a phone, and then make my grand pronouncement—somewhat a misuse of the *checks notes* joke.

Which may be why Google Glass is held up to be a better AR tool than the smartphone, due to its more-symmetrical integration into physical surroundings. Presumably Pokemon Go deployed through creepy glasses would have superimposed a pokemonsterthing on top of me as I encountered my distracted friend on my way to the library that day, and provoked some sort of mundane “Katie you’re a pokemon!” conversation. Reality (me! I think I’m real? Most days I think I’m real.) would have been augmented by whatever pokemonsterthing that Pokemon Go had created. I would have preferred chapter 3 to end with this idea, rather than being called one of the digital-technologically-unmediated in need of help.

Instead, Greenfield glimpses a future where we’re not sure what reality entails anymore, a future also hinted at in “From Beyond the Coming Age of Networked Matter.” Here Ultimate Knowledge results in Ultimate Destruction, perhaps a parallel circumstance to Greenfield’s idealized notion of AR. The integration of humanity into the universe via digital technology again fails only due to a phablet. (Fun fact, until I googled it, I thought a phablet was a tablet with glitter. I stand corrected.) Crawferd is not the first person to say something like “The universe is dark and ancient and monstrous, and hostile to our frail place within it. If we ever peek just once through a crack in the doors of perception, we shrivel into absurdist nothing. We’re cackling madmen eating flies. We’re mental mummies forever frozen in fear.” See, there was a point to bringing up Hobbes earlier.

Although I’ve written “sheeple” more times in my Greenfield book than I care to admit, I hope that I don’t come across as one of those tinfoil hat people who see in digital technologies the end of all things. Like any good little new materialist, I believe that reality is circulated through the connections we make. Digital technologies’ ability to help us make more connections makes realities (Greenfield cautions: certain realities) more real. Given all this talk of connections, I’m still having trouble connecting “By His Things Will You Know Him” to this concept. Something to do with AR and things, I suspect, coming from the last line: “Things are fine.”

Will AR eliminate things? Or make them better? Good little feminist new materialists don’t prioritize citing Latour anymore, but talk of making things more real means I think of his Dingpolitik (2004), and the way that “objects—taken as so many issues—bind all of us in ways that map out a public space profoundly different from what is usually recognized” (p. 15). Latour derides fundamentalists as those who “think they are safer without…those cumbersome, torturous and opaque techniques, they will see better, farther, faster and act more decisively” (p. 31). As I wrote in the last paragraph, I hope my deep suspicions about capitalism-arbitrated AR do not make me a fundamentalist, merely concerned about access. And speaking about access, like Greenfield, Latour* concludes that reality-mediators help us participate in publics—assuming we have access to them.

*I don’t actually think that Greenfield and Latour have ever read each other, so I feel a bit guilty about drawing this connection between them. My AR is still glitchy, apparently.

Week 4. January 29th. You May Be Quantified, But I'm Not.

The Greenfield chapter we read this week was on the Internet of Things, a concept a student explained to me in less terrifying terms four years ago. Then, the IoT was simply the ability to network devices in a home so that the home ran more simply. Greenfield expands this meaning of the IoT to a "networked perception" of spaces, places, things, and bodies. As someone who has, at one point at least, identified as a new materialist posthumanist, I'm pretty sure that spaces, places, things, and bodies are already inextricably linked. To build on Jorgensen (2016, p. 42), information has always flowed between ourselves, the rest of the world, and technologies.

To my ever-present strawman, I point out that I borrow Ong for technology: a thing that makes another thing easier. Sorry Ong.

In the grand although slightly nebulous way of new materialists, I'll ask: haven't we always been connected? Why does it take the Internet to highlight these connections? As a cynic, I'll also point out that the IoT concept has been highly capitalized. And normalized--why is it weird for me to say that the sunlight links all of us but those internetty signal things don't? So as a way of being I'm not opposed to networked reality. What I'd like to explore in this entry is our fear of sentience.

For most of us the sun is not alive (except for this one Doctor Who episode possibly?); its energy might be everywhere, but it is not a moral agent, i.e., something that can be assigned responsibility (a definition borrowed from Johnson & Johnson, 2018). Even though the internet's signals are (like sunlight) incapable of being assigned responsibility, somewhere a human is responsible for their being. And humans can use these signals to hurt people, as Greenfield's small case study shows. To circle back to Johnson & Johnson (2018), their answer to the question of whether objects have moral agency is that an object must be institutionalized by the state to act as a moral agent, and that in the U.S. humans are held responsible (p. 135). The early days of the internet may not have been beset by concerns about security and privacy because the internet was then not perceived to be a manifestation of the state.

So what I termed sentience up above appears to be the eye of the state? I also know that we're worried about the ways that such networks can manipulate our reality, thanks to the chapters in Schreibman et al. What makes reality real? As a possible new materialist posthumanist, isn't reality made more real by the relationships we form? So if a smart toaster is able to change your reality, your reality wasn't super real to begin with. Remaking reality is fun! We should do that! (And also, haven't we always done that?)

Week 3. January 22nd. Contestations. Totally a word.

This week the readings discussed divides in Digital Humanities disciplinarity. And alliteration, apparently. From the readings, I got the impression that DH is a young discipline: is it? If so, all this talk of manifestos and ire and unrest is to be expected from a fractious new place of inquiry. An article I should probably remember because Comps are closing in upon me once told me how to develop a discipline: 1) shared subjects of inquiry, 2) shared scholarship, 3) shared theory, 4) shared methods/language, 5) common institutional places. These things don't develop quickly, and, as in all epideictic endeavors, some stuff (and some people) get run over along the way.

Hey! I've been looking for an example binding epistemology to epideixis for a while. (Mostly because they sound similar, so I assumed they must be similar.) Epideixis (for me as a rhetorician) is the process of building communities. There's no inclusion without exclusion, so epideixis is always violent. Here, we're building knowledge (episteme) through various institutional and disciplinary practices (epideixis). Epistemology becomes violent because, as the readings suggest, our knowledge can't become Our Knowledge unless it crushes other knowledge. Just nod and pretend this has to do with the reading.

Barlow's (1996) "Declaration of the Independence of Cyberspace" was most dubiously received by me. How can cyberspace "provide our society with more order"? While I don't doubt that many online communities have their own social contracts, I'm not sure that the unified We of this declaration holds up to Losh et al.'s (2016) point that "technologies are complex systems with divergent values and cultural assumptions." And nothing designed is neutral, a thing I said in class last time that people nodded at. Apparently I stole that off Twitter: here's the original source.

But perhaps 1996 was a simpler time? Or perhaps the manifesto as a genre exists for certain purposes, to raise a hand, to draw a line ("DH Manifesto 2.0"). Questionable pictures aside, this one was a decent primer of what should be done to enact a discipline. As a tech writer, I also appreciated their bolding the important parts. Including the part where the enemies of DH were named; since I'm not a diminisher, IP trafficker, or copyright protector, I think I'm allowed to be in this class.

I may be a false fellow traveler, though, in that I'm okay with change, or continuity, or whatever the institution wants me to be okay with. Some of us would like to 1) get a job and 2) keep a job, and thus I see the instrumental/utilitarian value of DH (Chun et al., 2016). It's in institutions like [redacted; see 1 and 2 above], where the majority of folks doing DH work are graduate students and NTT faculty. Some of us are told to take the courses, get the certificate in the name of professionalization. (Full disclosure, I have received certificates in both Technical Communication and Women's and Gender Studies in the name of professionalization.) Does my desire to 1) get a job and 2) keep my job cheapen the quality of work I produce? Something for my reviewers to figure out.

Done, unless you'd like a point to these thoughts.

Week 2. January 15th. I Can Insult Smartphones All Day!

Best class ever. Yes, I know that the point of class wasn't to insult smartphones. I'm pretty sure the point of the reading was to explore how smartphones do (and don't) enable new ways of being. Ooh, what if the smartphone is an arbiter of ontologies? Fun words.

Before discussing Greenfield, I thought I'd stick a QR code with this story to an oak tree outside. As you were reading about Temple Terrace's dying, dangerous oak trees, perhaps you'd start to ponder the benefits of not standing beneath one. It was either this or stick a QR code over everyone's phone camera so they'd have to borrow someone else's phone to do the activity. Too much? ¯\_(ツ)_/¯

So. As a critique of the smartphone, I found this chapter interesting. Some of the utopian promises the book explores are hazy: magnets that can tell which way you're turning (see: Katie, Brooke, and Brooke's smartphone get lost together in downtown Tampa), GPS that is super accurate (see: Katie, Peter, and Peter's smartphone get lost together in a St. Petersburg parking garage). I don't know why a critique would gloss over this haziness.

Greenfield seems interested in the capability of data to make our lives...more understood? (at least by others) But my cynicism and three years of engaging with MyReviewers make me ask: does anyone actually parse this data? If you're reading this, you may leap to discuss USF's cryptic student data thing (CSDT) that I'm probably not supposed to know about. I will point out that USF's CSDT doesn't talk to other USF CSDTs, so integrating CSDT information with other CSDT information is a long process of copy-paste, copy-paste. Then you have to analyze the results by close reading. I still can't tell you how I know this.

Sure, all this data can be integrated, but I'm not sure it is. Perhaps everyone thinks it is? In this Atlantic article, the author is both relieved and disappointed that the Internet does not know all the things about them. The author receives data without interpretation; interpretation remains the province of folks like us. Not sure how I feel about this.

We spoke in class later about privacy. When did cell phones start needing lock codes? When their applications became facets of our being? If a cell phone is locked, then shouldn't the data on it be similarly locked? Isn't that the promise a smartphone implicitly makes us? And when I say Us, I mean Not Me because [insert tinfoil hat person rant here].

We use our hand (generally our dominant one) to control the device. More specifically, we use our fully opposable thumbs--like, the capability that's assumed to proclaim our primal magnificence or something. Our passcode transforms the dark lock screen into the vibrant whatever our homescreen picture is. Scattered across our homescreen picture are applications that represent the various round-edged compartments of our lives.

Personalization, individualization, customization: the promises of the smartphone. Yet Greenfield subtitles the chapter "the networking of the self" for a reason; he analyzes the nonselves of your phone. Of course, the phone isn't exactly what's up for debate here. Up for debate are the networks that digital technologies bring into being, and whether those networks are useful for us as digital humanists.