On decolonizing my Web use

In this email I'm going to, superficially, talk about Barnabus, a piece of software I'm working on. But mostly it's going to be, as a lot of writing is, about culture, philosophy, and how those intersect with our ways of living.

(Note: this email is also the first piece of writing I'm drafting while using HashUp, and hopefully will be the first rendered through Barnabus. There might be issues with how it renders, especially since I'll end up having to copy-paste it into Substack to send it to y'all.)

As I've said before, recently I've been talking a fair bit with Phil Hagelberg about technology, and more recently Jess Mahler has been in those conversations as well: their input definitely helped shape this thinking.

One more thing before I get started: I'm tweaking how I handle my online presence: that's part of what the move to Substack is. I'm also winding down my Patreon, in favor of Ko-fi, which now lets me accept monthly donations, and also lets me operate a store. If you're a Patreon patron, please go ahead and cancel that subscription (and set up a similar one through Ko-fi, please.)

Right, on with the actual discussion: Barnabus.

Well... after a bit of exposition.

I live in this weird position of privilege, that I bet a fair number of folk reading this do.

I am online, clearly, and I have a computer, but my most recent piece of hardware is a ~6 year old smartphone.

And it feels like every day, another part of the Web increases how much power it requires to use, and effectively says "either buy new tools or be excluded." Fred Hebert, in "You Reap What You Code", relays parts of Ivan Illich's "Energy and Equity": highlighting the concept of an "oppressive monopoly."

To rephrase the rephrasing, Illich looks at a society that has developed for pedestrian transit, and later bicycles. In it, all people have pretty much equal access to the spaces the society exists in.

It's easy to view cars as a way to increase access in this society: you can go further, a car is easier to maintain than a horse, and it's less tiring than a bicycle. Viewed this way, the society would embrace cars into its infrastructure. As Hebert puts it:

Rather than having a merchant bring goods to the town square, the milkman drop milk on the porch, and markets smaller and distributed closer to where they'd be convenient, it is now everyone's job to drive for each of these things while stores go to where land is cheap rather than where people are. And when society develops with a car in mind, you now need a car to be functional.

In short the cost of participating in society has gone up, and that's what an oppressive monopoly is.

Hebert goes into a discussion of software development, and I'll come back around to his piece in a bit, but for now I'm going to look at this from a social perspective.

I feel, more and more, like a person who has a bicycle, in a world that is rapidly transitioning to cars. However, unlike the world of roads, which fill the physical space between so many parts of our lives, the places of the Web where one is not are largely invisible: if you don't make the effort to get onto a part of the Dark Web, you're probably quite unaware what is happening there. If you aren't on the Fediverse, you probably aren't aware what people are talking about there. And, if you are there, you probably aren't aware of the conversations being conducted in formats your front-end doesn't accommodate, or in languages you don't speak.

But in real life, if there were a community of people in your neighborhood that you didn't associate with, you'd still see them: their house is physically visible from your front porch, right?

But the Internet, if it is a "space", is non-Euclidean, and in this space, your neighbor's house doesn't exist. Your neighbor who uninstalled Instagram because it started draining their battery? The neighborhood simply no longer includes them, or wherever they're now sharing photos, if they have anyplace they can do that.

It... warps and wraps to bury and erase the existence of the neighbor, perfectly stitching over where it was with a neighborhood that is less than it was.

This has implications in a society living under physical isolation, like we're now doing because of COVID, that horrify me.

If the Web is our community, then our community has no space for people who are not Collaborators, because it is only Collaborators who can secure the technological privilege to participate.

And unlike city centers which have to struggle to obfuscate their homeless populations, the Internet has no such struggle: people are replaced by a 404 and that's that.

People working in city centers have, in my experience, a hard time remembering they're living with homeless people. People I talked to about these concepts in preparing to write this newsletter admitted to having a hard time remembering they're no longer living with people who don't have the privilege to maintain an active Web presence.

I think it's worth pausing here, to ruminate on a question: how much work would you, personally, have to do to form a friendship with someone who doesn't have Internet access, today, given COVID?

Now, consider: it requires financial privilege to secure technological privilege (e.g. Internet access). In effect, we're saying that people need financial privilege in order to be able to socialize in a society living under physical isolation.

That is, in a word, assimilation: it is using the threat of loneliness to coerce people into securing enough income to purchase technology that enables the socializing that prevents loneliness.

"Loneliness" here glosses over all the implications of social isolation: how do you get a job if you don't have a way to say "Hey, I'd like to apply!" to a business?

And "securing income" glosses over what I earlier stated, that the only way to get such an income is to be a Collaborator.

The effect of all this is that by using the Web, by continuing to socialize on the Web, through COVID, we are coercing people to become more active Collaborators with the contemporary kyriarchy, creating more Web users, coercing more people...

Not to understate things, but: bummer! This all paints a pretty dire view of our immediate future, one where COVID continues to affect our ability to live in physical spaces for a couple years. And it undermines a lot of thinking I've heard put forward about how to make society better during that period, which all rely on Web technologies.

Now, I might be wrong about all of the above: it's based on shoddy logic, to say the least, but I've come to believe that these situations are complex enough that some leaps of reason are necessary to progress. We live on, as far as we can tell, the edge of our temporal existence, after all.

But since it's a path I don't see many other people pursuing with their explorations and leaps, it's where I'm going to head, and that all brings us back to the topic of this newsletter: my Web presence.

One big step I'm taking is: I'm trying to decouple my communication - and other people's - from the Web, even if it is carried over the Web, and in fact decouple it from computers as much as I can: move syntax over to semantics. To decolonize computers, decenter them.

For example, rather than drafting this email in Substack's editor, it's saved as a file on my local computer. True to the term "Web presence," the emailed newsletter is just one presentation of the writing, which exists, off-line, here, as a file that I can use and reuse.

There's some difficulty in actually doing this: Substack's editor is only HTML, for example, and I'm writing this as plain text. I could use a different editor that produces HTML, but then I'd be left without as basic a data resource to work with later.

Let's go back to Hebert talking about Illich's oppressive monopoly:

What are the things we do that we perceive increase our ability to do things, but turn out to actually end up costing us a lot more to just participate?

We kind of see it with our ability to use all the bandwidth a user may have; trying to use old dial-up connections is flat out unworkable these days. But do we have the same with our cognitive cost? The tooling, the documentation, the procedures?

Hebert is talking about the software development side of things, but I think this stuff is true of almost all computer use: navigating the world of Facebook apps is stunningly complicated compared to the limited pitfalls of email's From, CC, and BCC. (Not that plenty of issues weren't caused by a misused carbon-copy.)

So, the software we use is created to make it easier to do things, but ends up creating difficulties.

Back to Hebert:

The key point is that the software and practices that we choose to use is not just something we do in a vacuum, but part of an ecosystem; whatever we add to it changes and shifts expectations in ways that are out of our control, and impacts us back again. The software isn't trapped with us, we're trapped with the software.

Hebert then points to the ironies of automation, which argues the point from a specifically computer-oriented perspective: the automation of tasks. (In the context of this newsletter, we're looking at the automation of social communication, which is a broad phrasing of what "social media" does.)

Automation shifts the conductor to the role of monitor, who is now not doing anything but must still be aware of the task, in case something anomalous happens, so the monitor can become the conductor again.

This introduces two problems: it's hard to pay attention if you aren't doing anything, and if you only handle emergencies, you might not be well-practiced to handle them when they arise.

Bringing this to contemporary social communications on the Web: we've heavily automated huge chunks of the task, so it's hard to pay attention to many things: the gradual absence of a neighbor who hasn't upgraded their phone in a while, or the humanity of those people you are able to talk to.

Looking at my own interactions with computers: right now, almost none of it is automated, so it's important I automate correctly.

Here's where Hebert's article really picks up steam, in both providing criticism that can be applied to how we use the Web, and how I could do things better, at least for myself.

First, it highlights that our model for viewing computers is flawed, and then emphasizes the importance of correctly building and maintaining models. Our contemporary model is that “humans are better at detection, perception, judgment, induction, improvisation, and machines are better at speed, power, computation, replication, and parallelism.”

But that isn’t an accurate view of our relationship with computers. Instead, “computers have low context awareness, which humans align. Computers have low change sensitivity, which humans stabilize. Computers have limited adaptability, which humans repair.” And, in reciprocity, humans give attention to the information computers provide, recognize anomalies in that information, and change based on it.

In short, humans and computers are a team working together.

What does the team of a human and their social media look like? To me, it looks a lot like the first model, where the computer desperately tries to automate away your control, putting you in the position of monitor, who only steps in when it’s something the computer can’t handle.

Complicating things is that what we monitor is largely out of our control, influenced by push notifications and feeds organized by profit-oriented heuristics, and so the very problems we seek to solve change, and our conceptions of possible solutions change to match. We change our mental model to match the capabilities of our tools, and right now, our tools are ones that take our input, strip it of context, and frankly, do whatever they want with it: that they show it to “friends” is largely incidental, something to fill in the space between ads.

And for those on the Fediverse, it isn’t much better there: boosting allows posts to fill a similar role, especially with the heavily clique-oriented communities of most of the English-speaking Fediverse. You can turn it off, and then you’re just oblivious to the commentary inspiring the shitposts from people you follow who aren’t seeing it.

Popular discourse is much like the election: as much as folk might want to avoid it, its prevalence causes a kind of reverse herd immunity, where it’s impossible to avoid someone who is transmitting it.

And that’s, I believe, largely a consequence of the models that social media follows - whether it’s marketized or not. For example, I think the Web’s unquestioning love of “public-first” communications makes it nearly impossible for conversations on the Web to not boil down to whatever the topic du jour is. And our love of real-time chatter is going to make chatting with our relatives living off Earth real difficult, assuming such a thing happens.

Hebert gives some advice:

One simple step, outside of all technical components, is to challenge and help each other to sync and build better mental models. We can't easily transfer our own models to each other, and in fact it's pretty much impossible to control them. What we can do is challenge them to make sure they haven't eroded too much, and try things to make sure they're still accurate, because things change with time.

This is exactly the sort of thinking that has led me to strip down to just Email, to move from Emacs to nano (no joking!), and otherwise really break down my use of computers to the basics. The mental models I was having to work with to do things on computers were incredibly complex: concepts like “SEO” were intrinsically linked with “writing an essay” because of the latter’s dependence on the model of “blogging.”

And it’s what has led me to look toward information sciences, not just computer sciences, in building my own models for how to think about, and thus do, computing, and all the tasks that incorporates. (Remember how this essay started: ultimately making my computer information accessible to people without computers is a goal.)

Hebert then goes on to talk about what technical stuff individuals can do. I’ve made “doing computers better” my business, so I figure it’s worth it to go through their suggestions and see how I have, or can, incorporate them into my own models of computers and user habits.

Hebert says that users “can't just open a so-called glass pane and see everything at once. That's too much noise, too much information, too little structure,[… So t]o aid model formation, we should structure observability to tell a story.”

Even Smalltalk users would agree that a structure to one’s observation is important to one’s ability to understand a system. Hebert says, clearly, “There has to be a connection between the things that the users are doing and the impact it has in or on the system, and you will want to establish that.”

Hebert advocates for logs as being a tool for establishing this, and has some guidelines for accomplishing this. If I understood correctly, removing a lot of nuance, «log the facts of interactions between components, rather than focusing on logging interpretations or the internal mechanisms.»

I think Hebert here is echoing a lot of the sentiment of computer communications’ wisest elders and most ambitious youth, which I’ll paraphrase as “model systems as an exchange of messages between entities.” But I’ve always heard the advice directed at models of computer systems, not models of computers and their users. Implementing the advice through logging seems… almost magic in its simplicity, at least to me.
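To make that a little concrete, here's a minimal sketch in Python of what "log the facts of interactions between components" could look like. The component names and message shape are hypothetical, invented for the example; they aren't from Hebert's article or from my own tools:

```python
import json
import logging
import sys

# Log the plain facts of an exchange between components, leaving
# interpretation ("this was an error", "the user is idle") to the
# human reading the log later.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")

def log_interaction(sender, receiver, message):
    """Record the fact that `sender` passed `message` to `receiver`.

    Returns the serialized log line so callers can reuse it.
    """
    line = json.dumps({"from": sender, "to": receiver, "message": message})
    logging.info(line)
    return line

# A hypothetical interaction: an editor tells a renderer a file was saved.
log_interaction("editor", "renderer", {"event": "file-saved", "path": "draft.txt"})
```

The point is only that each line records who said what to whom, as data; what it all *means* stays with the human.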

Thinking more generally, I can see how my own recent efforts have been, in part, an attempt to more explicitly recognize the human user’s place in any computer system. I’ve recently started acknowledging that almost all computer systems start with a human defining a schema for the data records that the system will operate on, and that is a resource that exists between humans: the computer, despite doing all the operation beyond “interpretation,” doesn’t actually understand what the schema represents. This has, cognitively, helped liberate data from the computer systems which handle it: they’re simply a convenience, a way to automate their handling.
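As an illustration of that (the record type and its fields here are invented for the example, not anything from my actual corpus), a schema can be little more than human-readable documentation that a program happens to be able to check records against:

```python
# A hypothetical schema for one kind of record in a flat-file corpus.
# The comments are the part that matters to humans; the types are the
# part a computer can conveniently automate checking.
BOOK_SCHEMA = {
    "title": str,      # the book's title, as printed on the cover
    "author": str,     # primary author, "Last, First"
    "lendable": bool,  # whether the copy can currently be lent out
}

def validate(record, schema):
    """Return True if the record has every field, with the expected type."""
    return all(
        field in record and isinstance(record[field], expected)
        for field, expected in schema.items()
    )
```

The computer can enforce the shape, but only the humans sharing the schema understand what "lendable" represents.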

Hebert also cautions that these systems, even made observable, are unlikely to be able to control for all conditions that might rise up from their use. They say:

And here we fall into the old idea that if you are as clever as you can to write something, you're in trouble because you need to be doubly as clever to debug it.

That's because to debug a system that is misbehaving under automation, you need to understand the system, and then understand the automation, then understand what the automation thinks of the system, and then take action.

If the “system” is translating information and its model to other people, then most of the computer part of it is the “automation.” And gosh, are we ever in trouble, because, speaking for myself, I am not nearly as clever as all the writing that went into making even the most basic parts of my computer, let alone everything that makes up the modern Web!

[E]ssentially, brittle automation forces you to know more than if you had no automation in order to make things work in difficult times.

I don’t think anyone would argue that the Web’s existence forces people to know a lot of things they’d never have to learn in order to be able to do relatively simple tasks like “send a letter.” Hebert advises that in order to be able to avoid this, the system needs to be written with the assumption that its automation will encounter conditions it cannot control, and control will be given to a human operator.

[M]ake it possible for the human to understand what the state of automation was at a given point in time so you can figure out what it was doing and how to work around it. Make it possible to guide the automation into doing the right thing.

Hebert, in the next section, suggests we look to accessibility for ways to enable these hand-offs, which are where a system’s new capabilities emerge, as doing so can help avoid “nasty surprises.” A lot of the systems I’m planning and developing for my own computer use are centered around text and its manipulation, which provides many interesting routes for providing access to information, but I’ve also been trying to look toward the forms of access I see needed by people around me: for example in my design of a user-account system for my MUD, I’ve been orienting it around plural, not individual, identities.

“Complexity has to live somewhere,” Hebert repeats, as so many systems engineers have: “if the code is to remain simple, the difficult concepts you abstracted away still need to be understood and present in the world that surrounds the code.”

I think that intentionally partitioning the data schema into the realm of human complexity, as well as (as Hebert seems to suggest) pushing the interactions between systems into that realm too, leaves the internal mechanisms of the system within the code’s complexity.

Given what information has to be handed off to the other users of the system, I think that’s appropriate: whether the data structure’s complexity lives in the code or in the documentation, it’s going to have to be understood by some users. The question is just, how esoteric is that knowledge? By putting it in the documentation, by modeling it as data schema, it’s made accessible even to people who don’t automate computer systems. Put in the phrasing of Hebert’s last section: segmenting the information part of computer systems away from computer science and constructing it as a part of information science “diffuses” the work of understanding a system across its users.

So, with all this in mind, what am I building?

Right now, I’m building a corpus of data, currently living in flat files. Different pieces of information correspond to different documented schemas: some pieces of information are newsletters like this one, others are lists of books I can lend out, others are about seeds that I’m storing.

My immediate next project is Barnabus, which will take in one or more of those pieces of data and render them as HTML.

In essence, a static website generator, though I plan on using it to generate single long documents that I then upload to Ko-fi as pay-what-you-want downloadable content.

Barnabus itself relies on HashUp, a subsystem that parses my bespoke text markup, so there are a lot of messages passing around here.
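As a sketch of just the rendering step: HashUp's actual syntax is bespoke and not shown here, so this toy stand-in pretends the whole markup language is blank-line-separated paragraphs, rendered to HTML the way a Barnabus-like tool might:

```python
import html

def render(text):
    """Render paragraphs of plain text as HTML.

    A toy stand-in for a markup-to-HTML pass: it splits the source on
    blank lines, escapes each paragraph, and wraps it in <p> tags.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return "\n".join(f"<p>{html.escape(p)}</p>" for p in paragraphs)
```

The data stays a plain file I can reuse; the HTML is just one presentation of it, produced on demand.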

I’m really excited to see how the project pans out. While it is essentially a static site generator, I think the intention I’m holding as I develop it will make it an interesting base to work from as I continue to try and make sense of my computer use.

Support my work