When User was Equal to Developer

Olia Lialina pointed me to this amazing post she wrote back in 2010 about what she has dubbed the “Prof. Dr.” style of webpages, which can be dated back to 1993. In fact, the post is a tour-de-force of information about design in the early days of the web. She explains how she tries to convince her Interface Design students that site design is not reducible to Adobe’s Creative Suite. Rather, she pushes them to examine the early site design of the mid-1990s web:

….an era when the web was build and arranged and decorated by amateurs, when very web specific genres and looks were brought to existence, making it an incredible place to experience.

But even before the amateur free-for-all of the mid-90s there were the “Prof. Dr.” webpages, and I absolutely love the way Olia frames this early genre of web design:

“Prof.Dr” is a codeword, a tricky search request. I am aware of the fact that there are users outside of academia as well who always designed their sites in pure markup or redesigned according to 1993 standards recently. Still I suggest to use this name based on a scientific title as a tribute to the history, and reminder that all around the internet the very first pages were build at universities. To cement this term, within this article I’ll use only pages of senior academics holding a doctoral title.

How brilliant, an homage to the “Prof. Dr.”s who designed the web! But that’s not all, oh no, that’s not all:

A “~” in the URL continues to be part of the look and reminds of early computer culture, when “user” was equal to “developer”.(3) That’s why, where it is possible, I prefer to use the examples with a tilde.

Disco, this is what I have been looking for to complete the circle with Domain of One’s Own. The tilde was an indicator of a moment on the web when a user was equal to a developer! That’s it, we can’t turn back time, and I agree with Olia that this is not about a new retro aesthetic (although my impulse to make it that is very strong). This is about the recognition of a moment in time when a web space at a university was synonymous with developing the web. Helping to build it. In GeoCities terminology, homesteading. And that’s what we are trying to do with Domain of One’s Own: bring back a sense of web literacy in terms of design, history, and the possibilities to continue to participate in it as an open, dare I say liberatory, space.

But Olia doesn’t stop there; she is light years ahead of me. She goes on to theorize the power of the primitive nature of these “Prof. Dr.” pages.

Primitivity tells us the story of the browser being not only a browser, but also an editor. Every user of the early web was a producer of web content. Web pages were to be opened in the browser to look at them, but also to edit them, using existing pages as templates for new pages. The simple design of HTML made it possible for the first users to create state of the art pages with only four to five principal tags. The result was an extremely fast growing web. There were not many options, this is why we got many pages.

And she goes on…

Prof. Dr. pages look terribly the same. As if they were generated automatically by the browser, as one student said. Though, ironically, they are among the last pages generated completely by humans, not content management systems or services.

They look according to the viewer’s browser settings. This reveals the belief of the early 1990es that any visual design should be left at the discretion of the user.

A concept she later refers to as the “End User = Designer.” A reality long gone, and as she notes “it could only work for the very small web at connected universities.”
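To make the genre concrete, here is a hypothetical sketch of what such a page might look like: a handful of structural tags, default browser styling, no CSS or images, a tilde in the URL. This is my own illustrative reconstruction, not one of the actual pages Olia catalogs, and all names in it are invented.

```html
<!-- A hypothetical "Prof. Dr."-style page, circa 1993: pure markup,
     all visual rendering left to the viewer's browser settings. -->
<html>
<head>
<title>Prof. Dr. Example: Home Page</title>
</head>
<body>
<h1>Prof. Dr. Example</h1>
<p>Welcome to my home page at
<a href="http://www.example.edu/~example/">http://www.example.edu/~example/</a>.</p>
<hr>
<p>Research interests, course notes, and links to colleagues.</p>
</body>
</html>
```

Four or five principal tags really do carry the whole page, which is exactly the point: the browser, not the author, decides how it looks.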

This is all gold, and there are innumerable examples of web pages from this era in her post. What’s more, she then moves on to the vernacular web, which she dates at roughly 1995—the moment the web exploded well beyond connected universities. I really appreciate this broader conceptual frame for the early web spaces at universities as a frame for the user as both developer and designer. And I don’t necessarily take this to mean professional programmers and architects (even though they are Herr Doktors 🙂 ), as much as a community that is engaged and experimenting with form and connection through the tools that make it possible.

This brings us back, then, to a user innovation toolkit that enables and empowers people, which allows me to start making the link to the trailing edge of innovation that is the web. Olia surfaces so much of the design and conceptual thinking behind a web that returns us to some of its original magic as a web of links and people rather than third-party profiles and services. That said, I could easily fall into the dangerous trap of glorifying a moment wherein the potential of the read-write web was only available to the elite few. What has changed over the last two decades is access to these robust toolkits for innovation, which universities have patently ignored even as the need for a deep, critical web literacy has never been greater.

This entry was posted in Domain of One's Own.

7 Responses to When User was Equal to Developer

  1. Yeah, so this is the balance. Not everybody is a writer, or a coder. Expanding access is good, and it means this stuff goes away to some extent.

    What’s the problem we are trying to address? It’s not to get out of corporate software; that’s a cause of a problem. It’s not to get hand-coded pages back. If User = Developer and everyone is on the Internet, then design is commodified and there’s a new design bar. So User = Developer and global access are almost incompatible *by definition*.

    What is the problem? You are getting at the aesthetics of the problem — the way in which various set-ups bounce off your inner frequencies. Like a metal detector on the beach, that emotional vibration tells you something is there. But what is it exactly?

    Culture is culture. It changes. Cry me a river, etc. But what is the *capacity* we’ve lost?

    I’m tempted to say that it is a lack of hacker-ness. But the evidence can go either way on that. The early internet, as Alan observes below, was populated by hackers, because hackers were the ones suited to it. That doesn’t mean it made them hackers. Necessarily.

    So actually, forget “lost”. What is the socio-technical capacity we need? In a world that cannot be 100% hackers?

    Here’s what I see. I see a world full of people who are very efficiently trading cookie recipes and engine rebuilding tips on the web. Then I see a business world and an academic world making little to no use of these things. We are more efficient at getting the word out about how to keep muffins from drying out than how to cure cancer. By an order of magnitude.

    I have a feeling if the world was populated by Prof. Dr.’s that wouldn’t be the case. But Prof. Dr.’s are limited. So this is my pitch: we taught the world how to work together once before. We built this darn thing. But we’ve hit issues. There’s a way in which the institution has rejected the revolution it birthed.

    Why? And how do we fix it? How do we teach a generation of people who can help fix it? As Brian Lamb once said so beautifully, how do we stop asking how the web can fix education and start asking how education can fix the web?

  2. Tom says:

    While I still like you, a bunch of her ideas seem misguided to me.

    I learned how to write HTML and made web pages in HTML in 2000/2001 in exactly that way. Mozilla Composer and Notepad were my editors and I would download whole webpages and mess with stuff to figure things out (not too different from what I do now). I wrote “by hand” and I do now when I feel like it. I say that because I think trying to define some point where innocence was lost is somewhat misguided. There is no original sin here. There are rarely any halcyon days of yore. I don’t think universities ever taught the world to work together. A relatively tiny group of odd people made it possible for the world to work together and most of the world couldn’t and most of the portion that could, didn’t.

    Primitivity tells us the story of the browser being not only a browser, but also an editor. Every user of the early web was a producer of web content.

    So if I use WordPress as an editor, is that more or less “of the web”?

    The result was an extremely fast growing web. There were not many options, this is why we got many pages.

    Is that on a per user basis? I’m pretty sure we get a lot more pages per unit of time now. Even per user, it’s a confused measure given that the people on the web in 1995 are going to be an odd group. My dad was on there in the very early days. He is very odd.

    For me, these things are symptomatic. Universities have decided they don’t like tilde spaces because they are blank canvases. Blank canvases don’t mesh well with brand identity guidelines and “managing the message”. They don’t convey a consistent measure of “professionalism”. IT departments’ main measures of success tend to be uptime and lack of support tickets. That means consistency and ease of support. If we involve the legal department this comment might never end. Learning has been vacuum packaged, branded, commodified up and MOOC’d out through a Coursera/Gates Foundation brand identity partnership.

    At the same time, so be it. Grow in the cracks. Erode the edges. Play in the margins. Fiddle while Rome does whatever Rome does.

  3. Chris L says:

    I have no fully-formed thoughts, but I find the last section of Lialina’s post/paper, where she talks about “zombie links”, the most interesting because it implies questions of the utility of creative constraints and friction in authoring. CMSs introduced zombie links on a wide scale because it’s easier to leave them than to code exceptions to remove them. Without the CMS, creating each link takes effort, so why introduce such redundancy? Yet all of us who have hand-coded sites (not just pages) know how laborious it is to retrofit those valuable connections across multiple pages every time a new page is added, much less accounting for tags or categories and the like.

    Tangentially, this points at the worst aspect of hypertext and the modern web: one-way links. The common link is literally the dumbest and shallowest kind of link possible and we (and our systems) spend an inordinate amount of time dealing with this unnecessary weakness. That’s not what Tim Berners-Lee (or Ted Nelson) had in mind in the beginning…but the humble link, as powerful as it is, never evolved while all the mechanisms needed to work around them, rather than fix them, did. They’re the neanderthals of the web.

    • Reverend says:

      I’m fascinated by this idea of the link being the neanderthal of the web. It’s great because it challenges some of my basic assumptions about the link still being the sinews that hold the web together. I have said this on several occasions over the last few years, and I’m struck by your idea here, and it resonates. In fact, I want more. What do you mean the link is shallow? Does the trackback part make it more robust? Building on stuff like that? Making the link more intelligent with embedded data? It shows how elemental I am, because I naively thought the link was the basic currency of the web, and everything would emerge from it. The link as fundamental not vestigial. I want more.

      • Chris L says:

        Mike hits the nail on the head below in two ways:

        1. I’m not saying links aren’t the sinew of the web. They are. They are what make the web. But they aren’t rich either. Compared to the possibilities of two-way links (and not just that: Ted Nelson, at least, had ideas about essentially unbreakable links and “connection origins” that supported relationships), they are, I think literally, the simplest possible workable technology. That simplicity is their great strength and weakness.

        2. I don’t think we really *do* have these systems in blogs and wikis right now. We have linkbacks and trackbacks and the like, which are essentially the automation of two one-way links. That’s not the same thing. “What links here,” is similarly problematic because there isn’t actually a *forged* relationship between the two documents. But even these *are* abused and problematic because, again, the entire ecosystem has been built around these stateless, information-poor one-way links. For these ideas to have a chance of working demands a fundamental re-engineering of the web and that isn’t going to happen. So we make do with these pitiful things, like starving people scarfing up Wonder bread for lack of anything–and lack of *knowing* anything–better.

        These ideas are what make Ted Nelson crazy and brilliant and in some ways a little sad. It has to hurt to have this vision of a rich buffet available to all and see all around him only an automat, with smart people doing their best to craft food that will fit into the cubbies for those anonymous consumers who reach in from the other side.

  4. Well, I could be wrong, but I think Bush’s original hypertext vision was a two-way link. Certainly early hypertext experiments were two-way. The DokuWiki system has a “what links here” page and it’s wonderful.

    Here’s a great example of what two-way links could do. Say I have a resource — a worksheet for students. I put it up CC-BY. Now 100 people take that worksheet, rework it, but keep the attribution link. In a two-way link world we can see all these variations on the original work by looking at our “linked from” page. In Bush’s Memex vision, we can look at a description of a historical event and understand all the pages that link to it, which is just as important conceptually as all the pages it links out to.

    But here’s where things fall apart a bit. Because we do have these systems in blogs and wikis, and they are either abused (trackbacks) or unused (wikis). So what do we make of that? I’m not sure. Spammers introduce so much noise into the system it becomes useless on the open web. And on wikis people still drill down rather than up for the most part.

    • Jim Groom says:

      Fascinating, the trackback is still a crucial tool for me on my blog, and it gets at the idea of a two way link. The potential layering of links through a wiki, or even an assignment on ds106 (which I recognize is not a link), is where community is born out of these links in my mind. The idea of digging into links beyond this idea of one-to-one connections that are unilateral makes all kinds of sense, but it’s crazy that in 2014 the idea of a trackback is an absolutely foreign concept.
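The “linked from” idea running through this thread can be sketched as a tiny index that records both directions of every link, so any page can answer “what links here?” without trackbacks or spam-prone automation. This is a toy illustration of the concept under discussion, not an existing system; all names are hypothetical.

```python
# A toy sketch of a two-way link index: declaring one link
# updates both the outbound and the inbound view at once.
from collections import defaultdict

class LinkIndex:
    def __init__(self):
        self.links_out = defaultdict(set)  # page -> pages it links to
        self.links_in = defaultdict(set)   # page -> pages that link to it

    def add_link(self, source, target):
        # One declaration, two directions: the link is visible
        # from both ends, unlike the web's one-way <a href>.
        self.links_out[source].add(target)
        self.links_in[target].add(source)

    def linked_from(self, page):
        # The "linked from" view: every page that points here.
        return sorted(self.links_in[page])

# 100 reworked CC-BY worksheets, each keeping its attribution link:
index = LinkIndex()
for i in range(100):
    index.add_link(f"remix-{i}", "original-worksheet")

print(len(index.linked_from("original-worksheet")))  # -> 100
```

The design choice is the point: because the index is shared, the original worksheet’s author sees every remix for free, which is exactly what the one-way link (and the trackback bolted on to compensate for it) cannot guarantee.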
