In Belgium, we send letters. Every year, though, the market for postal letters shrinks by about 5%. Post offices close. Mail boxes disappear. So when we need to send a letter, we might not know exactly where to post it. Throughout Belgium, 13,049 red letter boxes in various designs stand in the streets; yet they are not that easy to spot, and their mail is collected at different times of day, anywhere from 8:00 to 19:00.
bpost, the Belgian mail company, has a dedicated page on its website for finding the addresses of mail boxes. To use it, you need to know the exact address of your location, or enter your postal code, which is slightly easier (a Belgian postal code covers a large area). But from a mobile phone, you can’t reach that part of the form, because horizontal scrolling is disabled. Worse, once you’re in the app, it’s not exactly easy to navigate: you might find yourself in the Pacific Ocean when clicking amiss.
For Hack Belgium, my teammate Luc Rocher and I decided to tackle this problem. Hack Belgium is the first countrywide hackathon in Belgium. I was happy to run into Luc, whom I first met at a guided tour of the Internet Archive in San Francisco a few months ago. If you look at the website of the hackathon, you’ll notice that the subjects tackled are rather huge. There is a whiff of technological solutionism: smart technologists and entrepreneurs will come up with tools that fix societal ills. As the days progressed, we also noticed it is more about coming up with plans than about doing a hack. It’s fun to think about how to create a product and monetise it. But it’s even more fun to build something right now.
The first thing we need to build such an app is information about the red boxes: where they are, and when their mail is collected. Where do we get this data? The Open Knowledge Foundation has a page listing the availability of these kinds of data for a range of countries. In Belgium, various public organisations are involved in open data initiatives. But bpost, which is an ‘autonomous state-owned company’, is not necessarily involved in them.
On Twitter, you can find a request from 2012 by Bart Rousseau, asking if there was a chance to open up this data. While bpost agreed this was their intention for the future, “we unfortunately can not offer this information on a short term”. Almost five years have passed since; it’s clear that we need to try another approach to get access to this data set.
If we navigate to the bpost webpage and enter, for example, the postal code 1000, we can see that the page is actually a wrapper around another webpage. If we use the browser’s ‘network inspection’ function, we find out that this inner page loads its data from a web service. Here we get the actual information we need:
<marker address1_fr="GALERIES DU VINGT-CINQ AOÛT 8" address2_fr="1000 BRUXELLES" address1_nl="25 AUGUSTUS GALERIJEN 8" address2_nl="1000 BRUSSEL" id="10000043" lat="50.8427501" lng="4.3515499" status="In Service" week="19:00" sat="10:00" />
By modifying that last URL, appending each of the Belgian postal codes in turn, we managed to download the locations of all 13,049 mail boxes, along with their collection hours during weekdays and weekends.
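The scraping step can be sketched in a few lines of Python. This is an illustration, not the exact code we ran: the service URL below is a made-up stand-in (the real one was found through the network inspector), and the dictionary keys simply mirror the attributes of the marker element above.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint, for illustration only; the real URL was
# discovered via the browser's network inspection tool.
SERVICE_URL = "https://www.bpost.be/poi/search?postalCode={}"

def parse_markers(xml_text):
    """Extract one record per <marker /> element in the XML response."""
    root = ET.fromstring(xml_text)
    boxes = []
    for marker in root.iter("marker"):
        boxes.append({
            "id": marker.get("id"),
            "address": marker.get("address1_fr"),
            "city": marker.get("address2_fr"),
            "lat": float(marker.get("lat")),
            "lng": float(marker.get("lng")),
            "week": marker.get("week"),  # weekday collection time
            "sat": marker.get("sat"),    # Saturday collection time
        })
    return boxes

def fetch_postal_code(code):
    """Download and parse the mail boxes for one postal code."""
    with urllib.request.urlopen(SERVICE_URL.format(code)) as response:
        return parse_markers(response.read())

# Belgian postal codes run from 1000 to 9999; looping over all of them
# and de-duplicating by id yields the full set of boxes:
# all_boxes = {b["id"]: b for code in range(1000, 10000)
#              for b in fetch_postal_code(code)}
```

The parsing works on responses shaped like the marker shown above, wrapped in any root element.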
We developed a simple application, whose code is already available on GitHub, using the data set in combination with the Mapbox GL library. Everyone travelling across Belgium can now easily find not only the closest mail box, but also quickly see which ones will still be emptied before the end of the day.
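The two queries the app answers can be sketched as follows. This is a minimal illustration under our own assumptions: each record is a dict with lat and lng floats and a weekday collection time week formatted as ‘HH:MM’.

```python
import math
from datetime import datetime

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometres."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlng = to_rad(lng2 - lng1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2))
         * math.sin(dlng / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def boxes_still_to_be_emptied(boxes, lat, lng, now):
    """Sort mail boxes by distance from (lat, lng), keeping only those
    whose weekday collection time has not yet passed today.
    Zero-padded 'HH:MM' strings compare correctly as plain strings."""
    current = now.strftime("%H:%M")
    still_open = [b for b in boxes if b["week"] > current]
    return sorted(still_open,
                  key=lambda b: haversine_km(lat, lng, b["lat"], b["lng"]))

# e.g. boxes_still_to_be_emptied(all_boxes, 50.8427, 4.3515, datetime.now())
```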
Want to use the data yourself? For our app, we converted it to the GeoJSON standard, which is easy to build upon.
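The conversion itself is mechanical. A sketch, with property names of our own choosing; note the one real trap, which is that GeoJSON orders coordinates as longitude first, latitude second:

```python
import json

def to_geojson(boxes):
    """Wrap mail box records in a GeoJSON FeatureCollection."""
    features = [{
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON coordinate order is [longitude, latitude]
            "coordinates": [box["lng"], box["lat"]],
        },
        "properties": {
            "id": box["id"],
            "address": box["address"],
            "week": box["week"],
            "sat": box["sat"],
        },
    } for box in boxes]
    return {"type": "FeatureCollection", "features": features}

# json.dumps(to_geojson(boxes)) yields a document that map libraries
# like Mapbox GL can load directly as a source.
```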
Data like the position of a postbox and its collection hours are not protected by copyright as creative works. But a collection of such data holds database rights. By downloading all this data from bpost and redistributing it, we are probably infringing on their intellectual property. We hope that bpost will make good on their promise and release this data under an open license! By publishing this application, we want to show how every citizen can profit from the opening of public information.
Visit red-boxes.be, and let us know what you think in the comment section! Want to thank us? Use our links to subscribe to MUBI for an exquisite pick of independent streaming cinema, and eFarmz for local Belgian food delivered to your door.
Hacker culture cultivates a fear of WYSIWYG editing, says habitus, linking this fear to the cultural history of Unix. I have a simpler explanation: control issues. WYSIWYG is feared because one is not directly manipulating the underlying data structure, and thus has less control. This fear is justified. Anyone who has used Microsoft Word knows the scenario: after applying several layers of formatting, the document’s behaviour seems to become erratic: remove a carriage return, and the whole layout of a subsequent paragraph might break.
If you happen to share a studio, an office or a life with someone who used personal computers in the 1990s, and you are at their side when this happens, there is a significant chance this person will express a desire for a certain feature of a previous generation of word processors: I am talking about WordPerfect, and its ‘reveal codes’ option.
WordPerfect’s ‘reveal codes’ option presents the user with a representation of the document’s data structure, showing exactly which formatting options are applied, where they are applied, and in which order. In the Netherlands, this option is known as the ‘onderwater scherm’, or underwater screen, as if it gives you a better view of what is below the surface of your result.
In what follows, some other underwater screens. Note that the setup provided by visual programming solutions like Processing might resemble the underwater screen, but it is not actually the same thing. With Processing, one sees both a textual code representation and a visual representation of the result; but it is only the code that is editable. The unique character of the underwater screen is that it works both ways: the user has access both to an interface more closely resembling the visual result, and to an interface more closely resembling the underlying data. An edit in one affects the other.
WordPerfect is now owned by the Corel Corporation, which still sells the Windows version. The screenshot above was produced with WPMacApp (WPMac Appliance): ‘a system for running WordPerfect for the Macintosh on Intel-based Macs’, which is freely downloadable from the internet. WPMacApp is produced by Edward Mendelson, Columbia professor of English and Comparative Literature. Otherwise known as the literary executor of W.H. Auden, Mendelson sports a ‘secret life’ as a tech writer and programmer. Because of his work, running this ancient version of WordPerfect is extremely easy. Thanks, Professor Mendelson!
The death of Dreamweaver as a professional tool is due in part to a change in the practice of web design. Dreamweaver is a tool to create webpages, not a tool to create dynamic systems. At the time it first becomes popular, many small sites are created as a series of HTML pages, with occasional updates. The person designing the site might then charge for each update. With the arrival of accessible Content Management Systems such as WordPress, this approach becomes less popular, as a CMS allows clients to update pages themselves. Yet part of Dreamweaver’s demise might also be due to the cultural shift known as the ‘Web Standards Revolution’, in which a new generation of web developers and designers starts to clamour for clean code. Dreamweaver, with its WYSIWYG editor, is suspected of creating ‘ugly code’. Dreamweaver, however, features both a visual editing interface and a code editor, and the effects of a visual edit upon the code are easily verified. Arguably, some of the accessibility (to designers) and the didactic quality of Dreamweaver’s approach is lost as web design moves to favour ‘hand-written HTML’.
Dreamweaver is still available commercially: known as Adobe Dreamweaver ever since Adobe bought Macromedia in 2006, it is now part of their Creative Cloud offering. Free and open alternatives built with the same philosophy are KompoZer and BlueGriffon.
Inkscape is an extremely interesting graphic design program. It is mostly known as a free and open source alternative to Adobe Illustrator, but the comparison does not really do Inkscape justice. Whereas Illustrator’s development is tightly wed to PostScript, Inkscape as a program was born for the screen. Its native file format is SVG, which is the standard file format for vector graphics on the web.
Inkscape’s XML view is a bona fide underwater screen. We get to see the Document Object Model, the arborescence of XML nodes that makes up the vector drawing. We can edit values directly in the XML view. This is very similar to the functionality allowed by the ‘inspect element’ feature in modern web browsers. Inkscape’s implementation is rather bare-bones, though: I am sure this could become even more useful as a tool.
Inkscape is freely available from the project’s website and easy to install under Linux, Windows and OS X. Users of a recent version of OS X will also need to install XQuartz.
Aloha is an editor for HTML that is built on HTML5’s built-in WYSIWYG support. In recent versions of HTML, any element can be changed from a static to an editable element, while keeping its visual appearance. You can actually edit this very paragraph, because I added the ‘ContentEditable’ attribute.
ContentEditable is not used all that often; consequently there are still quite some implementation differences between the browsers. There are only two editor widgets based on ContentEditable that I know of: Aloha and hallo.js. Aloha is badly documented and not easy to wrap your head around as it is quite a lot of code. Hallo.js sets out to be more lightweight, but for now is a bit too light: it lacks basic features like inserting links and images.
Because these editors are built on HTML’s native functionality, we can use all of the debugging tools the browser already ships with. Aloha promises us that its output is so good, there is no need to dive into the code. That is not really true. Just like with Microsoft Word, there are moments where one would like to correct the automatically applied formatting commands. It is at this point that we can use the browser’s functionality: we right-click on the text and choose ‘inspect element’. Once we open up the element inspection, we see an underwater view: the Document Object Model the browser has constructed, with all the nodes from the HTML with their CSS styling applied. As we edit these nodes, the visual page will change. We can even copy in HTML code from other sources.
Aloha Editor is a JavaScript project that can be integrated into any website. Doing so does require some programming skills, as the setup is slightly more involved than for most (jQuery) plugins. Prepackaged plugins exist for several publishing platforms, such as WordPress.
In 2004 I encounter the website of the Amsterdam magazine/web platform/art organisation Mediamatic. The site is remarkable in several ways. Firstly, it shows off the potential of designing with native web technologies. Its layout is a re-appraisal of one of the core fonts available to almost all surfers: Georgia, and its Italic. The striking text-heavy layout uses this typeface for body text, in unconventionally large headings and lead-ins. Secondly, the site opens up a whole new editing experience. In edit mode, the page looks essentially the same as the public-facing site, and as I change the title it remains all grand and Italic. I had been used to content management systems presenting me with sad unstyled form fields in a default browser style, decoupling the input of text completely from the final layout. That one can get away from the default browser style, and edit in the same style as the site itself, is nothing short of a revelation to me, even if desktop software had been showing for quite some time that this is possible.
In 2004, there are more websites with an editing experience like Mediamatic’s: Flickr, for example, makes it possible to change the title and metadata of a photo right on the photo page itself, if one is logged in. Yet flash forward to 2014, and most Content Management Systems still offer us the same inhospitable form fields that look nothing like the page they will produce.
If we look at the experience of writing on WordPress, the most used blogging platform, the first thing one notes is that the place where one edits the posts is quite distinct from the place that is visited by the reader: you are in the ‘back end’. There is some visual resemblance between the editing interface and the article: headings are bigger than body text, italics become italic. But the font does not necessarily correspond to the resulting post, nor do the line width, line height and so forth. Some other elements are not visual at all: to embed YouTube videos and the like, one uses ‘shortcodes’.
Technologically, what was possible in 2004 should still be possible now—the web platform has since then only advanced, offering new functionality like contentEditable which allows one to easily make a part of a webpage editable, without much further scripting. So where are the content management systems that take advantage of these technologies? To answer this question, we will have to look at how web technologies come about.
An editing interface that visually resembles its visual result is known as WYSIWYG: What You See Is What You Get. The term dates from the introduction of the graphical user interface. The Apple Macintosh offers the first mainstream WYSIWYG programs, and the Windows 3.1 and especially Windows 95 operating systems make this approach the dominant one.
A word processing program like Microsoft Word is a prototypical WYSIWYG interface: we edit in an interface that resembles as closely as possible the result that comes out of the printer. Most graphic designers also work in WYSIWYG programs: this is the canvas-based paradigm of programs like Illustrator, InDesign, Photoshop, GIMP, Scribus and Inkscape.
But being the dominant paradigm for user-interfaces, especially in document creation and graphic design, does not mean the WYSIWYG legacy is the only paradigm in use. Programmer and author Michael Lopp, also known as Rands, tries to convince us that ‘nerds’ use a computer in a different way. From his self-help guide for the nerd’s significant other, The Nerd Handbook:
Whereas everyone else is traipsing around picking dazzling fonts to describe their world, your nerd has carefully selected a monospace typeface, which he avidly uses to manipulate the world deftly via a command line interface while the rest fumble around with a mouse.
Rands introduces a hypothetical nerd who uses a text-based terminal interface to interact with her computer. He mentions the ‘command line’, the kind of computer interface that sees one typing in commands, and which is introduced in I like tight pants and absolute beginners: Unix for Art students.
Yet who exactly is it that likes to use their computer in such a way? ‘Nerd’ is a terribly imprecise term: one can be a nerd at many things, and it is mainly a derogatory term. But it seems safe to suggest that those using the command line have some familiarity with text as an interface, and with using programming code. People who are steeped in, or attracted by, the practice of programming.
Since the desktop publishing revolution of the 1990s, graphic designers have been able to implement their own print designs without the intervention of engineers. In most cases this is not true for the web: the implementation of websites is ultimately done by programmers. These programmers often have an important say in the technology that is used to create a website. It is only normal that the programmers’ values and preferences are reflected in these choices.
This effect is reinforced because the programming community largely owns its own means of production. In contrast with print design, the programming technologies used in creating web sites (the programming languages, the libraries, the content management systems) are almost always Free Software and/or Open Source. Even commercial Content Management Systems are often built upon existing Open Source components. There are many ways in which this is both inspiring and practical. Yet if this engagement with a collectively owned and community-driven set of tools is commendable, it has one important downside: the values of the community directly impact the character of the tools available.
Programming is not just an activity, it is embedded in a culture. All the meta-discourse surrounding programming contributes to this culture. A particularly influential strand of computing meta-discourse is what can be called ‘Hacker Culture’. If I were to characterise this culture, I would do so by sketching two highly visible programmers who are quite different in their practice, yet share a set of common cultural references in which the concept of a ‘hacker’ is important.
On the one hand we can look at Richard Stallman, a founder of the Free Software Movement, tireless activist for ‘Software Freedom’. Having coded essential elements of what was to become GNU/Linux, he is just as well known for his foundational texts such as the GPL license. The concept of a hacker is important to him, as evidenced in his article ‘On Hacking’.
On the other hand there is someone like Paul Graham, a Silicon Valley millionaire and venture capitalist. Influential in ‘start-up’ culture, Graham has turned his own experience into something of a template for start-ups to follow: start with a small group of twenty-something programmers/entrepreneurs and create a company that tries to grow as quickly as possible, attract funding, and then either fail, be bought, or in extremely rare cases become a large publicly traded company. His vision of the start-up is both codified in writing and brought into practice at the ‘incubator’ Y Combinator.
As different as Graham’s trajectory might be from Stallman’s, he too has written an article on what it means to be a hacker. The popular discussion forum he runs is called Hacker News. In fact, Graham refers to the people who found start-ups as hackers.
That Stallman and Graham share a certain culture is shown by the fact that their conceptions of what a hacker is are far removed from the everyday usage of the word. While to most people a hacker means someone who breaks into computer systems, Stallman and Graham agree that the true sense of ‘hacker’ is quite different.
Thus, contesting the mainstream concept of the hacker is itself important in the subculture: Douglas Thomas already describes this mechanism in his thoroughly readable introduction Hacker Culture (2002). A detailed anthropological analysis of a slice of Hacker Culture is performed in Gabriella Coleman’s Coding Freedom: The Ethics and Aesthetics of Hacking (2012), though it seems to focus on Free Software developers of the most idealistic persuasion, and seems less interested in the major role Silicon Valley dollars play in fuelling Hacker Culture. For this tension too is at the heart of hacker culture: even if Hacker Culture is a place to push new conceptions of technology, ownership and collaboration, the Hacker revolution is financed by working ‘for the man’. The Hacker Culture blossoming at universities in the 1960s was possible only through liberal funding from the Department of Defense; today, many leading Free and Open Source Software developers work at Google.
If we want to know about Hacker Culture’s attitude towards user interfaces, we can start by looking for anecdotal evidence. In an interview about his computing habits, arch-hacker Stallman describes a setup that quite closely resembles that of the hypothetical GUI-eschewing ‘nerd’ from Rands’ article:
I spend most of my time using Emacs [A text-editor]. I run it on a text console [A terminal], so that I don’t have to worry about accidentally touching the mouse-pad and moving the pointer, which would be a nuisance. I read and send mail with Emacs (mail is what I do most of the time).
I switch to the X console [A graphical user interface] when I need to do something graphical, such as look at an image or a PDF file.
Richard Stallman does not even use a mouse. This might seem an outlier position, yet he is not the only hacker to take it. Otherwise, there would be no audience for the open source window manager called ‘ratpoison’. This software allows one to control the computer without any use of the mouse, metaphorically killing it.
The mouse is invented in the early sixties by Douglas Engelbart. It is incorporated into the Xerox Star system that goes on to inspire the Macintosh computer. Steve Jobs commissions Dean Hovey to come up with a design that is cheaper to produce, simpler and more reliable than Xerox’s version. After the mouse is introduced with the Macintosh computer in 1984 it quickly spreads to PCs, and it becomes indispensable to everyday users once the Windows OS becomes mainstream in the 1990s.
The mouse is part of the paradigm of these graphical user interfaces, just like the WYSIWYG interaction model. The ascendance of these interaction models is linked to (and has probably enabled) personal computers becoming ubiquitous in the 1990s. It is not this tradition that Stallman and like-minded spirits inscribe themselves in. They prefer to refer to the roots of the Hacker paradigm of computing, which stretch back further: back when computers were not yet personal, and when they ran an operating system called ‘Unix’.
The Unix operating system plays a particular role in the system of cultural values that makes up programming culture. Developed in the 1970s at AT&T, it becomes the dominant operating system of the mainframe era of computing. In this setup, one large computer runs the main software, and various users log in to this central computer from their own terminals. The terminal is an interface that allows one to send commands and view the results, the actual computation being performed on the mainframe. Variants of Unix become widely used in the world of the enterprise and in academia.
The very first interface to the mainframe computers is the teletype: an electronic typewriter that allows one to type commands to the computer, and that subsequently prints the response. As teletypes get replaced by computer terminals with CRT displays, interfaces often stay decidedly minimal. It is much cheaper to use text characters to create interfaces than to build full-blown graphical user interfaces, especially as the state of the interface has to be sent over the wire from the mainframe to the terminal. Everyone who has worked in a large organisation in the 1980s or 1990s will remember the keyboard-driven user interfaces of the time.
This vision of computing is profoundly disrupted by the success of the personal computer. Bill Gates’ vision of ‘a personal computer in each home’ becomes a reality in the 1990s. A personal computer is self-sufficient, storing its data on its own hard drive, performing its own calculations. The PC is not hindered by having to make continuous round trips to the mainframe, and as processing speed increases PCs replace text-based input with sophisticated graphical user interfaces. During the dominance of the Windows operating system, Unix seems to become a relic for most mainstream computer users: after conquering the home, Windows computers conquer the workplace as well. In 1993’s Jurassic Park, when the computer-savvy girl needs to circumvent computer security to restore the power, she is surprised to find out that it’s a Unix system.
The tables turn when in 2000 Apple’s new OS X operating system uses Unix. At the same time, silently but surely, the Linux operating system has been building mind share. A cornerstone of the movement for Free and Open Source software, Linux is a Unix clone that is free for everyone to use, distribute, study and modify. Even if both these Unixes are built on the same technology as the Unix that powers mainframe computers, these newer versions are used in a completely different context. Linux and OS X are designed to run on personal computers, and both come with an (optional) Graphical User Interface, making them accessible to users who have grown up on Windows and Mac OS. All of a sudden, a new generation gets to appropriate Unix. A generation that has never had to actually use a Unix system at work.
Alan Kay claims that the culture of programming is forgetful. It is true that a new generation of programmers completely forgets the rejection of Unix by consumers just years before, let alone wonders about the reasons for its demise. Yet the cultural knowledge embodied in Unix is now part of a community. The way in which Unix is used today might be completely different from the 1970s, but Unix itself and the values it embodies have become something that unites different generations self-identifying with ‘hacker culture’.
The cultural depth of Unix far exceeds naming conventions. Unix has been described as “our Gilgamesh epic” (Stephenson 1999), and its status is that of a living, adored, and complex artifact. Its epic nature is an outgrowth of its morphing flavors, always under development, that nevertheless adhere to a set of well-articulated standards and protocols: flexibility, design simplicity, clean interfaces, openness, communicability, transparency, and efficiency (Gancarz 1995; Stephenson 1999). “Unix is known, loved, understood by so many hackers,” explains sci-fi writer Neal Stephenson (1999, 69), also a fan, “that it can be re-created from scratch whenever someone needs it.”
Gabriella Coleman, Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ: Princeton University Press, 2012, p. 51.
If there is a lingua franca in Unix, it is ‘plain text’. Unix originated in the epoch when users would type in commands on a teletype machine, and typing commands is still considered an essential part of using Unix-like systems today. Many of the core Unix programs are launched with text commands, and their output often takes the form of text. This is as true for classic Unix programs as for programs written today. Unix programs are constructed so that the output of one program can be fed into the input of another: this ability to chain commands in ‘pipes’ depends on the fact that all these programs share the same format for input and output, which is streams of text.
The most central program in the life of a practitioner of Hacker Culture is the text editor. Contrary to a program like Word, a text editor shows the raw text of a file, including any formatting commands. This is still the main paradigm for how programmers work on a project: as a bunch of text files organised in folders. This is not inherent to programming (there have been programming environments that store code in a database, or in binary files), but it has proved the most lasting and popular way to do so. Unix’s tools are built around and suited for plain text files, so this approach also contributes to the ongoing popularity of Unix, and vice versa.
While programming, one has to learn how to create a mental model of the object programmed. As the programmer only sees the code, she or he has to imagine the final result while editing, then compile and run the project to see if the projection was correct. This feedback loop is much slower than the feedback loop as we know it from WYSIWYG programs. Maybe it is the experience of slow feedback that gives programmers more tolerance for abstract interfaces than those of us outside this culture have.
While WYSIWYG has a shorter feedback loop, it also adds additional complexity. Anyone who has used Microsoft Word knows the scenario: after applying several layers of formatting, the document’s behaviour seems to become erratic: remove a carriage return, and the whole layout of a subsequent paragraph might break. This is because the underlying structure of the rich text document (on the web, this is HTML) remains opaque to the user. With increased ease of use come a number of edge cases and a loss of control over the underlying structure.
This is a trade-off someone steeped in Hacker Culture might not be willing to make. She or he would rather have an understandable, formal system by which the HTML code is produced, even if that means editing in an environment not resembling the final web page at all, because they already know how to work this way from their experience in programming.
This is shown by the popularity of a workflow and type of tool known as the ‘static site generator’. In this case, the workflow for creating a website is to have a series of plain text files. Some of them represent templates, others content. After a change, the programmer runs the ‘static site generator’ and all the content is pushed through the templates to produce a series of HTML files. The content itself is often written in a code language like ‘Markdown’, which allows one to add some formatting information through typewriter-like conventions: *stars* becomes stars.
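The mechanics can be illustrated with a toy generator in Python. This is a deliberately minimal sketch: a single substitution rule stands in for a full Markdown implementation, and the {{ … }} template syntax is made up for the example.

```python
import re

def markdown_emphasis(text):
    """Convert *stars* typewriter emphasis into HTML <em> tags,
    one rule standing in for a full Markdown implementation."""
    return re.sub(r"\*([^*]+)\*", r"<em>\1</em>", text)

def generate_page(template, title, body):
    """Push one piece of content through a template, as a static
    site generator does for every source file in the project."""
    html = markdown_emphasis(body)
    return template.replace("{{ title }}", title).replace("{{ content }}", html)
```

A real generator does the same thing in a loop: read every content file, convert its markup, fill a template, write an HTML file.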
Because programmers are gatekeepers to web technology, and because programmers are influenced by Hacker Culture, the biases of Hacker Culture have an impact outside of this subculture. The world of programming is responsible for its own tools, and contemporary websites are built by programmers upon Open Source libraries developed by other programmers. To developers shaped by the culture of Unix and plain text, and by the practice of programming, WYSIWYG interfaces are simply not interesting. Following the mantra to ‘scratch one’s own itch’, developers work on the interfaces that interest them. There are scores of the aforementioned ‘static site generators’: 242 of them, at last count.
Comparatively, the offering of WYSIWYG libraries is meagre. Even if HTML5’s ContentEditable property has been around for ages, it is not used all that often; consequently there are still quite some implementation differences between browsers. The lack of interest in WYSIWYG editors means the interfaces are going to be comparatively flaky, which in turn confirms the suspicion of programmers looking for an editing solution that WYSIWYG is not viable. There are only two editor widgets based on ContentEditable that I know of: Aloha and hallo.js. Aloha is badly documented and not easy to wrap your head around, as it is quite a lot of code. Hallo.js sets out to be more lightweight, but for now is a bit too light: it lacks basic features like inserting links and images.
The problem with the culture of plain text is not plain text as a format. It is plain text as an interface. Michael Murtaugh has written a thoughtful piece on this in the context of the Institute of Network Cultures’ Independent Publishing Toolkit: Mark me up, mark me down!. Working with static site generators, it becomes clear they are envisioned as a one-way street: you change the source files, and the final (visual) result changes. There is no way in which a change in the generated page can be fed back into the source. Similarly, the Markdown format is designed to be input with a text editor, and then programmatically turned into HTML. Whereas HTML allows for multiple kinds of interfaces (either more visual or more text-oriented), a programmer-driven choice for Markdown forces the Unix love of editing plain text onto everybody.
If WYSIWYG were less of a taboo in Hacker Culture, we could also see interesting solutions that cross the code/WYSIWYG divide. A great, basic example is the ‘reveal codes’ function of WordPerfect, the most popular word processor before the ascendancy of Microsoft Word. When running into a formatting problem, using ‘reveal codes’ shows an alternative view of the document, highlighting the structure by which the formatting instructions have been applied, not unlike the ‘DOM inspector’ in today’s browsers.
More radical examples of interfaces that combine the immediacy of manipulating a canvas with the potential of code can be found in Desktop software. The 3D editing program Blender has a tight integration between a visual interface and a code interface. All the actions performed in the interface are logged in programming code, so that one can easily base scripts on actions performed in the GUI. Selecting an element will also show its position in the object model, for easy scripting access.
HTML is flexible enough that one can edit it with a text editor, but one can also create a graphical editor that works with HTML. Through the JavaScript language, a web interface has complete dynamic access to the page’s HTML elements. This makes it possible to imagine all kinds of interfaces that go beyond the paradigms we know from Microsoft Word on the one hand and code editors on the other. This potential comes at the expense of succinctness: to be flexible enough to work under multiple circumstances, HTML has to be quite verbose. Even if the HTML5 standard has already added some modifications to make it sparser, for adepts of Hacker Culture it is not succinct enough: hence solutions like Markdown. However, building a workflow around such a sparse plain-text format negates the fact that different people might want to interact with the content in different ways. The interface that is appropriate for a writer might not be the interface that is appropriate for an editor, or for a designer.
The interfaces we use on the web are strongly influenced by the values of the programmers that make them, who reject the mainstream WYSIWYG paradigm. Yet What you see is what you get is not going anywhere soon. It is what made the Desktop computer possible, and for tasks such as document production, it is the computing reality for millions of users. Rather than rejecting it, there is ample space to reinvent what WYSIWYG means, especially in the context of the web, and to find ways to combine it with the interface models that come from the traditions of Unix and Hacker Culture. Here’s hoping that a new generation of developers will be able to go beyond the fetish for plain text, and help invent exciting new ways of creating visual content.
Graphic design is a nostalgic field. Even in the art schools, the students want to make books and posters. Designing for the web has little prestige. I could say that I want students to design for the screen, and to actively engage with their digital tools, but I first need to know what it is that makes it so attractive to design for the printing press.
Books and posters nowadays start their lives on a computer in proprietary software. Many of my colleagues see the software as a neutral tool, subservient to their creativity. Therefore, the software can be used as-is. For me, software is a piece of culture, an embodiment of a certain way of thinking. The software partakes in the creation. A truly rich visual culture can only come about if designers manipulate, appropriate and subvert the software technology they use.
Hacking analogue technology is a physical affair. Cracking open software requires a different sort of interaction, with programming interfaces and computer files. It requires a new set of skills that takes time and enthusiasm to attain. It is this enthusiasm that is often lacking. In fact, my students often seem scared of digital technology.
So what are today's nascent designers scared of? The comparison with their attitude to printing technology shows it is not necessarily technology in general that designers are frightened of, nor is it geekiness. Designers actually take pride in the geeky details of their craft when they are related to the printing process, knowing about things like spot colours, paper stocks and binding methods. Is it a matter of differing cultures? Even though design is applied mathematics, most design majors study the humanities in high school. Code seems to belong to this other world, the world of the kids who choose mathematics. The other geeks.
If the divide is social, then a gentle introduction to the other culture, the culture of programming, itself embedded in the culture of science and mathematics, should form part of a contemporary design curriculum.
Another strategy: can we force the students to get their hands dirty with code? No more mockups. That’s an efficient way to introduce the nature of the digital. There is always the question of whether designers should learn to code. I think they should.
As a student, I came across a printing press that worked with movable type. I spent a day setting a simple poem. It’s dirty, precise, frustrating work. At the end of the day I printed my poem, and only after I had cleaned the press did I spot the spelling error.
As tedious as the process had been, this day taught me so much about the nature of printing technology. My understanding of my profession really deepened. I know why uppercase is called uppercase (uppercase letters were stored above the lowercase ones); why leading, the space between lines, is called leading (it was made of strips of lead). I have an understanding of how all of the classical book layout conventions are related to the process of setting a block of movable type.
Even though I never set anything in movable type again, my understanding of printing technology is much the richer for it. The same is true for code. Going through the tedious process of writing a computer program will change your understanding of the medium you work with all the time. Your dirty hands will forever influence any interaction you have with programmers.
Every page on the world wide web is constructed using a code language (more precisely a ‘markup’ language), called HTML. Like any coded language, HTML makes some things easier to express, and other things harder—as a tool, it comes with its own set of ‘affordances’.
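To make this concrete, here is a minimal sketch of what HTML markup looks like (the fragment is my own illustration, not taken from any specification): the tags describe the role of each piece of text, not its exact appearance.

```html
<!-- Tags mark structure: a heading, a paragraph, a hyperlink.
     How these look on screen is left to the browser. -->
<h1>A short history of HTML</h1>
<p>Every page on the web is written in
  <a href="http://www.w3.org/html/">HTML</a>.</p>
```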
The possibilities of HTML are codified in standards documents. Yet as with the grammars of natural languages, those writing HTML do not always follow the standard. Authors may well ignore parts of it, or use elements in ways that clash with their originally intended use. Idioms emerge through copying and sharing, and become part of the vocabulary web authors teach each other. These usage patterns are taken into account in subsequent revisions of the HTML standard, and are a factor in web browser vendors’ choices of which features to implement next.
Web browsers do not follow the standards to the letter either. There are parts of the HTML standard that are not actually implemented in any popular browser, so web authors have little incentive to use them. On the other hand, web browsers might implement support for idioms that are not part of any standard, but that are being used by authors.
In this way, HTML’s affordances are created by what standards prescribe, what browsers implement, and what people write—and together this forms what the web can be. There is a complex interrelationship between the different parties: sometimes their interests converge, and then they drift apart again. In this text we try to introduce in more detail some of the power struggles that underlie the continued development of HTML. We will look at HTML’s latest iteration, HTML5, and at the way companies like Google try to further their own agendas, sometimes under the guise of following the bottom-up inventions of web authors.
An interesting account of the first years of the development of HTML is to be found in Dave Raggett’s book ‘Raggett on HTML 4’, of which chapter 2, ‘A History of HTML’, is available online (Raggett, 1998). In 1989, Tim Berners-Lee comes up with a system to write pages that are linked together with hyperlinks, and in this way form a web. Berners-Lee constructs a language to write these pages, the HyperText Markup Language, and a protocol to transfer them: the HyperText Transfer Protocol. The transfer protocol uses the already existing infrastructure of the internet, which is being used by academics and the military to exchange information through channels such as e-mail.
The original implementation of the web is meant to exchange scientific information. The HTML language is based on the existing SGML standard, ‘extensively applied by the military, and the aerospace, technical reference, and industrial publishing industries.’ (Standard Generalized Markup Language, 2013) HTML, like SGML, is a markup language. In contrast with a page description language like PostScript or PDF, a markup language is supposed to ‘describe a document's structure and other attributes’ (Ibid), without prescribing the exact page that is produced by this markup.
The concept of a markup language has a number of great benefits. By leaving out page-setting instructions, pages are quicker to download. With HTML, it is not the author but the visitor’s browser that is ultimately responsible for the layout on the screen. Already in the 1990s, before smartphones and ‘responsive design’, people access the web with different kinds of screens and devices: the browser makes it all fit on the screen.
Another advantage of HTML is the fact that the language is ‘plain text’: the markup of HTML is simple and uses normal keyboard characters. The first browsers already provide an option to ‘view source’ and see the underlying tag codes. These can easily be copied and adapted into an HTML creation of one’s own. This makes for a language that is accessible for experimentation and self-learning, with a high potential for ‘bricolage’ by budding homepage creators.
The one feature of page description languages that is lacking in a markup language like HTML, though, is the possibility to easily store and transmit rich layout. The scientific community within which the web arose is used to the sparse layout of scientific papers (as are the other fields in which SGML-based solutions are popular). The first HTML versions provide headings, paragraphs, citation marks—but no columns, or even: images! In order to win over a larger public outside of academia, some of the visual sophistication of illustrated print media will have to find its way into the web, and thus into HTML. This is the tension upon which some of the first battles over HTML are fought.
In 1993 NCSA Mosaic is released, the first graphical web browser on the Windows platform. Mosaic starts out as a research project; upon graduation, the researchers responsible launch Netscape Navigator (1994). An obvious tension can be seen between the original scientific community and the entrepreneurs who see a future for the web as a mass medium. In a rush to make the web attractive for a larger public, they add a lot of graphical capabilities to HTML. ‘Academics and software engineers later would argue that many of these extensions were very much ad hoc and not properly designed.’ (Raggett, 1998)
Whatever the original community around the web might think of the changing nature of the language, the new companies are also responsible for opening up the academic invention to the public at large. In this process, the power to define what is HTML shifts away from the community in which it originated, to the new (commercial) browser developers:
Following a predictable path, Netscape began inventing its own HTML tags as it pleased without first openly discussing them with the Web community. Netscape rarely made an appearance at the big International WWW conferences, but it seemed to be driving the HTML standard. It was a curious situation, and one that the inner core of the HTML community felt they must redress.
The situation prompts the creation of a standards body, the World Wide Web Consortium. Initially, the W3C is not able to put as much weight on the table as it would like: the development of HTML is still mostly driven by the browser vendors. In 1995, not wanting to be left behind on the web, the world’s largest software company, Microsoft, launches Internet Explorer, and sparks what will become known as the ‘browser wars’:
(…) during the Browser Wars of the 1990s, Microsoft (MSFT) and Netscape each claimed close to 50% of the market, and their browsers were almost entirely incompatible. It wasn't uncommon to type in a URL and find that the site didn't work. Companies eager to open their virtual doors had to invest in multiple versions of their sites. In short, it was a bad situation for businesses and consumers alike. Yet the browser makers were behaving as many software companies do—by trying to out-feature the competition with the introduction of new proprietary technologies.
At this point, another group with vested interests joins the debate: the web designers who create web pages for a living, and who are having a hard time working in this fragmented landscape, often creating multiple versions of a site to cater to multiple browsers. The idea of browsers respecting the W3C standards becomes their rallying point, in what becomes known as the Web Standards Movement. Businessweek runs a feature on Jeffrey Zeldman, an influential figure in this community, highlighting ‘his ability to talk about the dry and, let's face it, dull subject of standards in a way that made everyone see their importance.’ (Scanlon, 2007)
As an aside, it would be interesting to look at this group of designers more closely: it does not represent all designers, but rather a specific subset with a hybrid design/development skill set. They define themselves in contrast to communication agencies that maintain a design/development division of labour, or that invest in more traditional designer-friendly tools such as Adobe Flash or Dreamweaver—in fact, the ‘standards-aware’ designers can be seen to campaign against these kinds of tools, advocating ‘hand-written HTML’ over ‘bloated WYSIWYG tools’. 1
At this point the W3C seems a natural ally to web developers: a standards body providing free standards—standards that become the stick to beat browser vendors with, and compliance with which becomes a mark of prestige for a new generation of web designer/developers. These makers group together as the Web Standards Project (1998).
Another party to the burgeoning ‘standards movement’ are the newer browser developers. These browsers, with their smaller market shares, have a hard time competing, because most web pages are built to the whims of Netscape’s or Internet Explorer’s rendering engines. Web standards would make it easier for new browsers to compete on equal terms. The backgrounds of these browser manufacturers are quite varied: there is the small Norwegian company Opera; there is Mozilla, informed by ideals of an Open Web, who have created the Open Source project Firefox; and there is Apple, who have created their own browser, Safari, so that they don’t have to rely on third parties for a smooth web experience on their operating system.
The rise of Firefox, especially, is spectacular. It is also a rare occasion for a new group to get involved in the debate: the web users themselves. The ‘Get Firefox’ campaign that promotes the software as it nears version 1.0 is completely volunteer-run, and thousands of Firefox users contribute to a fundraising campaign that culminates in a two-page ad in the New York Times (Mozilla, 2004).
The success of web standards is hard to quantify, but here are some indicators of its influence. The successful blogging software WordPress (launched in 2003) puts web standards right in its tag line: “WordPress is a semantic personal publishing platform with a focus on aesthetics, web standards, and usability.” (WordPress, 2003). The WordPress page also features a footer noting ‘Valid CSS’ and ‘Valid HTML’; such footers noting compliance to standards become something of a fashion. The use of web standards becomes part of accessibility guidelines that in some cases even make their way into government regulation.
Standards are a work in progress, involving many actors. The confluence of browser vendors, web designers and the W3C generates great momentum while the interests of all these parties align towards overcoming the dominance of the established browsers from Microsoft and Netscape. Once the dust settles, the way forward is less clear.
Discontent with the W3C becomes prevalent as development of XHTML2 progresses. XHTML2 clearly outlines the vision of the W3C: standards that require strict adherence (i.e., a document won’t display if it is not fully well-formed), in order to pave the way for a future in which the content of web pages can be more easily reasoned about by software programs—a future known as the Semantic Web.
Convincing arguments against a naive vision of the Semantic Web are voiced early on by Doctorow (2001): since software cannot easily deal with natural language, web pages would need some kind of structured metadata in addition to their linguistic content. Besides the inherent impossibility of objective frameworks for metadata (‘ontologies’), the quality of such metadata will always be lacking, due to human laziness on the one hand and the human desire to game the system on the other. 2
The other main argument against the new standard has been made in many forms around the web, among others by Martin (2008). It goes: the very fact that browsers are extremely forgiving in the way they interpret markup is the basis for the success of the web, because it has enabled a copy-paste style of development that keeps the barrier to entry for creating web pages low.
The most consistent and influential counter-reaction to the W3C’s direction comes from an association of browser vendors known as the WHATWG (2004). They stage a coup, proposing an alternative future standard: HTML5. The name itself suggests the promise of continuity and backwards compatibility, and the standard itself concentrates on capturing existing practices, with particular attention to web applications.
This coup is wildly successful. In 2007, the W3C even endorses the new standard. For a while work continues on both HTML5 and XHTML2 until, in 2009, the W3C announces the decision to drop XHTML2. Many parts of the HTML5 standard are then quickly implemented in browsers. This is because the browser vendors are on board from the beginning, but also because the standard is based on existing practices and does not require authors to ‘clean up their act’: HTML5 even specifies how a web browser should deal with malformed HTML tags.
The WHATWG’s more pragmatic approach to HTML standardisation has proved successful, up to the point where the notion of ‘web standards’ is now much less present in the world of web design and development than it used to be. The Web Standards Project dissolves itself in 2013, seeing its mission as largely accomplished. ‘Standards compliant’ is no longer a unique selling point. WordPress’ homepage, once proudly flaunting standards compliance and semantics in its tag line, now simply states: “WordPress is web software you can use to create a beautiful website or blog.” (WordPress, 2014). Gone too is the fashion of noting adherence to standards in website footers. It is as if web designer/developers no longer believe the syntactical strictness of XHTML will deliver them from tag soup.
As part of its pragmatism, the WHATWG favours a nimble decision-making process. To understand more about who writes the web now, we have to examine who partakes in this process. As far as standards bodies go, the W3C is quite open: the cost of membership is set on a sliding scale, depending on the character of the organisation applying and the country in which it is located. That isn’t the case with the WHATWG: as written in its Charter (WHATWG, n.d.), ‘Membership is by invitation only’, and it is available only to browser vendors. This makes it opportune to ask: who are these browser vendors, what are their interests, and how do those interests come into play in the nature of HTML5?
Update 21-10-2014: Ian Hickson weighs in in the comment section to explain more on the process employed by the WHATWG and how it compares to the W3C. More feedback by those familiar with the process is very welcome!
As much as it advances the state of the web, HTML5 is definitely no longer focused on the ideology of the Semantic Web. To examine what this means in practice, let’s look at an element of Semantic Web technology called RDF/A: the W3C’s intended mechanism to add extra metadata to your HTML pages. This metadata allows one to specify all kinds of relations that are normally only available when accessing the underlying data sources, paving the way to re-use and expose the information in new ways.
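As a sketch of what this looks like in practice (the vocabulary and the values here are my own illustration, not taken from the specification), RDF/A adds attributes to ordinary HTML tags, so that software can read relations out of the page:

```html
<!-- The vocab/typeof/property attributes tell software that this
     snippet describes a person, and which part is the name. -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Tim Berners-Lee</span>,
  <span property="jobTitle">inventor of the World Wide Web</span>
</div>
```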
A foundational idea of XHTML is its extensibility: based on a more abstract standard, XML, other XML-based formats can be mixed in. HTML5 provides no such mechanism for extension. The HTML5 working group hand-picked two XML formats that can be embedded in an HTML5 document: SVG drawings and MathML mathematical formulas. RDF/A is not among the extensions allowed in HTML5. The specification’s editor, Ian Hickson, writes on the W3C mailing list about the reasoning behind omitting RDF/A (Hickson, 2009). He fails to see the added value of RDF/A over ‘natural language processing’:
Do we have reason to believe that it is more likely that we will get authors to widely and reliably include such relations than it is that we will get high quality natural language processing? Why?
How would an RDF/RDFa system deal with the problem of the questions being unstructured natural language?
Can an RDF/RDFa system do better from a natural language query?
People have a hard enough time (as you point out!) doing simple natural language queries where all they have to do is express themselves in their own native tongue.
With natural language processing, Hickson means search algorithms that automatically distil keywords from existing documents, without the authors adding an additional formal layer as required by the Semantic Web. With natural language query, he means that users are able to use search functionality by using phrases in regular language, or simple combinations of keywords, without having to resort to a formal query language traditionally used in databases.
Hickson seems to suggest RDF/A adds nothing new or desirable, since there is an existing solution in natural language processing and natural language queries. It is then in the interest of the community not to add it:
If features solve new problems in practical ways, then the cost is worth it. If they don't, then it behooves us not to add them, as the total cost to the community would not be trivial.
Hickson’s e-mail to the list is a wonderful example of what Americans call Astroturfing: ‘the practice of masking the sponsors of a message (…) to give the appearance of it coming from a disinterested, grassroots participant. Astroturfing is intended to give the statements the credibility of an independent entity by withholding information about the source's financial connection.’ (Astroturfing, 2013). No-one uses the terminology: ‘Searching web pages with Natural Language Queries’. They google. And Google is the employer of Hickson. Google is the company that owns the best proprietary algorithms for Natural Language search, and their own closed index of the internet on which they use them.
The argument is disingenuous. Google’s algorithms are extremely good because Google is a huge company that has invested billions of dollars in them, and trains them on the huge datasets it has access to because it sits at the centre of most internet traffic. Other companies do not have access to algorithms and indexes of the same quality. What Hickson is saying, in effect, is: we have an existing solution that works fine, as long as you are willing to depend on the commercial company for which I work.
The very idea of standards is that they level the playing field for the various stakeholders. Keeping the standards process neutral is all the more important because a company like Google has a known record of stifling internet standards when they conflict with Google’s interests. In tech circles Google is infamous for having tried to sabotage the RSS syndication format. RSS is a standard by which blogs and other periodic online publications can notify readers of new articles. Google’s Chrome browser is the only mainstream browser not to support RSS. Google also created a free RSS reader, only to discontinue it once it had effectively extinguished the competition (Cortesi, 2013).
Like RSS, RDF/A provides a way for content creators to make links and cross-references that do not need Google. That Hickson does not highlight the conflict of interest inherent in his judgment on this HTML5 feature shows how fragile the standards process remains, and how the continued development of HTML5 hinges on a balance of powers that can easily come undone. The fragility of the standards process is summarised by technologist Shelley Powers:
On the other, I've been a part of the HTML WG for a little while now, and I don't feel entirely happy, or comfortable with many of the decisions for X/HTML5, or for the fact that it is, for all intents and purposes, authored by one person. One person who works for Google, a company that can be aggressively competitive.
The history of the web is mirrored in the history of its main encoding language, HTML. As new parties develop an interest in the web, they start to partake in the development of this language: from the academics who launch the web, to the commercial companies that develop the first mainstream browsers, to the web standards movement that sees the designers and developers who create websites join the conversation. Nowadays, the most influence comes from web browser manufacturers, most of whom are part of companies with other, large stakes on the internet.
This history raises some questions about the current situation. With the dissolution of the Web Standards Project, it seems web designers and developers are less involved in the development of HTML. That is a shame, because as a voice they could provide a counterbalance to the interests of the web browser manufacturers. As the conflict of interest between Google and Ian Hickson shows, the process can be far from neutral.
Theoretically, it is not just designers who should be involved, but also their clients. From businesses that sell goods, to online publications, to individual bloggers: they all have an interest in being able to make the websites they want to make. Even if the standards process is messy, it still allows more influence than the ‘walled gardens’ with which they would otherwise have to content themselves: selling and publishing through Facebook or Amazon.
Finally, with perhaps the exception of their involvement in the success of the Firefox web browser, the one figure conspicuously missing from the standards process is the web user. From its humble beginnings as a medium for the exchange of scientific reports, the web has become an intrinsic part of the lives of most people on this planet. Is it time for the one body that has not yet been seen in the history of HTML to show up—that of consumer organisations?
Astroturfing. (2013, November 19). In Wikipedia, the free encyclopedia. Retrieved November 19, 2014 from http://en.wikipedia.org/w/index.php?title=Astroturfing&oldid=582331848
Cortesi, A. (2013, March 14). Google, destroyer of ecosystems. Retrieved November 19, 2014 from http://corte.si/posts/socialmedia/rip-google-reader.html
Doctorow, C. (2001, August 26). Metacrap: Putting the torch to seven straw-men of the meta-utopia. The WELL. Retrieved November 19, 2013 from http://www.well.com/~doctorow/metacrap.htm
Martin, A. (2008, September 28). W3C go home! (C’est le HTML qu’on assassine). uZine. Retrieved November 19, 2013 from http://www.uzine.net/article1979.html
Mozilla Foundation Places Two-Page Advocacy Ad in The New York Times. (2004, December 15). Mozilla Press Center. Retrieved November 19, 2013 from https://blog.mozilla.org/press/2004/12/mozilla-foundation-places-two-page-advocacy-ad-in-the-new-york-times/.
Hickson, I. (2009, February 13). Re: RDFa and Web Directions North 2009. Public-rdf-in-xhtml-tf@w3.org. Retrieved November 19, 2013 from http://lists.w3.org/Archives/Public/public-rdf-in-xhtml-tf/2009Feb/0069.html.
Powers, S. (2009, July 2). XHTML2 is dead. Burningbird. Retrieved November 19, 2013, from http://burningbird.net/node/12#.Uouhk6Dzs60
Raggett, D. (1998). Chapter 2: A History of HTML. In Raggett on HTML 4 (Addison Wesley Longman, 1998). Retrieved November 19, 2013 from http://www.w3.org/People/Raggett/book4/ch02.html.
Scanlon, J. (2007, August 6). Jeffrey Zeldman: King of Web Standards. BusinessWeek: innovation_and_design. Retrieved November 19, 2013, from http://www.businessweek.com/stories/2007-08-06/jeffrey-zeldman-king-of-web-standardsbusinessweek-business-news-stock-market-and-financial-advice
Standard Generalized Markup Language. (2013, November 6). In Wikipedia, the free encyclopedia. Retrieved November 19, 2013, from http://en.wikipedia.org/w/index.php?title=Standard_Generalized_Markup_Language&oldid=580454005
Web Hypertext Application Technology Working Group Charter. (n.d.). Retrieved November 19, 2013, from http://www.whatwg.org/charter
WordPress — Home. (2003, June 18). Retrieved November 19, 2013 from https://web.archive.org/web/20030618021947/http://wordpress.org/.
WordPress — Home. (2014, September 28). Retrieved November 19, 2013 from https://web.archive.org/web/20140928052112/https://wordpress.org/
This article was commissioned for “Considering your tools. a reader for designers and developers” and is cross-posted here.
Update October 4, 2014: Since this article’s original publication, RDF/A has been accepted as part of HTML5: http://www.w3.org/News/2013#entry-9919.
ufo2otf is a command line utility that takes UFO font sources and generates OTFs and webfonts. It helps you translate, as quickly as possible, your font editor’s working files into fonts one can use and install, on one’s own system and on the web.
Especially if you are following tellyou’s lead and releasing your fonts early and often, you can profit from automating this process.
Installing ufo2otf is quite easy. Well, you have to use the command line, but since ufo2otf itself runs on the command line, that is fair game. It is a good idea to learn about the command line.
On a Mac, you can install it like this:
sudo easy_install ufo2otf
Then, you can run ufo2otf and have it tell you if you have got all the dependencies set up:
ufo2otf --diagnostics
If everything works, you can create an OTF from a UFO in the folder in which you find yourself, like this:
ufo2otf NimbusSanL-Regu.ufo
Which will create NimbusSanL-Regu.otf. You can also create multiple OTFs by passing multiple arguments:
ufo2otf NimbusSanL-Regu.ufo NimbusSanL-ReguItal.ufo
Which creates NimbusSanL-Regu.otf and NimbusSanL-ReguItal.otf. To also generate webfonts, we pass the option --webfonts:
ufo2otf NimbusSanL-Regu.ufo NimbusSanL-ReguItal.ufo --webfonts
Which will additionally create a webfonts folder with TTF, EOT and WOFF versions of the fonts, and a CSS stylesheet that links the different versions together.
From the command line you can even run it for all the UFOs in the current folder at once. This is what I did when I created the webfonts and stylesheet for the article on the GhostScript fonts:
ufo2otf *.ufo --webfonts
Traditionally, computer programs are written in a language like C, which is readable to (some) humans, and then compiled to machine code, which is readable to the computer. The program that takes care of this step is called the compiler.
If you have a set of UFO source files, the same logic applies: the UFO is human-readable text (even if, in this case, you never really write it yourself), and it requires a compilation step to turn it into smaller, quicker font files that your operating system knows how to use.
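To make the ‘human-readable’ claim concrete: a UFO stores each glyph as a small XML file (a .glif). The glyph below is a made-up, minimal example of my own, parsed here with nothing but Python’s standard library.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical .glif file: UFO stores every glyph as a
# small XML document much like this one (a real file starts with an
# <?xml ...?> declaration, omitted here for brevity).
GLIF = """
<glyph name="A" format="2">
  <advance width="600"/>
  <unicode hex="0041"/>
  <outline>
    <contour>
      <point x="50" y="0" type="line"/>
      <point x="300" y="700" type="line"/>
      <point x="550" y="0" type="line"/>
    </contour>
  </outline>
</glyph>
"""

# Because the format is plain XML, the standard library can read it.
glyph = ET.fromstring(GLIF)
print(glyph.get("name"))                              # the glyph's name
print(glyph.find("advance").get("width"))             # its advance width
print(len(glyph.find("outline").findall("contour")))  # number of contours
```

A compiler like ufo2otf reads hundreds of such files plus the font-wide plists, and turns them into a single binary OTF.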
A program that is widely used in the font industry is the Adobe Font Development Kit for OpenType (AFDKO). It comes with the makeotf command line program, which is able to create OTF files from a specific layout of (PostScript) source files. Font editors like FontLab use it to generate their OpenType/CFF fonts. To use the AFDKO with UFOs, Tal Leming has created a bridge called ufo2fdk. This is a Python library that works together with another Python library, RoboFab: RoboFab reads in the UFO, and ufo2fdk passes it on to the AFDKO, which generates the font.
The AFDKO has some downsides when using it for a public project, where you might have a very heterogeneous working environment. It is closed source, which means there is no way that you can adapt it to new situations. For example, it is unavailable on Linux, which is not just a problem for those designers running on Linux, but also when embedding it into a web service.
The alternative comes in the form of the open source font editor FontForge. FontForge is able to read a large number of font formats, including UFO, and can generate fonts as well. FontForge is scriptable with Python, and you can use it in your own Python scripts without launching the graphical user interface. This makes it especially suitable for use in a compilation workflow.
To generate OTFs, ufo2otf can use both the AFDKO and FontForge. By default, it will use whichever compiler is installed (with a preference for FontForge if both are present). You can also explicitly tell it which compiler to use. This can be quite handy, because different compilers might interpret a UFO differently, and finding out about such inconsistencies can help to fix implementation details, or to fix ambiguities in the UFO specification.
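That preference can be sketched in a few lines of Python. This is not ufo2otf’s actual code: the function name is invented, and the detection here looks for command line binaries, while the real tool checks its dependencies differently. A minimal sketch:

```python
import shutil

def pick_compiler(prefer=None):
    """Pick a font compiler, preferring FontForge when both are present.

    Hypothetical sketch; ufo2otf's real detection logic differs.
    """
    available = []
    if shutil.which("fontforge"):  # FontForge's command line binary
        available.append("fontforge")
    if shutil.which("makeotf"):    # the AFDKO's compiler
        available.append("afdko")
    if prefer is not None:
        if prefer not in available:
            raise RuntimeError("requested compiler %r is not installed" % prefer)
        return prefer
    if not available:
        raise RuntimeError("no font compiler found: install FontForge or the AFDKO")
    return available[0]  # FontForge comes first, so it wins when both exist
```

The explicit `prefer` argument corresponds to telling ufo2otf which compiler to use; leaving it off gives you the default behaviour described above.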
To generate webfonts, one needs to have FontForge installed. For the webfonts, ufo2otf will automatically perform a number of crude optimisations that are meant to make a typeface more suitable for the screen.
Aside from writing OTFs, one might want to create other kinds of font files that cater to the environment of the web. In the early days of @font-face support, serving typefaces on the web is an onerous endeavour. Only one browser, Safari, supports regular otf and ttf fonts. Internet Explorer supports an alternative format, yet it must be created with a desktop application that only runs under Windows. Early versions of Mobile Safari require yet another format: SVG fonts. The conversions needed to create all these versions, and the CSS rules required to get them all to work together, are daunting.
It is at this point that Font Squirrel enters the scene. This website offers you the possibility to create @font-face kits: for a font you upload yourself, all the necessary versions are created, along with a CSS file. Running what is probably a headless version of FontForge, together with a slew of other tools, the service proposes all kinds of useful modifications to make your files more fit for the web: automatic hinting, and subsetting of the font to a defined character set (easier to download).
The ease of use offered by Font Squirrel is surely an important catalyst in the way in which @font-face ends up defining the face of the web. Using Font Squirrel, however, does have its downsides, especially when setting up an automated workflow like you do in a larger organisation or in an open source project. As a website, it is not easy to automate. Its interface is made for humans ticking boxes, not for hooking into scripts: there is no API, or Application Programming Interface. Font Squirrel is also always changing, which means that you cannot count on an option you used yesterday to be available today. This would be less of a problem if it were open source, in which case you could pick a version to run locally on your own computer, but this is not the case.
A nice bonus when generating webfonts with ufo2otf is that it produces more usable CSS than Font Squirrel does. In CSS, one can group multiple @font-face declarations into one family: one can declare one font file to use for bold, another for regular, and yet another for the italic. This is what you want if, for example, you use NimbusSanL-Regu.otf, NimbusSanL-ReguItal.otf, NimbusSanL-Bold.otf and NimbusSanL-BoldItal.otf together. Font Squirrel produces:
@font-face {
    font-family: 'nimbus_sans_lregular';
    src: url('nimbussanl-regu-webfont.eot');
    src: url('nimbussanl-regu-webfont.eot?#iefix') format('embedded-opentype'),
         url('nimbussanl-regu-webfont.woff') format('woff'),
         url('nimbussanl-regu-webfont.ttf') format('truetype'),
         url('nimbussanl-regu-webfont.svg#nimbus_sans_lregular') format('svg');
    font-weight: normal;
    font-style: normal;
}

@font-face {
    font-family: 'nimbus_sans_lreguital';
    src: url('nimbussanl-reguital-webfont.eot');
    src: url('nimbussanl-reguital-webfont.eot?#iefix') format('embedded-opentype'),
         url('nimbussanl-reguital-webfont.woff') format('woff'),
         url('nimbussanl-reguital-webfont.ttf') format('truetype'),
         url('nimbussanl-reguital-webfont.svg#nimbus_sans_lreguital') format('svg');
    font-weight: normal;
    font-style: normal;
}

@font-face {
    font-family: 'nimbus_sans_lbold';
    src: url('nimbussanl-bold-webfont.eot');
    src: url('nimbussanl-bold-webfont.eot?#iefix') format('embedded-opentype'),
         url('nimbussanl-bold-webfont.woff') format('woff'),
         url('nimbussanl-bold-webfont.ttf') format('truetype'),
         url('nimbussanl-bold-webfont.svg#nimbus_sans_lbold') format('svg');
    font-weight: normal;
    font-style: normal;
}

@font-face {
    font-family: 'nimbus_sans_lbolditalic';
    src: url('nimbussanl-boldital-webfont.eot');
    src: url('nimbussanl-boldital-webfont.eot?#iefix') format('embedded-opentype'),
         url('nimbussanl-boldital-webfont.woff') format('woff'),
         url('nimbussanl-boldital-webfont.ttf') format('truetype'),
         url('nimbussanl-boldital-webfont.svg#nimbus_sans_lbolditalic') format('svg');
    font-weight: normal;
    font-style: normal;
}
Instead of creating one font family, Font Squirrel uses four: nimbus_sans_lregular, nimbus_sans_lreguital, nimbus_sans_lbold and nimbus_sans_lbolditalic. Even if there are italics and bolds, they all have font-style: normal and font-weight: normal.
In ufo2otf we create just one font family, Nimbus Sans L, and change the font-weight and the font-style in accordance with the font variant:
@font-face {
    font-family: 'Nimbus Sans L';
    font-style: normal;
    font-weight: 400;
    src: url('NimbusSanL-Regu.eot'); /* IE9 Compat Modes */
    src: url('NimbusSanL-Regu.eot?#iefix') format('embedded-opentype'),
         url('NimbusSanL-Regu.woff') format('woff'),
         url('NimbusSanL-Regu.ttf') format('truetype');
}

@font-face {
    font-family: 'Nimbus Sans L';
    font-style: italic;
    font-weight: 400;
    src: url('NimbusSanL-ReguItal.eot'); /* IE9 Compat Modes */
    src: url('NimbusSanL-ReguItal.eot?#iefix') format('embedded-opentype'),
         url('NimbusSanL-ReguItal.woff') format('woff'),
         url('NimbusSanL-ReguItal.ttf') format('truetype');
}

@font-face {
    font-family: 'Nimbus Sans L';
    font-style: normal;
    font-weight: 700;
    src: url('NimbusSanL-Bold.eot'); /* IE9 Compat Modes */
    src: url('NimbusSanL-Bold.eot?#iefix') format('embedded-opentype'),
         url('NimbusSanL-Bold.woff') format('woff'),
         url('NimbusSanL-Bold.ttf') format('truetype');
}

@font-face {
    font-family: 'Nimbus Sans L';
    font-style: italic;
    font-weight: 700;
    src: url('NimbusSanL-BoldItal.eot'); /* IE9 Compat Modes */
    src: url('NimbusSanL-BoldItal.eot?#iefix') format('embedded-opentype'),
         url('NimbusSanL-BoldItal.woff') format('woff'),
         url('NimbusSanL-BoldItal.ttf') format('truetype');
}
This is easier to use: we can set the font-family Nimbus Sans L on the body, and bold text will automatically be rendered in the proper bold. With the Font Squirrel CSS, you need to explicitly assign a different font to the parts of the body that need to be rendered bold:
strong, b {
font-style: normal;
font-family: 'nimbus_sans_lbold'
}
Note that ufo2otf’s approach is a lot more convenient (and more semantically correct), but less foolproof: it relies on the font providing proper metadata. Also, while CSS supports quite a few font weights, it does not allow for many different font styles: only normal and italic (plus the rarely used oblique). So you cannot have, for example, a normal and a condensed version in the same family. ufo2otf counters this by creating a new font family for such variants:
@font-face {
    font-family: 'Nimbus Sans L Condensed';
    font-style: normal;
    font-weight: 700;
    src: url('NimbusSanL-BoldCond.eot'); /* IE9 Compat Modes */
    src: url('NimbusSanL-BoldCond.eot?#iefix') format('embedded-opentype'),
         url('NimbusSanL-BoldCond.woff') format('woff'),
         url('NimbusSanL-BoldCond.ttf') format('truetype');
}
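To make the mapping concrete, here is a small Python sketch of the idea. The function name and the string parsing are invented for this example: the real ufo2otf reads the family name, weight and style from the UFO’s metadata rather than from the style name.

```python
def css_identity(family, style_name):
    """Map a style name like 'Bold Italic' or 'Bold Condensed' to a
    (css_family, css_weight, css_style) triplet.

    Illustrative sketch only: ufo2otf reads this from the font's metadata.
    """
    words = style_name.split()
    weight = 700 if "Bold" in words else 400
    style = "italic" if "Italic" in words else "normal"
    # Variants CSS cannot express (Condensed, Extended, ...) get their own family
    leftovers = [w for w in words if w not in ("Regular", "Bold", "Italic")]
    if leftovers:
        family = family + " " + " ".join(leftovers)
    return family, weight, style
```

With this logic, 'Bold Italic' stays inside the Nimbus Sans L family, while 'Bold Condensed' spills over into a separate 'Nimbus Sans L Condensed' family, as in the CSS above.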
If you are curious how ufo2otf goes about its font detection: it is a simple Python program, and its source code tells the whole story.
The metaphors we live by are different in each era, and tell you about the social movements shaping the moment. In the time of Snowden and Zuckerberg, we get our metaphors from Silicon Valley. The ‘fork’ is one such concept, originating in this case from the world of open source software development. Initially considered a negative occurrence, a fork is when someone creates a new version of an existing project that takes a different direction than the original maintainer imagined. A new style of open source collaboration, embodied in the popular code sharing platform Github, encourages forking. On Github, collaboration starts by creating a fork of a project and adding changes to this fork. Then one either contributes these changes back to the original repository (if the maintainers accept them), or one goes one’s own way: a fork proper, in the traditional sense.
There is a lot to be said about this ‘bazaar’ style development model, and how a pragmatic view on originality and authorship and an embracing of redundancy can make for a culturally rich ecosystem. These ideas are inspiring enough to see how they could also work outside the realm of software development. But when it comes to type design, one need not look to software development. To see how building upon existing creations makes typographic sense, one can look at type design history itself.
Two seminal typefaces of post-war graphic design are described by their own creators as improvements upon existing fonts.
NOT THE PENGUIN YOU KNOW
Gill Sans is considered Britain’s national sans serif, as seen on Penguin books and in the BBC logo. The typeface is designed by Eric Gill, who describes it as an attempt to improve Edward Johnston’s typeface made for the Tube:
The first notable attempt to work out the norm for plain letters was made by Mr Edward Johnston when he designed the sans-serif letter for the London Underground Railways. Some of the letters are not entirely satisfactory, especially when it is remembered that, for such a purpose, an alphabet should be as near as possible ‘fool-proof’ (…)—nothing should be left to the imagination of the signwriter or the enamel-plate maker. In this quality of ‘fool-proofness’ the Monotype sans-serif face [Gill Sans] (…) is perhaps an improvement.
On Typotheque, Ben Archer reviews the way in which Eric Gill takes Johnston’s typeface as his base, and then tries to achieve more ‘fool-proofness’. For example, in letters like the b, d, p and q, which consist of a stick and a belly, so to speak, Gill connects the stick directly to the belly (‘flattening of the bowl’). At the same time, Gill is not entirely consistent in this effort, as he adds additional curves (cusps) to some letters (the a) while removing them from others (the i). In Archer’s opinion, this means Gill did a bad job in re-adapting the design. One can argue with his reasoning: consistency is not necessarily what makes a great typeface, and a set of consistent letter shapes is by no means a guarantee that these letters will work well together as a typeface.
Yet it is the idea that Gill Sans represents a proposition of improvement upon an existing typeface that interests me. And we can now all try to do a better job than Gill, because Greg Fleming released the source files for Justin Howes’ digitisation of Johnston’s typeface under an open source license, as Railway Sans.
‘In 1956, Edouard Hoffmann, of the Haas Type Foundry in Switzerland, decided that the more natural typeface, Akzidenz Grotesk, needed to be reworked for a new century.’ (source) The new typeface comes at exactly the right moment to be adopted by the burgeoning Swiss International Style, and another famous twentieth-century typeface is born: Helvetica.
The changes are subtle, as you can see in Joe Swanson’s installation shown in the masthead of this article. And whereas Helvetica has its share of adoration, the lesser known predecessor has its fans too, just like in the case of Gill. Martin Majoor is a Dutch type designer known for his font families that contain both sans serif and serif variations of the same skeleton, and in which the sans serifs are accorded features normally reserved for serif designs, like true italics (not slanted). Majoor sounds off in Eye Magazine about the dull eyesore that is Helvetica, and contrasts it with the more likeable Akzidenz. Ironically, whereas Archer accuses Gill of being inconsistent, Majoor accuses Miedinger of being too consistent in his adaptation of Akzidenz:
Compared to Akzidenz Grotesk, Helvetica has hardly any new features. Though claimed to be an improvement on Akzidenz Grotesk, it lacks all the character and charming clumsiness of Akzidenz Grotesk. Helvetica is blunt and colourless (…)
If forking defined 20th century type, one would imagine that in the digital era the promiscuous forking of typeface designs would take an ever higher flight: typefaces are distributed in a form that is extremely easy to manipulate and the tools to do so are readily available.
Yet looking through Typographica’s favourite typefaces of 2012, there is only one typeface that is explicitly based on another popular digitally available typeface: Stanley, based on Times New Roman. Apparently, type designers prefer to take their inspiration from the pre-digital era: they work from Viennese street signs, a plain Dutch typeface from 1909 in which a Piet Zwart pamphlet was set, the work of an 18th century Parisian stencil maker who had sold all his copper plates to Benjamin Franklin, and ‘a very little known typeface issued in 1913 by the Dresden foundry Schriftguß AG Brüder Butter’.
Why is it, then, that all these designers base themselves on pre-digital sources? Why don’t they work from existing digital font sources? Or, at least, why do they not do so openly? It might be the case that they already do, as there is really no way to know whether a designer started off with an existing digital font.
One of the reasons the fork has flourished in type design is that copyright protection on typefaces is very weak. As baseline describes in I like tight pants and I want my generic font medicine, it is not the design itself but only the final digital font file that can be copyrighted. This is one of the reasons why type designers choose to work on pre-digital revivals: the practical consequence is that they do not have to care about any copyrights in the original.
Another element is that by working from pre-digital sources, typographers underline the need for their skill set, and stress the labour involved in the process. Ricardo Lafuente is onto something when he borrows Fred Smeijers’ terminology to describe the profession’s efforts to separate “true” type designers from mere font tweakers. With a digitisation, at least, it is clear that a mere ‘tweak’ cannot suffice: everyone who has ever digitised a font knows that it is a lengthy and laborious process to interpret the blurry analogue impression of ink on paper into the squeaky clean logic of PostScript points.
Traditionally, type design has been a discipline tightly coupled to the printing industry. To get a job as a type designer, one needed to work in the industry, because the industry paid for the production costs. Nowadays, the means of production for type design are practically free. Type design then, like graphic design, visual art and music, becomes a field with a very low barrier to entry. And because creative professions are fun, and are deemed to be full of rewards, many flock to these fields, leaving a situation where supply highly outnumbers demand (Hans Abbing wrote a great book on this: Why Are Artists Poor?).
When one has invested in such a discipline, for example by following an education at the KABK in Type & Media or at the University of Reading, or by joining an organisation like the ATypI, one has every interest in defending the virtue of ‘professional art’ versus the amateurs. One way to do so is to distinguish oneself from enthusiastic amateurs who distribute their work for free on sites like dafont, and who are not always clear about the provenance of their work. Starting from existing digital typefaces, then, would seem uncomfortably close to a ‘font tweaker’ approach, blurring the lines between the professionals that know the craft and the rest of the human population that knows how to open up a font editor.
In software, it can be confusing to have many forks of one project in existence, because in practice one usually wants to use just one version of a package. Cultural artefacts like typefaces, however, can more easily exist in an abundance of similar guises, because they coexist. Both the inspiration (Akzidenz) and the inspired (Helvetica) can be used by a contemporary designer. Or one might prefer to use a sibling like Univers, which shares the Akzidenz inspiration but takes it somewhere else.
There are several versions of Johnston’s typeface available that one could use instead of Gill Sans, and, according to Archer: ‘FB Agenda (1993 by Greg Thompson), Bliss (1996 by Jeremy Tankard) and Fedra Sans (2001 by Peter Bilak), are some of the recently-produced typographical riches that all owe some part of their provenance to Edward Johnston’s sans serif lettering for the London Underground in 1916’.
As more and more typefaces are becoming available under various open source licenses (like the aforementioned Railway Sans, or the Ghostscript fonts baseline treats in I like tight pants and I want my generic font medicine), a type design culture of the digital fork becomes more and more feasible. Yet for this to happen, type designers might first need to let themselves, in Lafuente’s words, be ‘contaminated by the creeping tweaker threat’. And we are a long way from there. Even Dave Crossland, who is one of the most visible figures in the world of open source type design, and whom one would expect to embrace a culture of appropriation and re-use, has the following advice to an aspiring type designer:
the educational value of doing outline drawing over the top of an existing typeface design is rather low. If you want to post work for review, you'd be better off making your own typeface design from scratch :)
In short: We present a release of UFO source files for the Ghostscript fonts, a set of typefaces created to replace (and mimic) the proprietary typefaces present in the specification of Adobe PostScript. Download them as a zip, together with installable fonts, or fork them on GitHub. Let me explain.
Like medication for which the patents have expired, popular typefaces sprout generic sisters and brothers. For Helvetica: Swiss 721, CG Triumvirate, Pragmatica, Nimbus Sans.
All of this is possible because copyright protection on fonts is not very strong, especially in the United States. The thing that is copyrightable, though, is the series of points contained in an actual font file, i.e. the digital artefact that constitutes the typeface. That is why, in any End User License Agreement, the legal language always refers to the font as a program: a program, as a series of written instructions, is copyrightable.
Desktop Publishing is what happened when personal computers got powerful enough to create layouts for print publications. If you want to find one document that represents the DNA of Desktop Publishing, the Adobe PostScript specification would be a good candidate. PostScript is a language that describes page layouts, and the vector drawings and typefaces used in these layouts. In the late 1980s, it is the glue that binds together layout programs such as Aldus PageMaker and Adobe Illustrator, and printers such as the Apple LaserWriter.
In the 1990s, virtually all graphic designers find themselves moving to the desktop computer. Yet even if its production has moved to the screen, the image of graphic design in the 1990s is still tightly coupled to print. David Carson’s ‘Print is Dead’ is a book, not a website or a TV show. Designers make their name with magazines and record covers. And even if TrueType, a format developed by Apple and Microsoft, makes a lot of progress in adapting vector typefaces for the screen, it is Emigre’s and FontShop’s typefaces, designed for print and executed in PostScript, that seem to capture the design aesthetic of the era.
It is also in this time that we see a number of auto-reflexive typefaces that deal with the nature of the vector curve in general and PostScript in particular. Neville Brody’s FF Autotrace is a reflection on the passage from the analogue to the digital, and more precisely, from raster to vector. Simulating the effect of a rushed automatic ‘vectorisation’ script on a bitmap of a Helvetica-like grotesque, it is a meditation on the kind of transformations that are performed over and over again in the process of Desktop Publishing.
An analogue drawing might first be scanned to get it into the computer, which produces a raster image, and then be transformed into vectors so that it can be scaled to any dimension. To make the image visible, though, it will need to be rasterised again: first on the screen of the designer, and finally by the printing device.
LettError dives into the very fabric of PostScript when creating FF Beowolf, what they call a RandomFont. Just van Rossum and Erik van Blokland have added an extra command to the PostScript language: freakto. Unlike the predictable curveto and moveto commands of regular PostScript, freakto places its points slightly randomly. LettError says: ‘Beowolf demonstrated that digital fonts are data and code, and therefore instructions that can modify themselves.’ As Pierre H. tells me, it used to get you banned from print shops: since the PostScript execution takes place in the printer, printing a document in Beowolf takes significantly more time and computing resources than printing a regular document.
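The basic gesture of a RandomFont can be approximated in a few lines of Python: every time a glyph is drawn, each point of its outline is displaced by a small random amount. A toy sketch, of course, with a made-up function name; Beowolf performed this trick inside the PostScript interpreter, in the printer itself.

```python
import random

def freakish(points, wobble=3.0, rng=random):
    """Displace every outline point by at most `wobble` units in x and y,
    so each rendering of the glyph comes out slightly different."""
    return [(x + rng.uniform(-wobble, wobble),
             y + rng.uniform(-wobble, wobble))
            for (x, y) in points]

# A made-up square outline, standing in for a glyph's points
outline = [(0, 0), (100, 0), (100, 100), (0, 100)]
```

Call freakish(outline) twice and you get two different quadrilaterals: the font, being data and code at once, redraws itself at every use.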
It is only today, when every design agency also produces websites and mobile applications, that PostScript has lost its position as the central language. When designing for the screen, a ‘page description language’ is not flexible enough. The defining language of today is probably the markup language HTML, which trades the (typo)graphic precision of PostScript for the kind of flexibility screen based media require. The main standard for eBooks, ePUB, is based upon it.
Every language needs interpreters. Early laser printers had their own PostScript interpreter, translating the vector image into a rasterisation. You also need an interpreter when you want to rasterise a PostScript document to the screen. A PostScript interpreter that is Free and Open Source (and whose history also goes back to the 1980s) is Ghostscript.
As part of the specification for PostScript, Adobe specifies a set of 35 fonts that each PostScript capable device should be able to handle. Because these typefaces are not available under an open source license, Ghostscript has a problem creating a Linux version. The solution: the company developing Ghostscript convinces the Hamburg based font design company URW to license a set of their generic typefaces under the GPL license:
The copyright holder of these fonts, URW++ Design and Development Incorporated (the successor to the former well-known URW company), was willing to release the fonts with the GPL and the AGFPL because they judged (correctly in my opinion) that these particular fonts have become such a commodity item, with such low profit margins, that the value to URW++ of having URW++’s name widely visible and appreciated on the net is now greater than the loss of profit from those future sales that the free licenses will cause not to occur.
If PostScript is the DNA of Desktop Publishing, these 35 fonts represent its typographic genes. Thus, while none of the fonts are extremely interesting in their own right, they form a great backdrop for showing procedural transformations on typefaces. Examples are OSP’s NotCourierSans (chopping the serifs off the Nimbus Mono), Patin Helvète (adding serifs to the Nimbus Sans) and Limousine (attempting to maximally change the character of Nimbus Sans through minimal modifications). And it is the free and open source license of the Ghostscript fonts that allows these alterations and their redistribution.
You can get the original release of the Ghostscript fonts on Sourceforge. These are PostScript Type 1 fonts. If you want to continue building on them, the UFO font format is more suitable. That is why we prepared a release of all these font files in this format (an added convenience: less cryptic filenames). We see these fonts as finished, but there are some technical implementation details that could still be improved: you can find more info in the README.txt. More importantly, you might find ways to abuse them for your own ends. If you were formed in the 1990s, PostScript is in your blood: and these are the fonts you will feel all Oedipal about.
A video of the redesign of the OSP homepage, from a traditional blog based site to a site that is used to communicate our design process in a ‘release early, release often’ manner: every time we add a file to our shared repository, it shows up on our site.
The screenshots above are from a period of 48 hours where we work on the site together. To write CSS, we keep a shared Etherpad. One person is in charge of the gong: when she rings, she copies the contents of the pad into the site’s CSS file, and uploads it—after which everyone can refresh their browser and see the updated styles.
If you run a recent version of Etherpad, you can install ep_export_less_and_css, a plugin bnf created that enables CSS export directly from Etherpad.
Version 1.0 is reserved for the first version that sees the design intentions crystallised, the functionality in place, and all the first bugs accounted for. It’s what you would have wanted your first release to be like, except that it took all the releases in-between to get there:
We are very pleased to announce the release of IPython 1.0, nearly twelve years after the first release of IPython 0.0.1.
With the Open Baskerville project, we try to use this logic on a typeface. This is a screen shot showing the metadata of our most recently released font:
As you can see, we put the version number right in the font name. This has a practical reason: if there are going to be multiple versions, better make sure the user can tell them apart. But there is also a philosophical reason—we want to make it clear up front that our typeface is developed in an iterative way.
Typefaces are not usually developed in a release early, release often manner. When a designer or a foundry releases a typeface, it is usually considered finished: sometimes new technological developments warrant a new release, like when fonts first got released as OpenType, or now with the arrival of webfonts. The constant stream of updates as we know it from software teams is absent from typeface development, even though most foundries refer to their work as software.
There are cultural reasons for this. One is that the industry of type design has until now not really embraced the malleability of digital typography. Ricardo Lafuente asks: why are people who make and sell typefaces still referring to themselves as foundries, as if they are still producing shapes cast in lead?
There are also practical reasons: once one has made a layout with a typeface, one does not usually want it to change—especially in the width of the letters and their spacing—as it would change the layout in unpredictable ways.
But the ideas of the Free Software and open source movements have found their way into the larger field of culture. Libre fonts, typefaces released as open source, have been a large success in recent years, thanks in no small part to web typography, where for a long time most traditional fonts could not be used because of licensing restrictions. But in the way in which they are made, the fonts offered on sites such as the Open Font Library and Google Web Fonts do not really offer any innovation over the existing foundry model. They are mostly released by individual authors as a finished package. Projects that think about setting up a framework for collaboration and iterative development are rare. Tellingly, it is often not even clear how to contribute a change back to the font.
This means libre typography is in a hairy spot. Even if conventional type foundries celebrate a personality focused idea of type design, the actual production takes place in tightly coordinated teams: the type designer can count on other designers to help him flesh out the alphabet, and foundries often reach out to specialists when it comes to specific areas of type design such as kerning and hinting. Individual designers working with free licenses will not be able to match these teams on production quality if they work by themselves.
To me it is clear that if libre typography wants to distinguish itself from its traditional counterpart, it needs to embrace alternative conceptions of type design. This can be by focusing on the possibilities of appropriation, remixing and forking of existing typefaces. Manipulating existing typefaces, either manually or through scripts, is only allowed with libre fonts: the End User License Agreements of most typefaces explicitly forbid modification. Or it can be by embracing new collaboration methods and iterative processes, like we try to do with Open Baskerville.
It is clear that the right tools for typographic collaboration still need to be built. But as I explain in I like tight pants, where I describe how the code hosting site Github has come to offer visualisations of typeface development, some elements of the underlying system are already showing up. The open font format UFO and the version control system Git are a solid basis to build on.
A collaborator on Open Baskerville needs to have an account on Github and the software Git installed on her computer. She first ‘forks’ our repository to her Github account: she now has her own version of the revision history. This fork she ‘clones’ to her own computer, using Git.
The clone consists of all the files of the project, plus the version history. She now goes and makes changes in the files. When she is happy with the changes, she ‘commits’ them, and ‘pushes’ them back up to her Github repository. She opens a ‘pull request’ where she asks for the changes to be merged into our repository.
This is a rather involved process. Outside of software developers, not many people have experience with Git. The existing interfaces to Git are not intuitive to use, being geared to programmers directly editing source files rather than designers using a graphical tool. I think the complexity of this process is one of the barriers to contribution on our project.
It will become much easier to contribute to Open Baskerville once simpler ways to handle the version control exist, whether in the form of plugins for font editors, or of a new editor built around collaboration.
What will stay the same in the future is the access we have to the revision history, as tracked in Git. In Open Baskerville, we use a set of scripts to be able to quickly generate a font package for each revision: we packaged them for other projects to use.
In computer software the Semantic Versioning standard is an attempt to formalise generally accepted practices for attributing version numbers.
It distinguishes between MAJOR, MINOR and PATCH versions, corresponding to three period-separated numbers: MAJOR.MINOR.PATCH (e.g. version 1.5.4). What is the difference between these categories? From Semantic Versioning’s specification, the guidelines as to when to increment the MAJOR, MINOR and PATCH versions:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
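As a minimal illustration of the three components, a version string can be split and bumped with a few lines of shell. The version string is just an example, and a real tool would use a proper SemVer parser; this handles only plain x.y.z:

```shell
# Split a semantic version string into its three components.
version="1.5.4"
IFS=. read -r major minor patch <<EOF
$version
EOF

# A backwards-compatible bug fix: only PATCH goes up.
patch=$((patch + 1))
echo "$major.$minor.$patch"    # 1.5.5

# New backwards-compatible functionality: MINOR goes up, PATCH resets.
minor=$((minor + 1))
patch=0
echo "$major.$minor.$patch"    # 1.6.0
```

An incompatible change would likewise bump MAJOR and reset both MINOR and PATCH to zero.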
The model distinguishes between how a program is used and how it works on the inside. Changes to the implementation that don’t change how the program appears to the user merit only a change in the PATCH version. Adding functionality visible to the user, without changing existing functionality, merits a MINOR version. Finally, changing the way in which you as a user have to use the program means updating the MAJOR version.
However, this model does not seem readily applicable to type design, because every change in the code influences the visual product the designer works with. My poster design might break more spectacularly if the font’s metrics change, messing with my line breaks; but if the angle at which the bar of the e is slanted changes, that ‘breaks’ my design as well, in a more subtle way.
This means that for fonts, unlike for software, every version is functionally different. This is another reason why we chose to feature the version number prominently in the name of the typeface.
To sum up the role of version numbers in Open Source: we have seen both the way in which a version number speaks about the mindset and aims of a project, and the more formalistic definition of Semantic Versioning.
It is the metaphor of the long road to the first version that I find the most easily applicable to design projects. Every project starts with a set of goals, aspirations, challenges. In the context of design, I would then rephrase the different types of version numbers as such:
- MAJOR version when the project reaches one of its major design goals,
- MINOR version when it reaches a sub-goal, and
- PATCH version for every incremental improvement along the way.
For example, in Open Baskerville, the major design goal is to recreate Fry’s Baskerville in a way that is usable on the modern web (Version 1.0). As a sub-goal, we are first working to recreate a width faithful to Fry’s surviving specimen (version 0.1), with a system in place to create releases. We logged 83 incremental improvements towards this goal, but we are not there yet (thus, at version 0.0.83).
If we ever reach 1.0, what then? Can we think of a new major design goal for the project, or rephrase its design goal? We might want to draw new inspiration from other Baskerville variants, for example, and adjust our goals accordingly.
Where does it end? What is the final number? Is continuing to work on one design project for years and years a desirable scenario, in the way that Windows is now at version 8 and Illustrator at version 16?
There are arguments against creating too many versions of a design. A design is a product of hopes, aspirations, goals and constraints that exist at a given point. The recent phenomenon of film directors revisiting their older movies and adding contemporary computer effects (not available at the time) has not at all been well received by fans. It shows that the constraints placed upon an artistic project shape it and create its character, and that authors might want to be reluctant to revisit their works.
Similar phenomena exist in software too. People who use many subsequent versions of a program often feel that something gets lost along the way. The initial sense of purpose embodied in a program can give way to what is called ‘bloat’, as feature upon feature is added until the program tries to do everything at once. In programmers’ circles this is known as Zawinski’s law: ‘Every program attempts to expand until it can read e-mail.’ At the same time, using old versions of software is not very practical. They might contain security holes; they might even no longer run on your current operating system.
With typefaces we are in luck: in most cases they will remain usable for a long time, as long as a description of their points exists somewhere—and mankind does not forget the mathematics of Bézier curves. So I think I want the 1.0 version to be the last. Anyone who has another vision for the design is free to fork.
In the wake of Microsoft’s acquisition of Nokia, here is a tale of a telephone and a thousand programmers’ hearts breaking. It starts when I am on the lookout for a new phone. A phone that is a little computer that can run all kinds of applications I can install myself—a smartphone. Since I like my new-found ways of writing scripts and getting intimate with the terminal, I am looking for a telephone that resembles the UNIX systems I know.
I have also learned to write Python. I know how to use it to make web applications, and it seems to be reasonably easy to create desktop applications. But when creating applications for smartphones, the two most popular phone operating systems have their own way of doing things, using different languages: Objective-C on the iPhone, Java on Android. Big companies like Apple and Google prefer to create their own way of doing things and call it a platform—which, in the words of Eben Moglen, means: places you can’t leave.
The Nokia N900 then seems to offer an alternative. It runs the Maemo operating system, which shares with Android its Linux core, but reuses much more of Linux’s graphical model. It should allow me to reuse my new-found knowledge of the Unix platform and its programming languages—like the Python I have just been learning:
27 minutes to get a basic app running; an afternoon (most of it spent doing other stuff while stuff downloaded) to get the full development environment (with emulator) up and running. Simply excellent. The N900 really is a very good platform for development work, especially with Python.
In a similar vein, whereas most telephones are locked down by default, requiring some kind of ‘jailbreak’ procedure before you can install your own software, the Nokia has no such restrictions—it even comes with a Terminal program by default.
As I scour the internet figuring out how to jailbreak the latest generation of iPhone, I come across seedy forums with adolescents shouting homophobic abuse at each other, and I figure I would rather be part of a more positive ecosystem—I order the Nokia.
It works out quite nicely. I like using the phone; the full QWERTY keyboard makes for great note-taking and texting. Though I never get to any programming beyond installing the IPython shell, there is an actual pleasure in the feeling that I could, whenever I want to. Finally, I benefit from a great build quality, which sees my phone survive falling into a bucket of paint.
The great downside to the phone is caused by the simple fact that it is not very popular. As it does not represent a sizable number of users, all the new applications that appear are for iPhone and Android. Apparently, having a phone that you can easily fiddle with yourself is not what gets users flocking to a device. I feel a little lonely using this operating system.
Yet at Fosdem 2011, a yearly event for Open Source software developers, it turns out everybody has this phone. I speak with the community manager of the largest Maemo website, who talks with great enthusiasm about this community. I remember that John has this phone, and Ginger too. For a moment it feels like the people needed for the platform to have a future are there.
Yet companies only work with open source as long as they can make open source work for them. ‘The most open’, by itself, does not easily translate into market share. Two days later, as I get home, I read that Nokia has signed a deal to adopt Microsoft Windows as the operating system for its smartphones. It is as if I hear a thousand programmers’ hearts collectively break.
This is where I type my entries. It is a lonely place. The other I like tight pants contributors are nowhere in sight; it is just me typing away in the browser. Only when I hit the submit button is the information sent to the server.
Were my colleagues to try and edit this article at the same time, the system would not allow it. Content management systems put editing locks in place, allowing only one user to edit at a time. This is because the text form I write in cannot be updated while I am editing it. So if other users were allowed to update the entry between the moment I open the text field and the moment I press the submit button, their changes would effectively be overwritten. Wikipedia, by contrast, employs a sophisticated merging tool to merge various edits together.
Baseline, who works with OSP, has introduced me to Etherpad. Etherpad presents you with an online document allowing you to start typing. As you do, you might see others connected to the pad start typing as well. There is no submit button. Everything is saved while you type so that it can be shown to your collaborators at the same time. You are no longer solitary with your text box:
Once you have used Etherpad to write, it becomes difficult to imagine writing collaboratively without it. In a book sprint in Rotterdam, we used Booki, which allows for sophisticated pdf and ebook creation. Yet like many content editing tools it imposes a single-user lock on each chapter. At the end of the session it turned out everyone had used Etherpad to write their chapters together, before copying and pasting them into the Booki platform.
If you have not used Etherpad, chances are you have used the technology through Google Docs: it was Google that bought Etherpad in 2009. At the time of the acquisition the source code was released under a permissive license. So if the code is out there, why are we not seeing more Etherpad-style collaboration online?
Built by OSP as part of a workshop where students of the Piet Zwart Institute get up close and personal with vector graphics. You edit the raw XML strings that make up the glif.
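For a sense of what those raw strings look like, here is a minimal glyph in the UFO .glif format: a full stop drawn as a single square contour. The coordinates and advance width are invented for illustration, but the elements follow the UFO specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<glyph name="period" format="2">
  <advance width="250"/>
  <unicode hex="002E"/>
  <outline>
    <contour>
      <point x="75" y="0" type="line"/>
      <point x="175" y="0" type="line"/>
      <point x="175" y="100" type="line"/>
      <point x="75" y="100" type="line"/>
    </contour>
  </outline>
</glyph>
```

Because each glyph lives in its own small XML file like this, a UFO font plays unusually well with text editors and with line-based version control such as Git.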
An online font editor would be a great way to make collaborative type design easier. Speaking with Dave at this year’s Libre Graphics Meeting, we figured that a good place to start would be a library that makes it easier to deal with typefaces in web apps—a JavaScript port of RoboFab, perhaps?
This computer is situated in Belgium’s Royal Library. It offers you access to an internal network of 300,000 pages of Belgian periodicals produced between 1831 and 1945. It has no access to the internet, no USB ports, and no connection to a printer.
If the library allowed any digital distribution of the materials outside of the library walls, it would risk claims from copyright holders.
During Public Domain Day (NL), librarian Marc D'Hoore explains: determining the authorship of a newspaper is difficult. Much of the work is anonymous or pseudonymous. There are not always clear contracts between the publishers and the authors. You do not know who owns the copyright, and you do not know when it will pass into the public domain.
Because of its mission of preservation, the Library needed to scan these materials—the paper is of extremely poor quality and disintegrating. But the Library also has a mission to facilitate access to its materials in any way it can.
Digitisation seems to offer great potential to make the material from the collections more accessible. Once scanned, material can be indexed, put online, and made available to everyone with an internet connection. In potential, that is. In reality, the Royal Library cannot do this, because it would open itself to all kinds of claims of damages by copyright holders.