History of Designing for the Screen
Tom Arah looks at the history, problems, solutions and future of designing for the screen.
Previously I looked at the history of desktop publishing (DTP) with its focus on print-based design and output to paper. However the advent of the Apple Mac and of the Graphical User Interface (GUI) into the mainstream of personal computing also heralded the opening up of an entirely new publishing medium and one that is set to become even more important than paper – the computer screen itself.
What made the launch of the Mac in 1984 so significant for screen-based design was that it broke away from the dominant, text-only, character-based display of its day and instead treated the computer screen as a blank sheet of paper - right down to its use of black pixels on a white background. By treating the screen as an addressable bitmap the Mac and its later imitators, most notably Microsoft Windows, enabled graphics and, crucially, graphical typefaces to be presented in rich onscreen layouts.
Bitmapped GUIs enabled typographical font handling
At its launch both the Mac’s bitmapped display and its dot matrix ImageWriter printer operated in perfect harmony at 72dpi, enabling the same bitmapped fonts to be shared across screen and paper. However the dot matrix print quality simply wasn’t acceptable and, critically, a rendering system based on bitmaps simply wasn’t scalable. In particular, bitmap-based rendering requires a stored version of every typeface and style you intend to use at every possible point size, which quickly becomes unworkable, especially at higher resolutions.
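The storage problem is easy to quantify with some back-of-the-envelope arithmetic. The sketch below is illustrative only – the size list, glyph count and 1-bit assumption are my own, not Apple’s actual figures – but it shows how quickly pre-rendered bitmaps balloon, especially once printer resolutions enter the picture:

```python
# Rough estimate of storage for pre-rendered 1-bit bitmap fonts.
# All figures (sizes, glyph count) are illustrative assumptions.

def bitmap_font_bytes(point_size, dpi=72, glyphs=256):
    """Approximate bytes for a 1-bit bitmap font at one size."""
    px = round(point_size * dpi / 72)      # em square in pixels
    return glyphs * (px * px) // 8         # 1 bit per pixel

sizes = [9, 10, 12, 14, 18, 24, 36, 48, 72]

total_72 = sum(bitmap_font_bytes(s) for s in sizes)
print(f"9 sizes x 1 face at  72dpi: ~{total_72 // 1024} KB")

# The same sizes at a 300dpi printer resolution grow ~17x in area:
total_300 = sum(bitmap_font_bytes(s, dpi=300) for s in sizes)
print(f"9 sizes x 1 face at 300dpi: ~{total_300 // 1024} KB")
```

Multiply by every weight and style of every installed family and the approach collapses – whereas a single vector outline per glyph serves every size and device.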
This apparent impasse was broken in 1985 by Adobe and involved moving away from pixel-based bitmaps to programmatic vectors. With the PostScript Page Description Language (PDL) and its associated Type 1 font format both the fixed page layout and the scalable fonts within it were described mathematically – each page was a program describing its end appearance, effectively a full page vector drawing. At a stroke this programmatic approach to page rendering enabled resolution, device and platform independence based on the use of fully scalable typeface outlines.
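The essence of that resolution independence can be sketched in a few lines. Here a glyph is reduced to a hypothetical list of outline control points in a 1000-unit em square (the coordinate convention Type 1 fonts use); one scale transform then targets any size on any device, where a bitmap scheme would need a stored image per size:

```python
# A glyph stored as outline control points (hypothetical triangle,
# in 1000-unit em coordinates) can be rendered at any size and any
# device resolution with a single scale transform.

def scale_outline(points, point_size, dpi, units_per_em=1000):
    s = point_size * dpi / 72 / units_per_em   # em units -> device pixels
    return [(x * s, y * s) for x, y in points]

glyph = [(0, 0), (500, 700), (1000, 0)]        # hypothetical outline

print(scale_outline(glyph, 12, 72))            # 72dpi screen
print(scale_outline(glyph, 12, 300))           # 300dpi laser printer
```

The same mathematical description yields more device pixels as the resolution rises – exactly the device and resolution independence PostScript delivered.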
Adobe’s PostScript defined typefaces as scalable vector outlines
PostScript was a brilliant solution to the needs of design-rich, high quality paper output so why not use PostScript to programmatically print to the screen? This logical next step is exactly what Adobe provided in 1987 with Display PostScript (DPS), created in conjunction with Steve Jobs (recently ousted from Apple) for use on NeXT workstations. The advantages that DPS provided over bitmap-based GUIs were considerable. Most importantly, DPS enabled Type 1 typefaces to be rendered at any size onscreen as well as on paper. In addition the resolution independence of the system meant that a 24-point font would actually appear at that physical size, while future higher density screens would provide more pixels to render each glyph, so improving the onscreen quality. In short Display PostScript provided a richer, more accurate, forward-looking design architecture and, crucially, one that was platform-independent and so could be shared, as shown by its use on IBM and SGI workstations as well as NeXT systems.
However writing to the screen proved very different to writing to paper – and far more complex. Rather than a single fixed size page, for example, a screen description language needs to deal with multiple, variable size, variable zoom windows. In addition DPS needed to work with a host windowing engine, such as the Unix-based X Window System, which added whole new layers of complexity. Throw in the need to provide hit detection and other interaction capabilities along with programming language support (provided via the ability to wrap PostScript code within a C function) and the sheer scale of the problem, and of the processing demands involved, becomes clear.
Display PostScript hid another, fatal flaw. The major strength of DPS was its ability to provide high quality fonts onscreen based on scalable vector outlines, but in fact this approach breaks down at small point sizes and low resolution as there just aren’t enough pixels to play with, resulting in dropped features and letter stems of varying weight. To avoid this it’s essential to adjust the shape of each glyph to the bitmap grid available which is just what Adobe’s Type 1 font format enabled through its system of hinting. However Type 1 hinting was a one-off effect tailored to the LaserWriter’s 300dpi output and so inadequate for the problems of even lower resolution screen rendering. The end result was that for the most important onscreen body copy, the vector-based DPS needed to revert to hand-tuned bitmapped fonts!
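Why the breakdown at small sizes? A quick calculation – one point is 1/72 of an inch, so the number of pixels across a glyph’s em square is point size x dpi / 72 – makes it clear how few pixels a screen offers (the 1/10-em stem width below is an illustrative figure, not a measured one):

```python
# How many pixels does a glyph's em square get at various devices?
# At screen resolutions and body-copy sizes there are very few to
# play with, which is why unhinted vector outlines drop features
# and produce stems of uneven weight.

def em_pixels(point_size, dpi):
    # 1 point = 1/72 inch, so pixels per em = pt * dpi / 72
    return point_size * dpi / 72

for dpi in (72, 300, 1200):
    print(f"10pt text at {dpi:>4} dpi: {em_pixels(10, dpi):6.1f} px per em")

# A typical lowercase stem is roughly 1/10 em wide (illustrative
# figure): at 72dpi that is about one pixel, so a half-pixel
# rounding error doubles or halves the stem weight entirely.
```

At 300dpi a rounding error is a modest fraction of a 40-odd pixel em; at 72dpi it is catastrophic, which is why hints tuned for the LaserWriter couldn’t save onscreen body copy.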
While Display PostScript found a temporary niche at workstation level, ultimately it was too far ahead of its time and of the processing and screen capabilities of its day to go mainstream. In any case neither of the two major OS developers for PCs were interested in handing over such key territory to Adobe or accepting the associated licensing fees. Instead, in 1989, Apple and Microsoft, although at the time engaged in a bitter legal battle over the Windows GUI, agreed to work together to develop the next stage of mainstream wysiwyg (what you see is what you get) computing.
In particular they decided to put aside their differences to cut Adobe out of the loop with Microsoft working on a PDL-based replacement for PostScript, while Apple developed an alternative scalable font format to replace Type 1.
As things turned out Microsoft’s TrueImage never amounted to much, but Apple’s TrueType proved hugely significant as it enabled advanced and dynamic hinting that effectively hand-tuned the vector font outline according to the number of pixels available. The arrival of TrueType in 1991 finally enabled everyday PCs to use the same vector-based typeface outline to produce high quality scalable output on screen as well as on paper and marked a huge advance in the development of GUI-based computing – indeed Apple effectively gifted Microsoft the technology that made Windows 3.1 such a breakthrough success.
TrueType brought high quality onscreen fonts to the average PC
TrueType was a superior format to Type 1 for screen rendering but the way that Microsoft and Apple chose to use it was a step backward from Adobe’s approach with Display PostScript. In particular, rather than programmatically rendering each glyph’s outlines directly to the screen based on the display’s actual pixel density, the TrueType outlines were instead used to dynamically generate bitmapped fonts at the desired size based on a hard-coded nominal resolution (72 dpi for Mac, 96 dpi for PC, 120 dpi for Large Fonts) and these bitmaps were then used to write to the screen. This solved bitmap-based rendering’s inherent memory issue to make scalable font handling viable and to create the GUI platform that we know today. However, unlike true programmatic vector-based rendering, it failed to provide device, resolution and platform independence.
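The point-to-pixel mapping under a hard-coded nominal resolution can be sketched as follows; it shows why nominally identical text comes out at different pixel sizes on different platforms, and at different physical sizes on different monitors:

```python
# Point-to-pixel conversion under a hard-coded nominal resolution,
# as on classic Mac (72dpi) and Windows (96dpi, or 120dpi with
# Large Fonts). The pixel size is fixed regardless of the monitor's
# true pixel density, so the physical size varies from screen to
# screen.

def pt_to_px(point_size, nominal_dpi):
    return round(point_size * nominal_dpi / 72)

for platform, dpi in [("Mac (72dpi)", 72),
                      ("Windows (96dpi)", 96),
                      ("Windows Large Fonts (120dpi)", 120)]:
    print(f"12pt on {platform:<30} -> {pt_to_px(12, dpi)} px tall")

# Rendering text at a true physical 12 points would instead require
# knowing the display's actual dpi - which this scheme never asks.
```

A 12-pixel glyph occupies a smaller slice of a denser screen, which is exactly the “sharper means smaller” behaviour described below.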
At first this distinction was hardly noticeable. OK, text specified as 12-point isn’t actually physically 12 points onscreen and its size varies from monitor to monitor, but it’s 12 point in print and that’s what counts. And, when you upgrade to a higher density, higher resolution screen, the fact that your onscreen text doesn’t get sharper but smaller is counter-intuitive, but you can always change the zoom level to make it readable again. Moreover the fact that the vector font outlines used to present a document are actually completely separate from it just isn’t apparent when the document never leaves the authoring system. In short device, resolution and platform dependence simply weren’t important when each computer was viewed as a standalone tool for producing print.
However with the rise of networking in general and the internet in particular even personal computers became increasingly interconnected and suddenly these issues became central. Now computers became content consumers as well as print producers and the screen a medium in its own right. Moving data electronically over the internet was simple but how could you reliably present that content onscreen across different platforms and devices? If NeXT-style systems based on the DPS vector rendering platform had become the norm this would have been relatively straightforward, but for mainstream Macs and PCs, based on their standalone, bitmap-based GUIs, the chickens came home to roost. The bottom line was that you couldn’t reliably specify text size, position or typeface and not just between PC and Mac platforms but from one PC to another!
The challenge was to find a way to publish information across the internet to any computer screen. The groundbreaking solution that emerged was developed on a high-end DPS-based NeXT computer but, rather than pushing the screen design envelope, it took the opposite tack and returned to absolute basics. When Tim Berners-Lee came up with the idea of the World Wide Web in 1989 he sensibly chose to completely avoid the whole question of presentation.
An early web page as displayed on Tim Berners-Lee’s NeXT computer (http://info.cern.ch/NextBrowser.html)
As such, the HyperText Markup Language (HTML) that he devised had absolutely nothing to say about typography or layout and instead applied itself solely to the markup of content features – headings, quotes, addresses and so on – that the browser application was then free to interpret as it saw fit. All text was rendered in the browser’s default display typeface, the end user could set a default type size to make sure content was readable and text simply flowed to fill the width of the browser window. In other words HTML acted as if GUIs had never been invented – it could even work on old-style, fixed width, character-based terminals.
The enormous inherent benefits that HTML provided, such as global access, easy authoring and hyperlinking, ensured that the Web took off as a publishing medium. However its complete lack of design capabilities left plenty of room, and demand, for an alternative approach. In particular, in the networked world, it became increasingly anomalous that work created on the computer had to be printed out as hard copy to be reliably exchanged with layouts, graphics and fonts intact.
Adobe, licking its wounds from the failure of Display PostScript and facing a post-TrueType collapse in its licensing revenues, knew that it was still sitting on the solution. In 1991 the company began work on Interchange PostScript (IPS) and two years later in June 1993 the new technology was presented to the world renamed as PDF – the Portable Document Format. Of course what made the format “portable” from platform to platform and from system to system was the underlying device, resolution and platform independence enabled by PostScript-based programmatic rendering of embedded scalable vector fonts. Crucially, thanks to these PostScript foundations, it was possible to create PDF files from any application capable of printing. Even more importantly, thanks to the new cross-platform Acrobat Reader application, it was now possible to accurately render PDF files to non-PostScript printers and directly to screen.
PDF reinvented PostScript as a screen medium
With PDF acting as “ePaper”, a universal electronic paper equivalent, Adobe had created a second and far richer, cross-platform screen-based publishing medium. However PDF never came to replace HTML as Adobe originally envisaged. Ultimately PDF’s approach of using the screen as a window onto the printed page was just too awkward - the screen isn’t just a piece of paper on its side, it’s a medium in its own right. PDF has an important role to play but the Web remained the most popular way to publish information to end users’ screens.
However, to fulfill its potential, HTML needed to be made a much richer design medium. Originally this was done via undesirable workarounds and extensions such as the use of table tags to divide up the web page and of Netscape’s font tag to specify a particular font size. Eventually though, beginning in 1994 and later under the aegis of Tim Berners-Lee and the W3C, Håkon Wium Lie and Bert Bos produced a dedicated style sheet language, Cascading Style Sheets (CSS), to enable the web author to specify how they would like their HTML and later XHTML content to be presented by the browser. While CSS doesn’t guarantee that the author’s wishes will be granted – you can specify a particular font, for example, but whether it will appear still depends on which fonts the end user has installed – it does make HTML a much stronger design platform.
Thanks to CSS and other important web-based advances, such as the advent of data-driven page generation through server-side scripting languages, HTML has become much the most important screen-based medium. But it too has a fatal flaw. Ultimately HTML operates on a fundamentally page-based architecture based on static, typographically-limited, textual content fetched one page at a time from the server. But the dominant metaphor for screen use isn’t the page at all but rather the computer application and when interacting with a local program the end user expects rich content, rich design and rich interaction continuously. And there’s absolutely no reason that a similar experience shouldn’t be delivered over the Internet pipeline.
It shouldn’t be surprising that the solution that emerged to provide this third and richest screen medium was developed by Macromedia, the developer behind Dreamweaver, the professional designer’s tool of choice for HTML web authoring. However back in December 1996, when Macromedia bought up FutureSplash Animator, a tiny niche application for creating cartoons, no-one could possibly have predicted the future that was in store for it when relaunched as Macromedia Flash.
Ultimately what made Flash and its SWF (Shockwave Flash) format different and gave it its strength was that it was built on vectors. From the outset this meant that Flash’s drawings, animations and graphical text were fully scalable and could be rendered efficiently and reliably on any supporting platform and device. Flash’s typographically-rich, dynamic vector animations stood out from their static host pages making the Flash player a must-have download and so creating a platform on which Macromedia was able to build.
In particular by grafting on streaming audio and video support along with dynamic text and graphics handling based on an ongoing connection to the server (no need for page refreshes), Flash became a complete multimedia solution. Just as importantly, through the development of preset UI components and ActionScript, Flash was able to move from basic user interaction to advanced programmatic control and processing. The end result is that, while Flash can still be used to add the odd bell and whistle to an HTML web page, it can now be used to create advanced, content-driven, universally-accessible Rich Internet Applications (RIAs).
Flash-based Rich Internet Applications take screen-based design to a new level
With Flash the holy trinity of screen-based media – HTML, PDF and SWF - is complete; respectively catering for internet-delivered sites, documents and applications. To take advantage of them, all the end user needs is the necessary client software – a browser, Acrobat Reader and Flash player. From the professional designer’s perspective things have become even simpler: after its recent takeover of Macromedia, there’s now really just one player to deal with – Adobe. While its Display PostScript dream never materialized, Adobe has nevertheless come to dominate the field of cross-platform, screen-based design through its applications, clients and supporting technologies.
So where does Adobe, and screen-based design, go from here? The obvious next step is to merge the three key media in a “universal client” and that is exactly what Adobe is proposing with its “Apollo” project, expected to see the light of day in early 2007. Current details are still thin on the ground but, rather than merging players, it looks as if Apollo will take the form of an additional cross-platform runtime designed to make it possible to combine HTML, PDF and SWF in any combination. The prospect is mouth-watering, enabling the best of all worlds – mixing rich static, dynamic and live content within layouts that combine fixed and fluid elements and all wrapped up in an intelligent, interactive interface.
Adobe’s Apollo project plans to fully integrate HTML, PDF and SWF
In addition the Apollo client will finally free the underlying technologies from the browser so allowing Adobe to provide a total and reliable standalone solution. This in turn will enable Apollo applications to run offline and directly from the desktop, effectively blurring the current distinctions between site, document and application. This is exciting stuff. Where Display PostScript attempted to offer a shared screen rendering platform for text and graphics, Apollo takes this idea and runs with it to provide a shared platform for delivering all forms of rich and dynamic content to both screen and paper via advanced interactive applications hosted both locally and remotely.
Until it actually arrives the full implications of Apollo for screen-based design remain uncertain, but the potential significance is enormous. In particular, rather than Adobe being cut out of the loop, suddenly it’s Microsoft and its browser and programming solutions that are in danger of being left out in the cold. In fact Windows itself is threatened as it is not required to view and run Apollo applications, a fact that will be of particular interest to handheld and set-top box manufacturers. If it succeeds, Apollo could prove to be Bill Gates’ Java nightmare back with a vengeance.
Naturally Microsoft is planning to fight back – and on all fronts. With the imminent launch of Windows Vista, with its associated WPF, XPS and XAML technologies and suite of Expression applications, the scene is set for the mother of all battles with the winner determining the future of screen-based design. It’s a subject to which I plan to return.
PostScript: The low resolution of monitor displays is a major limitation on the quality of onscreen typography but there is an important workaround. By intelligently varying the grayscale level of those pixels surrounding the glyph’s rendered bitmap the eye can be fooled into seeing smoother edges. Get it wrong however, as Flash did until recently with its handling of smaller point sizes, and the onscreen text can become too soft for comfort.
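The trick described here is coverage-based anti-aliasing. A minimal sketch of the underlying idea – render the shape at a higher resolution as 1-bit pixels, then average each block down to a grayscale level, so edge pixels get intermediate grays; this is an illustrative technique, not the actual Flash or operating system rasterizer:

```python
# Minimal sketch of coverage-based anti-aliasing: rasterize a shape
# at 4x resolution as 0/1 pixels, then average each 4x4 block down
# to a 0..255 gray. Edge pixels land on intermediate grays, which
# the eye reads as a smoother outline.

def downsample(hi, factor=4):
    """Average factor x factor blocks of a 0/1 grid into 0..255 grays."""
    h, w = len(hi) // factor, len(hi[0]) // factor
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            cov = sum(hi[y * factor + j][x * factor + i]
                      for j in range(factor) for i in range(factor))
            row.append(round(255 * cov / (factor * factor)))
        out.append(row)
    return out

# A crude 8x8 high-resolution "diagonal edge": ink where x < y
hi = [[1 if x < y else 0 for x in range(8)] for y in range(8)]
for row in downsample(hi, 4):
    print(row)
```

The fully covered block comes out solid black (255), the empty one white (0), and the two blocks the diagonal crosses land on a mid-gray – the softening effect the eye reads as a smoother edge, and the very softness that becomes a problem when applied too aggressively at small point sizes.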
Tom Arah is the webmaster of designer-info.com. He has been a professional designer working with computer software since 1987. He also offers training and consultancy and since 1997 has been the contributing editor covering design issues for PC Pro, the UK's biggest-selling (and best) computer monthly.