
The opportunities for hands-on creativity in new media, suggested by Bob Cotton 2012

An Introduction to New Media

New Media = Digital Media = Cybermedia, and it is THE creative environment and artist’s palette for the 21st century. Visioneca is a festival that celebrates this new palette of creative opportunities – this newly converged opportunity-space. Underpinning this new palette is the computer. Digital media runs on code – different sorts of code for different sorts of media. Multimedia and trans-media projects harness this media code together with coding – the authoring, scripting and programming that provides the means for audiences to interact, to choose, and to collaborate with digital media.

(My definition of Media is borrowed from Marshall McLuhan: media are ‘the extensions of man’, from Understanding Media, 1964)

(All the images used here are the copyright of their respective owners, and are either used with permission, or respectfully used under the ‘fair-use’ guidelines in Section 10, 1998 Copyright Act)

What is New Media?

New Media is Video, Animation, Games, Simulations, Sounds, Web, Music, Voice, Gesture, GPS, Movement, Location, Augmented Reality, Databases, Maps, Social Networks, Messaging, Live Video, etc….

New Media is Skype, YouTube, Facebook, Flickr, EverQuest, World of Warcraft, Second Life, LinkedIn, SMS, iTunes, iPhone, iPad, apps, Android, iOS, Flash, Java, Modul8, Max MSP, Maya, After Effects, Final Cut, 3D Studio Max, etc…

new media tools include: Max MSP, Modul8, VVVV, ArKaos, HTML5, ActionScript, Lingo, Pure Data, EyesWeb, Ajax, Objective-C, Xcode, PHP, SQL, etc…

new media devices include smart-phones, pads and handheld consoles – these markets are dominated by Apple and iOS, and by Android with its variety of manufacturers, and to some extent BlackBerry. The console market sees Nintendo, Sony and Microsoft as the dominant players…

This is a guide to the diversity of creative media in our digital media-space.

This is a map of the digital media opportunity space that I drew in 2001

Digital Media Opportunity-Space

On this diagram you can see the main classifications of digital media – the area in YELLOW is all the traditional media that have now been ENHANCED by being digitised. These include, of course, Television, Records, Radio and Publishing (ebooks, online magazines etc), as well as the components of these media – animation, post-production editing and effects, audio-mixing, page makeup, photography, illustration… (I’m not saying that ALL our media are digital, of course, but ALL our media are to some extent affected by the Digital – page composition for traditional paper books and magazines is all digital, and printing presses are mostly digital. We hardly even have analogue records any more.)
The area in PINK I’ve called DYNAMIC MEDIA, by which I mean all the new media which are not just digitised versions of previous (analogue) media. These include all the media that are generated by computers – such as Virtual Reality, simulations, games, augmented reality, expert systems, voice-recognition, speech synthesis, automatic language translation, encryption… the WWW – all the stuff that is a product of the mix of disciplines that emerged after the Second World War, disciplines that included Computer Science, Artificial Intelligence, Communications Theory (Shannon), Cybernetics (Wiener), Game Theory (von Neumann), A-Life (Artificial Life), Robotics, Computer Graphics, Internetworks, the World Wide Web, Social Networks, and so on…
In the central concentric rings are the sectors to do with PERSONALISATION, COMMUNICATIONS and COMMUNITY. PERSONALISATION covers the range of software that stores our personal preferences – our client-relations-management systems, our cookies, contacts, diaries, personal content-management systems (photos, music, videos, etc), and our blogs and personal websites. Outside that layer is COMMUNICATIONS – all the tools we use to communicate with others, and with machines – email, SMS, MMS, messaging, Skype, chat, blogs, sites, social network profiles (etc). And finally COMMUNITY – social networks, multi-user domains (MUDs), Worlds, Second Life, geographic information systems like crime-maps, recommendation engines, and information sharing and publishing – Flickr, Tumblr, YouTube, Vimeo and the like.
So Digital ‘New Media’ is not just one thing – it’s many, many things. This map was made in 2001. Perhaps you should make your own map of the digital media developments that interest you. It’s not easy to create a taxonomy of this opportunity-space, nor a means of visualising it that makes sense for designers. I’ve called it an OPPORTUNITY SPACE because it represents the dynamically developing palette of opportunities in which designers, developers, programmers, engineers and entrepreneurs – in which ALL OF US – are operating. It’s the biggest and best artist’s palette that ever existed!

So what is Digital Media?

From the 1960s onwards, this last half-century has seen our media landscape completely transformed from ANALOGUE to DIGITAL media. Networks became digital in the 1960s, the telephone system replaced manual operators with digital computer exchanges over the next twenty years, and the INTERNET grew from half-a-dozen university connections in 1969 to become a worldwide DIGITAL network by the 1980s. MUSIC became digital with the invention of the CD in the early 1980s; RADIO became digital in the 1990s with DAB. 3D Computer Graphics and Computer-Aided Design replaced hand-drawn blueprints. Computer animation gradually replaced cel-animation. Typography, typeface-design and graphic design became digital from the 1970s. Low-cost Desktop Publishing systems were available by the mid-1980s. Video was DIGITAL by 1990, and a decade later we had DVD, Digital-Satellite and Digital-Terrestrial TV. Books became e-books. Photography became DIGITAL by the 1990s. Newspapers went DIGITAL and ONLINE by the mid-1990s. And this year, the foremost living British artist exhibited his new analogue and DIGITAL WORK – paintings, drawings, and video – side by side at the Royal Academy.
Being Digital simply means that images, film, sound and graphic media are generated or processed into NUMBERS – not just numbers, but BINARY numbers – simple strings of 0s and 1s. These numbers mean that DIGITAL media can be COPIED endlessly, accurately – and cheaply. These numbers can be BROADCAST using very small bandwidths. Because they are numbers, media can be COMPRESSED, can be ENCRYPTED, and can be interpreted and re-generated by computers. And I don’t just mean PCs and desktop computers – I mean the computers that are at the heart of every digital media device – every console, player, smart-phone, iPad and Xbox.
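The idea that a digital copy is just a copy of numbers can be sketched in a few lines of Python (the sample values here are invented purely for illustration):

```python
# Four 8-bit samples standing in for any digital media (audio, pixels...)
original = bytes([127, 200, 3, 89])

# 'Being digital': the same samples as a string of 0s and 1s
as_bits = ''.join(f'{b:08b}' for b in original)
print(as_bits)

# A digital copy is bit-for-bit identical -- unlike an analogue dub,
# which degrades a little with every generation
copy = bytes(original)
assert copy == original
```

The same string of bits can be compressed, encrypted or broadcast, because every operation is just arithmetic on numbers.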

Digital Convergence

Nicholas Negroponte: Venn diagram illustrating his theory of digital convergence c1982

Why do we have this amazing eruption of ‘digital new media’ in the last 20 years or so? Well, this is a Venn diagram (named after the mathematician John Venn who in the 1880s specialised in Set Theory). This particular Venn diagram was drawn by Nicholas Negroponte in the early 1980s to illustrate his theory that three major industrial sectors – Media, Telecoms and Computing – all quite separate entities back then, would gradually converge over the following two decades to create a brand new opportunity-space for designers, artists, directors and developers.


Because Negroponte could see that the digital computer would change everything in these big market sectors. 

Some elements of Media were already digital (compact discs carried digital music and sound; videogames and computer games digitised animation and sound), and within a decade, all our familiar modern media – like TV, Film, Records and Radio – would become digital.

And newspapers and books too. Eventually, he predicted, ALL media and ALL telecommunications (wireless, mobile and landline) would be digital, and this ‘convergence’ would create a new media opportunity-space where designers, content developers, storytellers, actors, musicians, game-developers, photographers and film-makers – and of course engineers and programmers – would create brand new media players (like the DVD, the DTV, the iPhone and iPad, the Xbox and the PS2); brand new content (like apps, MMORPGs, ARGs, Social Networks); and brand new ways of communicating with each other (like SMS, instant messaging, Twitter, Skype and mobile phones).

There was a strong economic logic behind all this: digital was better (easy to copy, easy to broadcast, easy to publish) + digital devices just needed a microchip (think of the iPod nano) + digital used less bandwidth (you could have more TV channels, more cellular channels) + digital content could be repurposed (movies for TV, movies for DVD, movies for smart phones) ERGO everyone could make more money…

AND what’s more… digital devices were going to get more powerful year by year, and were GOING TO GET CHEAPER every year or so.


Gordon Moore’s prediction – now known as Moore’s Law: the number of transistors on microprocessors will double every 18 months, while their price remains the same.
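A back-of-the-envelope sketch of what that doubling means, starting from the Intel 4004’s transistor count (the 20-year horizon is illustrative):

```python
# Moore's Law as arithmetic: one doubling every 18 months
transistors_1971 = 2300        # Intel 4004, released 1971
years = 20
doublings = years * 12 / 18    # ~13.3 doublings in 20 years
projected = transistors_1971 * 2 ** doublings
print(f"{projected:,.0f} transistors")   # tens of millions by 1991
```

Compound doubling is why a pocket device today out-computes a 1970s mainframe.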

Digital Media is more than just DVD, Digital TV, websites and DAB.

The Age of Omnimedia

There is one more really important aspect of the Digital Media Opportunity-Space, and this is it:

Through the Web/Net, we now have access to an immense amount of archived media and information – we can already find millions of books, trillions of pictures, thousands of films and videos, maps of the entire world, millions of records, thousands of games. Soon we will be able to find EVERYTHING ONLINE – our entire cultural history – all the contents of every gallery, museum, archive, library – all the schoolbooks we could ever want! All the comics, graphic novels, newspapers, magazines, paintings, drawings, blueprints, animations, photographs, (etc, etc) – EVERYTHING AVAILABLE, TO EVERYONE, ANYWHERE THEY WANT, AT THE TOUCH OF A FINGER-TIP.


And this is why the W3C – the World Wide Web Consortium – is building the tools for the Semantic Web – Web 3.0 – the Web as a giant semantic network, so it’s easier to search, and possible to have a conversation with… The Web/Net will become a repository of information, linked together in ways that make sense both to us and to the search-engines and software web crawlers that index everything that is digital.


And this has several profound implications for everyone, including:

smart software agents to help us do what we want to do

natural language interfaces – talk and gesture to computers and devices

interactive data-visualisation tools

better learning tools

better recommendation engines

better expert systems

more personalised Web/Net

better personal content-management-systems

(more on these later…)

The remainder of this article or manifesto for Visioneca illustrates some of the exciting diversity and range of technologies, content-innovation and creative exploration that inspires us at Visioneca.

‘We must expect great innovations to transform the entire technique of the arts, thereby affecting artistic invention itself and perhaps even bringing about an amazing change in our very notion of art.’

Paul Valéry, ‘The Conquest of Ubiquity’, 1928

Paul Valéry was a French poet and philosopher. This quote appears in the famous essay by Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction (1936). Benjamin’s essay was amongst the first to deal with the idea of seeing (printed) reproductions, as opposed to seeing only original paintings, and is very relevant in the age of universal digital copy and paste.

This article is about those “great innovations to transform the entire technique of the arts”….

David Hockney: iPad Art and Joiner Photographs

With his recent (2012) exhibition at the Royal Academy, David Hockney is confirmed as one of Britain’s leading artists. Over the last 40 years he has been at the forefront of the investigation of vision, drawing and painting, and of the exploration of new visual media – his Joiner pictures, his work on the Quantel Paintbox, his investigation of Camera Lucida drawing, and his use of the iPad as a drawing tool.

David Hockney: iPad drawing c2010

Cubism and Hockney’s Joiner Photo-montages

David Hockney’s Joiner Photographs are very interesting. An extension of his interest in Cubism and ‘simultaneous perspective’ – the combination of several different views of the subject during a short period of time – they date from the 1980s: first grid-like, rectilinear compositions like the portrait above, then later more freeform explorations like The Desk (below). In creating these, Hockney wanted to imbue photography with the temporal quality of drawing – images prepared over time, images that take time to prepare and time to look at – like a drawing.

This kind of exploration – of both how we see and how we create images that invite us to look more carefully – is the signature of the avant garde. It is one of the central drives in Modernist Art, visible from the Impressionists onwards, but especially in the work of early 20th-century artists – the Cubists and the Futurists – and it is coincident with the innovation in motion pictures celebrated at Visioneca…

 web art word-clouds

Since the middle of the last decade, the automatic creation of word-clouds (or tag-clouds) – which statistically represent word occurrences in a text by means of font size and type style – has provided another perspective on typographic communication…

Paragraph about Visioneca transcribed and transmuted by Jonathan Feinberg’s Wordle – a visualisation of the statistics of the text…
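The statistics behind a tag-cloud are just word frequencies mapped onto font sizes – a minimal sketch (the sample text and size range are illustrative, not Wordle’s actual algorithm):

```python
from collections import Counter

text = ("new media is the creative palette of the century "
        "new media converges old media with the computer")
counts = Counter(text.split())

# Map each word's frequency to a font size between 10pt and 40pt
biggest = max(counts.values())
for word, n in counts.most_common(5):
    size = 10 + 30 * n / biggest
    print(f"{word}: {size:.0f}pt")
```

Real word-cloud tools add layout (packing the words without overlap) and filter out common stop-words, but the core idea is this frequency-to-size mapping.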

commercials using Augmented Realities

Augmented Reality is the use of computer animations, data-readouts and other computer-generated material (games, maps, etc) as an overlay on the real world, or over live video of the real world. Such ARs can be seen through a head-mounted monocular, or by means of a smart-phone viewfinder. (Have you tried AR Boomerang? – a free augmented-reality app for iOS…)

Augmented Realities – the superposition of computer-generated animation over our real-world viewpoints using smart phones, ipads or monocular headsets, allows designers to build seamless marriages of CGI and reality or Augmented Realities as another communications medium…

Stop-motion and time-lapse video

commercials and short-form video using stop-motion and time lapse: bravia colours
Stop-motion is one of the oldest animation techniques. Here it involves animating several dozen plasticine rabbits, shooting a frame of video, then animating every rabbit again… and so on. It is used against a real-world city-centre backdrop, so the time-lapse movement of people and cars adds to the stop-motion animation (or clay-mation) of the coloured rabbits…

Her Morning Elegance by Oren Lavie (2009) – stop-motion, with a behind-the-scenes video

The availability of high-resolution DSLRs and automatic timing has transformed the technique of shooting stop-motion films, but although the technical aspects are easier, such films still need to be driven by a strong idea. These short films by Oren Lavie, Sebastian Armand, and Angela Kohler & Ithyle Griffiths demonstrate that the essence of this kind of photo-realistic animation lies in the original concept, and in the brilliant execution of that idea. The end result is visual poetry…

Sebastian Armand: Elie 2011

Angela Kohler and Ithyle Griffiths: Lost Things

Elapsed-Time (or Time-Lapse) and Stop-Motion were amongst the very first special effects that emerged with the invention of cinematography in the mid-1890s. Both are ‘in-camera’ effects. Stop-Motion is created by manually or automatically controlling the shutter exposures while moving the subject between exposures. Animators use this technique to animate otherwise inanimate objects – plasticine/clay models (clay-mation, like Wallace and Gromit), or simply garden tools and human actors, as in Norman McLaren’s brilliant Neighbours (1952). McLaren was an avant garde innovator in mixing live-action, stop-motion and elapsed-time.

Neighbours by Norman McLaren

Elapsed-Time is the staccato exposure of a live-action scene, so that only a limited number of frames per second are exposed (say 1 frame/sec); the exposed footage is then played back at normal speed (30 frames/sec)…
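The arithmetic of the speed-up is simply the ratio of playback rate to capture rate – a quick sketch:

```python
# Time-lapse speed-up: ratio of playback rate to capture rate
capture_fps = 1          # one frame exposed per second
playback_fps = 30        # played back at normal video speed
speed_up = playback_fps / capture_fps
print(speed_up)          # 30x faster than life

# So an hour of street life compresses to two minutes on screen
shoot_minutes = 60
screen_seconds = shoot_minutes * 60 / speed_up
print(screen_seconds)    # 120.0
```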

This technique is used extensively in one of my favourite art films: Godfrey Reggio’s Koyaanisqatsi.


Street Theatre: Royal de Luxe: The Sultan & the Little Girl (2006) – The Sultan’s Elephant

Animatronics (mechanically and electronically powered puppets) were pioneered by film special-effects wizards like Ray Harryhausen and Jim Henson; here they are the means of animating giant (60′) puppets. Royal de Luxe specialises in spectacular city-wide street theatre using music, animatronic puppetry, and strong location-specific narratives… (see also The Little Girl)

Stelarc: Cyborg Exo-Skeleton

Animatronic techniques are used here to control an exo-skeleton (external skeleton) of mechanical limbs that amplify the artist’s muscular movements… Stelarc is the nom d’artiste of the Australian Stelios Arcadiou – a professor at Brunel University, and a great cybernetics innovator and cyber-performance artist.


Cybernetics is the science of control and communication in animals and machines – a discipline created by the mathematician Norbert Wiener, in a book published in 1948.

Watch What is Cybernetics?… – a six-minute introduction to the work of Wiener, Bateson, Ross Ashby, Stafford Beer and other pioneers of Cybernetics.

I like cybernetics because it is a multi-disciplinary research area – as applicable to biology, ecology, business, real-world systems like factories, telecommunications networks, administration and city-planning as it is to computing – and games! (For example, one of the first ‘God Games’ was Will Wright’s Sim City (1989) – a game based on the System Dynamics algorithms developed by the cybernetics and computer-science pioneer Jay Forrester back in the early 1960s.)

An algorithm is a step-by-step procedure for solving a problem – a finite list of instructions that can form the basis of a computer program.

Will Wright: Sim City, 1989

Sim City is a game where you are in control of an entire city – you can build new houses and schools, transport systems, factories and parks – but if you don’t get it right, the Sims – the citizens of Sim City – start complaining, demonstrating and even rioting…

Sim City is based on cybernetics, which is about how complex systems work – how they use feedback and sensors to control their own operations. Just as your body shivers if it’s cold, or perspires if it’s hot: the body is a cybernetic system designed to maintain itself at optimum performance.
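That negative-feedback loop – sense, compare, correct – can be sketched in a few lines (the set-point and gain here are illustrative, like a toy thermostat):

```python
# A thermostat as a minimal cybernetic system: sense, compare, correct
target = 37.0          # the body's set-point, in degrees C
temperature = 34.0     # starting off too cold

for step in range(10):
    error = target - temperature       # feedback: how far off are we?
    temperature += 0.5 * error         # correction proportional to error

print(round(temperature, 2))           # converges towards the set-point
```

The same loop structure – sensor, comparator, actuator – underlies everything from the body’s homeostasis to Forrester’s System Dynamics models.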

Another field related to both animatronics and cybernetics is Robotics. Robotics is a branch of Artificial Intelligence – itself a scientific discipline that emerged at around the same time as Cybernetics.

Robotics and Affective Robotics

Rodney Brooks and Cynthia Breazeal: Leonardo Affective Robot c2009 – see also Breazeal’s earlier robot, Kismet

Affective Robotics is an area of research that explores how to make robots seem more human. Cynthia Breazeal, working with Rodney Brooks at MIT, is one of the leading experts in this field. These videos can be quite disturbing, verging on the psychological unease that the Japanese roboticist Masahiro Mori calls ‘the uncanny valley’ – when the representation of a real person by a robot is almost, but not quite(!), convincing… The Japanese lifelike robot nurse is an example of how uncanny the valley can feel…

Kokoro: lifelike robot care-worker c2009

Masahiro Mori: The Uncanny Valley – if a robot is lifelike, but not lifelike enough… it’s creepy!

AI, Artificial Life, Smart Agents and Synthespians

Related very directly to Robotics, another spin-off of Artificial Intelligence (AI) is Artificial Life – the developing attempt to model or simulate natural growth in a computer.

AI began in the late 1940s as a field of study that encompassed a lot of separate disciplines, such as:

Chess game playing

Encryption decryption

Turing Test

Natural language interface

Speech recognition

Speech synthesis

PGP personal encryption

Digital signatures

Language translation

Pattern recognition

Machine vision


Expert Systems

Artificial Intelligence

By the way, the study of Artificial Intelligence (or Machine Intelligence) grew out of the work of the founder of modern computing, Alan Turing. (You remember: he was the English mathematician who conceived the theory of computable numbers and the Universal Computing Machine – the Turing Machine – back in 1936.) After WW2 – and after his brilliant work decoding German military signals and building one of the first digital computers – Turing suggested that eventually we might make computers that were as intelligent as human beings. He proposed a test that could be used to determine whether a machine was intelligent – it became known as the Turing Test: if a human could hold a conversation with another human and with a machine, and could not tell which was which, then the machine would have passed the Turing Test, and could be deemed intelligent.

So ever since 1966, when Joseph Weizenbaum wrote a chatterbot called Eliza, there have been various attempts to write a software intelligence (an AI) that can hold a conversation just like a human being – versions of Weizenbaum’s Eliza chatterbot are still running on the web.
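Eliza worked by matching keyword patterns in the user’s input and reflecting them back as questions – a toy sketch of the idea (the patterns below are invented for illustration, not Weizenbaum’s originals):

```python
import re

# A toy Eliza-style chatterbot: match a keyword pattern, reflect it back
RULES = [
    (r'\bI am (.*)', "Why do you say you are {0}?"),
    (r'\bI feel (.*)', "Tell me more about feeling {0}."),
    (r'\bmy (.*)', "Your {0}?"),
]

def reply(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    return "Please go on."   # the classic fallback

print(reply("I am worried about my work"))
```

There is no understanding here at all – just pattern-matching and substitution – which is exactly why Weizenbaum was unsettled by how readily people confided in it.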

There are lots of other chatterbots, dialogue systems, talk-bots, artificial conversational entities, and even personality constructs on the web. The first one that really impressed me was by a Southampton AI expert who had written a personality construct of Bob Dylan. There were lots of video clips of Bob Dylan in conversation – a chatterbot based on Dylan’s biography and lyrics – so you could type in your side of the conversation and receive a relevant reply… it was fun. We did one with Jarvis Cocker when I was a creative director at AMX Studios in the mid-1990s. These personality constructs could be specially shot for interactive devices like CD-ROM and the Web – we did one with Ryan Giggs for our set of CD-ROMs for Manchester United, too.

Of course, with voice-recognition and speech synthesis, we can now explore more ‘natural language’ chatterbots – ones we can talk to naturally and listen to as they reply… Verbot (verbal-bot) is one of these – and they offer free editing and knowledge-base kits so you can build your own chatterbots…

There are many other chatterbots online – you can even chat with Captain Kirk.

OK so they haven’t passed the Turing test yet – but you can see that IF the chatterbot had a big enough knowledge-base (read: ‘cultural intelligence’) AND a clever-enough bot-engine that could sensibly parse (analyse) your dialogue, then you could get close. The semantic web will encourage this kind of development, as will Apple’s Siri software agent (‘personal voice assistant’) on the iPhone.

Apple: Knowledge Navigator 1987

John Sculley and Doris Mitsch: The Knowledge Navigator – a virtual (video) prototype from 1987

Apple’s Siri has its roots in a ‘virtual prototype’ – a video made by Apple in 1987 that described the possibility of a personal digital assistant, or ‘Knowledge Navigator’.

Apple produced several short videos around this time – videos that showed products that didn’t exist, but were on the cusp of being invented.

Many of the ideas in Knowledge Navigator and Futureshock have already emerged in the web-enabled, computer-mediated world of the 21st century – we have the hyperlinked World Wide Web (operating from 1991-2), multi-touch (1990s), voice-recognition (1990s), speech synthesis (Vocoder, 1939), online video links (Skype, 2003), integrated diary, contacts etc (iPad, 2010), geographical information systems (Google Earth, 2006), the online encyclopedia (Wikipedia, 2001), and now Siri – the personal digital assistant. So this was quite a prescient video!

Apple Computer: Futureshock 1987

Expert Systems

(from Feigenbaum: Dendral 1966)

Another of the many interesting spin-offs from AI is Expert Systems – databases in which we try to store all the human knowledge (all the human expertise) on a particular subject, then build smart front-ends so we can ask the Expert System questions about its subject. Expert Systems have the ability to ‘reason’ about their body of knowledge – using algorithms called ‘inference engines’ they can take a set of data (say, a patient’s symptoms) and deliver a reasoned result – like a diagnosis. Invented in the 1960s by Edward Feigenbaum and others, Expert Systems have been used in pharmaceutical development, oil exploration, and many other specialist subject areas. The idea I find most interesting is embodying the knowledge of thousands of doctors and health-workers in a Barefoot Doctor expert system, putting this on a laptop or iPad, and giving it to health workers all over the world. Or a personal carbon-profile expert system that has all the data about your travel, entertainment, home and other energy uses, and can suggest ways of saving energy and lowering your carbon footprint…
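The inference-engine idea can be sketched as forward chaining: match known facts against rules until no new conclusions fire (the medical rules below are invented purely for illustration, not real diagnostics):

```python
# A toy forward-chaining inference engine: a rule fires when all of its
# conditions are present in the fact base (illustrative rules only)
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"possible flu", "fatigue"}, "recommend rest and fluids"),
]

facts = {"fever", "cough", "fatigue"}

changed = True
while changed:                      # keep firing rules until nothing new
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
```

Note how the third rule chains off the conclusion of the first – that chaining, over thousands of rules, is what made systems like Dendral and its successors useful.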

The Semantic Web promises a better performance from Expert Systems, because, with semantic-network search engines, the input-parsing, input analysis, inference-engine and search and retrieve aspects of the Expert System will all be enhanced.

Software Agents and Virtual Robotics

Massive Prime

Massive are the autonomous-agent software developers originally from New Zealand. Their first big success was their work on the battle scenes in Peter Jackson’s Lord of the Rings. They have developed into a leading world supplier of autonomous-agent software – a kit of parts for capturing motion, applying that motion to 3D CGI puppets (software robots), cladding those robots photo-realistically, and programming the ‘brains’ of these agents – so that Massive Prime can simulate large crowd scenes and battles, and choreograph commercials, very successfully and realistically.

Massive – Prime Autonomous Agent Software

Massive Prime – Autonomous Agent Software – the ‘Brain’ parameters

For example, each agent in a Massive Prime simulation has a simple set of rules that can be programmed into its simple ‘brain’ – rules like advance towards the enemy, avoid getting in the way of your allies, attack the enemy. Having a battle scene made up of independent agents gives a far more realistic effect. These autonomous software agents can, to some extent, think for themselves – they are software robots… (See also the Lynx Effect ‘Billions of Girls’ commercial.)

Artificial Life (A-Life)

Craig Reynolds: Flocking algorithm

In 1986, the programmer Craig Reynolds wrote a computer program that attempted to simulate the behaviour of birds as they flock together. In doing so he solved a problem that had been puzzling ornithologists for years – what conditions determined how birds flock? Reynolds came up with three simple rules:

avoid bumping into your neighbours

fly in the same direction as your neighbours

steer towards the average position of neighbours

When he applied these rules to some simple computer ‘birds’ that he called boids, the flocking behaviour seemed to replicate that of real birds flocking (or fish shoaling…). Simple rules can generate very complex behaviour!
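Reynolds’s three rules can be sketched in a few lines – each boid nudges its velocity by its neighbours’ positions and headings (a toy 2-D version using complex numbers as vectors; the weights are illustrative, not Reynolds’s values):

```python
# A toy flock of 'boids': complex numbers stand in for 2-D vectors
import random

random.seed(1)
N = 20
pos = [complex(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
vel = [complex(1, 0)] * N              # everyone starts flying east

def step():
    global pos, vel
    new_vel = []
    for i in range(N):
        others = [j for j in range(N) if j != i]
        centre = sum(pos[j] for j in others) / (N - 1)
        heading = sum(vel[j] for j in others) / (N - 1)
        v = vel[i]
        v += 0.01 * (centre - pos[i])    # rule 3: steer towards the average position
        v += 0.10 * (heading - vel[i])   # rule 2: fly in the same direction as neighbours
        for j in others:                 # rule 1: avoid bumping into close neighbours
            if abs(pos[j] - pos[i]) < 1.0:
                v -= 0.05 * (pos[j] - pos[i])
        new_vel.append(v)
    vel = new_vel
    pos = [p + v for p, v in zip(pos, vel)]

for _ in range(50):
    step()
print(sum(pos) / N)                      # the flock drifts east, together
```

No boid is told about the flock as a whole – the coherent group motion emerges entirely from the three local rules.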

Richard Dawkins: Biomorphs, 1986

Also in 1986, Richard Dawkins wrote a Mac program that evolves a single pixel into a branching stick-shape, and then into literally millions of permutations. Dawkins built a biomorph with just 9 genes that could be adjusted by random numbers; in each generation his biomorph would grow a limb, like a stick insect. The program relies on you, the user, to select the biomorph you like; it then automatically generates a family of offspring, each with small random variations in its ‘gene-pool’. Biomorphs are an elegant demonstration of Darwinian evolution by (in this case, artificial) selection, and an illustration of the fact that complex lifeforms can be generated by simple rules…
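The mechanism can be sketched as: a genome of nine numbers, a brood of mutant offspring, and a human picking the survivor each generation (the gene encoding and mutation size below are illustrative, not Dawkins’s actual code):

```python
import random

random.seed(42)

# A biomorph genome: nine genes, as in Dawkins's 1986 program
parent = [0] * 9

def breed(genome, brood_size=8):
    """Each offspring copies the parent with one small random mutation."""
    offspring = []
    for _ in range(brood_size):
        child = list(genome)
        gene = random.randrange(9)
        child[gene] += random.choice([-1, 1])
        offspring.append(child)
    return offspring

# In Dawkins's program a human eye picks the favourite each generation;
# here we stand in for the eye and simply pick the first child, 20 times
for generation in range(20):
    parent = breed(parent)[0]

print(parent)   # a genome 20 single-step mutations away from all zeros
```

In the real program each genome is drawn as a recursive branching figure, so the user selects by appearance – cumulative selection does the rest.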

John Horton Conway: The Game of Life – a cellular automaton

This field of study (A-Life) had been a recurring theme in the work of the Manhattan Project mathematician and computer pioneer John von Neumann, who described a theory of self-replicating machines in the 1940s. In 1970, another famous mathematician, the Liverpudlian John Horton Conway, created a simple two-state cellular automaton called the Game of Life. This ‘game’ is played on a computer using a cellular grid. It has four rules:

1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.

2. Any live cell with two or three live neighbours lives on to the next generation.

3. Any live cell with more than three live neighbours dies, as if by overcrowding.

4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Life is a zero-player game: you just create the initial conditions, then observe how they evolve. The Game of Life is further proof that great complexity can arise from a set of simple rules.
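Conway’s four rules reduce to a few lines of code – here is a sketch run on a ‘blinker’, a three-cell pattern that oscillates with period two:

```python
from collections import Counter

def step(live):
    """Apply Conway's four rules to a set of live (x, y) cells."""
    # Count the live neighbours of every cell bordering a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A 'blinker': three cells in a row, flipping orientation each generation
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker)))           # the row flips orientation
assert step(step(blinker)) == blinker  # and flips back: period two
```

Notice that the four rules collapse into one line: a cell is alive next generation if it has exactly three live neighbours, or two and is already alive.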

So it was in the mid-1980s that the research sector of Artificial Life (A-Life) and the idea of Genetic Algorithms (GAs) emerged. Just over a decade later, A-Life delivered a worldwide craze in the form of the Tamagotchi, designed by Aki Maita in 1996.

Karl Sims: genetic algorithms

Another aspect of A-Life is the development of computer programs that are able to replicate themselves with slight random variations, and then be tested for fitness at a given task. These programs are called Genetic Algorithms – they are bred together to find the best offspring program for a particular task. The human programmer is the ‘gardener’ of these offspring!
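A minimal genetic algorithm – breed, mutate, select – can be sketched on a classic toy problem, evolving a bit-string towards all ones (this is the standard textbook exercise, not Sims’s actual system):

```python
import random

random.seed(0)
LENGTH = 20                      # genome: 20 bits; the task is 'all ones'

def fitness(genome):
    return sum(genome)           # count the 1s

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]     # splice two parent genomes together

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # the fittest survive...
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children              # ...and breed

print(fitness(max(population, key=fitness)))     # close to the maximum of 20
```

The fitness function is the ‘gardener’: swap in a different measure of fitness and the same breeding loop will evolve solutions to a quite different task.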

The birth of the Robot, and the Satire of Automation

Inklings of robotic automation began in the fiction of the early 1920s – Karel Čapek’s famous play Rossum’s Universal Robots (1921) gave us the word ‘Robot’. A decade or so later Charlie Chaplin – at that time an international star, and the greatest comedy actor in the world – directed the feature-length Modern Times, a satire of automation and a comment on the apparent irrelevance of Man in an Age of Machines…

Charlie Chaplin, directing and starring in Modern Times, 1936

Charlie Chaplin: Modern Times 1936

This is Chaplin’s hilarious commentary on assembly-line mechanisation (automation) in the 1930s, from Modern Times (1936).

It can be easy to forget that film-makers in the 1930s were just as concerned with the state of the world as are film-makers now. This sequence from Chaplin’s Modern Times illustrates the satirical nature of his observation of the ordinary man – Chaplin himself – in the newly mechanised assembly-line factories. “Capitalism doesn’t need people” as they say, as Chaplin himself illustrates.

Dystopian Vision of Robotics: Fritz Lang’s Metropolis (1927)

In this synthesis of robotics, cyborgs and Frankenstein, the scientist Rotwang transforms his robot into the likeness of the woman Maria…

The robotic future as illustrated by Fritz Lang in Metropolis (1927), with Eugen Schufftan’s brilliant model technique, in which he used mirrors to ‘place’ live-action human figures inside miniature set models. This ‘Schufftan Process’ was a breakthrough in special effects at the time, and influenced directors like Alfred Hitchcock (in Blackmail, 1929).

Fritz Lang + Eugen Schufftan: Metropolis 1927

Who was special effects director on Metropolis?

Marshall McLuhan and The Medium is the Message – lecture by McLuhan

The Medium is the Message

Marshall McLuhan was a Canadian scholar and philosopher whose work plays an important part in contemporary media theory. His books The Gutenberg Galaxy, Understanding Media and The Medium is the Massage offer important insights into print and electronic media and their effect on us. Many people have been confused by McLuhan’s famous and pithy statement ‘The Medium is the Message’ – he explains that what you personally say on the telephone is unimportant compared to the fact of the telephone call itself. It’s the medium – the telephone, the television, the printing press (etc) – that makes the difference, not the content of the medium.

Global Village

McLuhan argues that new media alter the ratio of sensory awareness that we have. The age of print, he says, conditioned us to reading linear information, to investigating the world in a step-by-step logical approach – a cause-and-effect approach that in turn created the scientific method – and in many other ways (the personal perspective of the reader) conditioned our senses to become visual. The new electronic media act like an extension of our central nervous system around the world, creating a Global Village:

“It is a principal aspect of the electric age that it establishes a global network that has much of the character of our central nervous system. Our central nervous system is not merely an electric network, but it constitutes a single unified field of experience….”

(McLuhan wrote this in Understanding Media in 1964 – five years before the first DARPA experiments to make an internetwork, and twenty-five years before Tim Berners-Lee had the idea that led to the World Wide Web….)

Read McLuhan’s The Gutenberg Galaxy and Understanding Media for more on the electronic media, and Elizabeth Eisenstein for more on the effects of the Print revolution starting in 1450 with Gutenberg’s printing press.

“The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”

(Marshall McLuhan The Gutenberg Galaxy 1962)

“Men are suddenly nomadic gatherers of knowledge, nomadic as never before, free from fragmentary specialism as never before – but also involved in the total social process as never before; since with electricity we extend our central nervous system globally, instantly interrelating every human experience.”

Marshall McLuhan: Understanding Media 1964

Of course, around the 1950s and 1960s, McLuhan wasn’t the only writer who was examining this change of consciousness created by the electronic media. The new-wave science-fiction writers were actively exploring this too: especially Philip K. Dick, whose stories became the films Blade Runner, Total Recall, Screamers, Minority Report, A Scanner Darkly, etc…

Philip K. Dick: A Scanner Darkly 1977

Rotoscope Animation + Live Action films

Richard Linklater: A Scanner Darkly 2006

This 2006 feature by Richard Linklater is based on Philip K Dick’s novel of the same name, and combines live-action cinematography with interpolated rotoscoping – a live action film that looks like animation.

Who invented Rotoscoping?

Marjane Satrapi and Vincent Partonnaud: Persepolis 2007

A prize-winning animated feature based on Satrapi’s autobiographical graphic novel, set in the Iranian Islamic Revolution of 1979. The film uses different styles of animation for the back-story and ‘present day’ parts of the story…

Marjane Satrapi + Vincent Partonnaud: Persepolis 2007

Ari Folman: Waltz With Bashir 2008

Waltz with Bashir 2008

Ari Folman’s multi-award-winning animated documentary about his personal responses to the Israel-Lebanon war of 1982. Though it has a similar look to rotoscoping, it is actually made with a combination of Flash, cel and cutout paper animation, and was praised for its innovative technique…

promo video

Jonas Åkerlund + Lady Gaga: Telephone 2010

Since the early 1980s, the pop promo video has become an important commercial genre in its own right, and its short-form format (usually 3-5 minutes) has encouraged a wide range of directorial innovation. Lady Gaga’s 2010 HD Telephone references Quentin Tarantino’s Kill Bill and Pulp Fiction, the Adam West Batman television series, and Caged Heat, the 1970s women-in-prison movie (and lots more…). Carried by great choreography, grunge sets and fabulous costumes, vaguely narrative lyrics and Gaga’s music (as well as frequent product placement), Gaga brings European art-movie consciousness and Pop Art to this Jonas Åkerlund promo.

The First pop promo?

In my own reckoning, this opening clip from Don’t Look Back – D.A. Pennebaker’s 1967 documentary on Bob Dylan’s UK tour of 1965 – is the first clip that qualifies as a pop-promo. OK, it wasn’t used as a promotional tool at the time, but it has the signature of a pop promo. It’s centred on the song. It’s geared very precisely to its cult audience – it has the poet Allen Ginsberg in the background to appeal to the post-Beatnik generation, and Dylan himself is laconic, almost bored – coolly aloof…

1985 Godley and Creme: Cry

Godley and Creme: Cry 1985

This Godley and Creme video from 1985, Cry (the duo came from 10cc, a successful MOR band), uses continuous film lap-dissolves with carefully registered eyes to create an effect that was to become a digital motif called ‘morphing’ from the early 1990s. Video effects just before the digital revolution of MPEG and MPEG-2 in the 1990s used analogue techniques – in-camera effects, post-production overprints and lap-dissolves – now wholly replaced by digital post.

Morphing as a transition effect

500 Years of Female Portraits in Western Art by Philip Scott Johnson (2007)

Women in Film by Philip Scott Johnson

Morphing is a transition technique that uses the computer to interpolate an animation between two images. It has been available as a free or low-cost download since the early 1990s, and is another tool for turning stills into movies…
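At its simplest, the interpolation at the heart of a morph is a weighted blend between corresponding pixels of two images (a full morph also warps geometry between matched feature points; a pure blend is just a cross-dissolve). This is a minimal sketch, assuming tiny greyscale “images” represented as lists of pixel rows:

```python
# A minimal sketch of morph-style interpolation: a linear cross-dissolve
# between two same-sized greyscale "images" (lists of pixel rows, 0-255).
# A real morph also warps geometry; this shows only the blending step.

def dissolve(img_a, img_b, t):
    """Blend two images; t=0.0 gives img_a, t=1.0 gives img_b."""
    return [
        [round((1 - t) * a + t * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

def tween(img_a, img_b, frames):
    """Generate all the in-between frames of the transition."""
    return [dissolve(img_a, img_b, i / (frames - 1)) for i in range(frames)]

black = [[0, 0], [0, 0]]
white = [[255, 255], [255, 255]]
for frame in tween(black, white, 5):
    print(frame)
```

Each frame simply moves the pixel values a step further from the first image towards the second – the computer is doing, per pixel, what the lap-dissolve did photochemically.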

This sequence of morphs was nominated for Most Creative Video in the YouTube Awards of that year.


What is a lap-dissolve?

Film Special Effects and Virtual Cinematography

The use of digital editing and post-production compositing began in the early 1980s with movies like Tron and The Terminator, followed over the next 20 years by the development of hyper-realistic CGI (computer-generated imagery). Over this period the world’s CGI programmers collectively created the algorithms for representing skin, reflections, hair and fur, human motion, facial expression, explosions, clouds and smoke – and in the 1990s, effects that revealed the invisible: ultra-slow motion, microscopic and telescopic vision. SFX/VFX (post-production) is now a major sector in the movie business, and London is an important centre of digital special effects, computer animation and other post-production techniques…

Steven Lisberger: Tron 1982

A decade before the digital revolution in cinema, this was the first attempt to create a feature-length movie using largely computer-generated wireframe 3D and green-screen chromakey matting. Tron was made at the birth of the idea of cyberspace – the first Multi-User Dungeon (virtual world, or MUD), the Essex MUD, was created by Richard Bartle and Roy Trubshaw at Essex University in 1978, Vernor Vinge’s novella True Names was published in 1981, and William Gibson’s novel Neuromancer was published in 1984. This was also the period of the birth of the sci-fi genre known as cyberpunk.

This is one of many storyboard frames that Lisberger had prepared for the 1982 Tron – beautiful atmospheric images in this first movie to illustrate the idea of cyberspace. These lively sketches look like they were drawn over test frames of Jeff Bridges in costume, and accurately convey the flavour of the final optically-composited look of the finished film. Despite its obvious shortcomings compared to CGI now, Tron was a box-office success and prepared the way for more computer animation – leading thirteen years later to John Lasseter’s Toy Story, the first completely computer-generated feature.

What is Cyberspace? What does CGI stand for? (1982)

Wachowski Brothers: The Matrix 1999

John Gaeta: multi-camera rig for bullet-time sequence in green-screen studio

With the release of the Wachowski Brothers’ iconic-mythic The Matrix in the late 1990s, digital film-makers were discussing the likely future of HD and the possibilities of combining motion-capture, motion-control, 3D CGI, photogrammetry, extensive matting and digital compositing to create ‘virtual cinematography’. In The Matrix, the Wachowskis and their special-effects genius John Gaeta combined these techniques – most famously in the bullet-time sequences – to illustrate how film and computing could fuse together to create brilliant cine magic.

What does ‘virtual’ cinematography mean?

International Standards in the Creative Process

International Standards are developed by expert committees, and these committees often give their name to the standard that they develop, publish and maintain – so MPEG, the video compression format, was named after the Moving Picture Experts Group, JPEG after the Joint Photographic Experts Group, and so on. HTML – the code and standard that underpins the Web – is different: it stands for HyperText Markup Language, and was developed first by Tim Berners-Lee (over the period 1989-1992). Now it’s safeguarded and developed by the W3C – the World Wide Web Consortium (3 Ws and a C!). The latest version of HTML is HTML 5.0, and built into the 5.0 standard are a number of data layers (functions and attributes) that enable coders, developers and designers to create new conjunctions and new types of media.

Initially, HTML had to carry only information regarding the type, position and content of the text, headers, images and colours of a web-page. As you can see, with HTML 5.0 the spec has grown, so that HTML 5.0 has become a multimedia, multi-screen, database-ready standard (that’s the SQL bit – Structured Query Language is the code that is used to operate a database). So now Chris Milk and his colleagues at Google can author a movie that uses these HTML 5.0 characteristics…. Innovation is built into International Standards, and understanding the potential of these standards is a mainstream route to innovation.
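To make the SQL mention concrete, here is a tiny illustrative sketch using the sqlite3 database engine built into Python; the table and column names are invented for the example:

```python
# A tiny illustration of Structured Query Language (SQL) "operating a
# database", using Python's built-in sqlite3 module. The table and its
# contents are invented purely for the example.
import sqlite3

conn = sqlite3.connect(":memory:")   # a throwaway in-memory database
conn.execute("CREATE TABLE media (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO media VALUES (?, ?)",
    [("Metropolis", 1927), ("Modern Times", 1936), ("Tron", 1982)],
)

# An SQL query: every title released before 1940, oldest first
rows = conn.execute(
    "SELECT title FROM media WHERE year < 1940 ORDER BY year"
).fetchall()
print(rows)   # -> [('Metropolis',), ('Modern Times',)]
```

The same SELECT/WHERE grammar is what a web-page’s back-end speaks to its database when HTML 5.0’s data features come into play.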

Artists and researchers exploit these standards to see what they will do in terms of narrative, creative story-telling or imaging. In the late 1990s, web-art groups exploited the then-latest generation of HTML to create works of ‘web-art’ by creatively experimenting with code.

source code for

The source code – the code playfully echoes the graphics of the displayed page. Coding is the glue that defines, packages, displays and presents digital media – here it is playing with ASCII characters, the character set of programming that includes the letters and digits as well as all the non-alphanumeric symbols used in code-writing.

Sergey Mavrody: HTML5 spec, from HTML5.0 and CSS Quick Reference by S. Mavrody (2011)

Mavrody’s brilliant information-graphic shows the evolution of HTML 5.0 and its ‘taxonomy’ – the classification of its parts. You can see that this is a multimedia markup language that can glue images, sounds, video, multiple windows, databases, typefaces, geometry, geo-location, personal data – and much more – into a coherent site or online event.

Open-Source Software

Free Art and Technology

There are plenty of tools available online. Many of them are free – why? Because the internet was originally an academic network, built, maintained and used by coders, researchers, teachers, engineers and scholars who wanted to spread the word, share information and foster a global free-access, non-commercial network. These ideas were later cemented into the Open Source or Free Software movement (from about 1998). Open-source means that:

a programmer or group of programmers builds a useful program (like an operating system, e.g. Linux, or a web server, e.g. Apache), then posts this code for free on the internet. Anyone can download and use it – for free – with the only condition being that if they add useful bits to it, or improve it in some way, they must post the source-code for these new bits back into the open-source project. Some people have forecast that open-source is a model that can’t be beaten in the marketplace…

Open-source software is similar in essence to International Standards for software, compression, text etc…

Media Innovation

Just a reminder that media-technology innovation runs hand-in-hand with media-content innovation, and both are often driven by the opportunities presented in new international standards. This kind of innovation has developed over a couple of hundred years (or even more), beginning in modern times with the wave of innovation surrounding the invention of cinematography in 1895.

For example, one of the first strands of innovation was experimentation with 360-degree panoramas in the early 1900s.

Immersive Experience

One of the main strands of media technology innovation over the last two centuries is the aim to create an immersive mediated experience for viewers and, more recently, for users or players. The history of immersive art-experience goes back to the theatre-in-the-round of ancient Greece, and more recently to the Magic Lantern shows of the 18th and 19th centuries and the Dioramas and Panoramas of the early 19th century – such as Daguerre’s Dioramas of the 1820s – that used large painted canvases, each picturing a battle-scene, a famous city-centre or a renowned landscape. In the early 20th century, several attempts were made to create photographic panoramas:

Raoul Grimoin-Sanson: Cineorama (1900)

Grimoin-Sanson’s Cineorama was built for the 1900 Paris World’s Fair, and consisted of ten synchronised 70mm film projectors, projecting images onto ten 9×9-metre screens arranged in a full 360-degree circle around a viewing platform. The ‘user-illusion’ or simulation was that of the audience flying in a giant balloon over Paris, looking at the city from the air…


Cineorama Projection apparatus c1900

Lumiere Bros: Photorama 1900

Lumiere Bros: Photorama 1900

The inventors of cinematography, the French Lumière brothers, also had a stake in immersive technology – in ‘surround-vision’, or what they called Photorama. As you can see, their system involved a rig of 12 cameras, projecting into special Photorama Lumière viewing theatres with the audience again sitting in the middle. These were of course multiple still images being projected, with 360-degree panoramas made of hundreds of 8.7×63 cm images…

Immersive Cinema in the 1950s and 1960s…

The Russian (USSR) Kinopanorama of 1959
Kinopanorama multi-screen panoramic screens
Entrance to the London Circlorama early 1960s
 Circlorama Projection System

More recently in the 1960s, the experimental film pioneer Stan Vanderbeek created his own inflatable canvas dome at his home in New York State, and developed multiple projection systems to immerse his friends and his audiences in moving pictures:

Experimental film pioneer Stan Vanderbeek with his Moviedome Studio in New York State, 1962

Stan Vanderbeek – multiple-projections in the Moviedome c1963

Vanderbeek made several famous films around this time, including Science Friction (1959) – a short film that greatly influenced the young Terry Gilliam.

Vanderbeek was a real innovator, integrating images from oscilloscopes and other early ‘computer-like’ imagery, as well as video feedback.

This short 1972 documentary by John Musilli gives some idea of his range of ingenuity, and his insight into the computer as a new tool for artists…

Stan Vanderbeek: The Computer Generation part 1

Vanderbeek demonstrates the early ‘Layers’ drawing program designed by Jim Taggart at MIT, following Ivan Sutherland’s famous Sketchpad (1963). Layers is a tremendously early insight into how digital paint and compositing programs would work…Vanderbeek gets this straight away…

Sistine Chapel 360 QTVR

A more recent experiment in immersive imagery began in the early 1990s with the brilliant invention of QuickTime Virtual Reality (QTVR) by the Apple Advanced Technology Group. This clever software stitched together individual photographs to make a seamless 360-degree panorama, and ‘played’ it interactively and in realtime as the user moved a mouse… Furthermore, hotspots could be implanted in a QTVR scene, so that the user could click from one QTVR to another, or to a linked website… You could build a matrix of linked QTVRs – a virtual maze of interconnected, photographically ‘real’ 3D environments…

AES+F: The Last Riot

AES+F: Last Riot (Venice Biennale 2007)

This Russian art group produced The Last Riot – a multi-screen film for the 2007 Venice Biennale. It was a spectacular event, with three large projection screens synchronised to create a wrap-around cinematic screening with an immersive, Wagnerian-style music track. The live-action ‘actors’ are like Benetton fashion models performing iconographic, mythical actions, superimposed or composited within a CGI-generated scene. It is an impressive illustration of how ‘monumental’ and emotionally moving new media art can be.

AES+F The Last Riot 2007

I love these grand ‘canvases’ of AES+F. They hark back to the painted panoramas of Daguerre in the 1820s and are narrative pictures like William Powell Frith’s Derby Day (1858), yet they use computer-graphics animation, composited with advertising-style presentations of young actors, models and athletes, and a ‘non-linear’ narrative delivered in stylised loops of action – all to a Wagnerian music track… The multiple screens reinforce the immersive experience.

Photosynth network image location

Microsoft Live Labs: Photosynth experiments (2006)

Photosynth is now a free app for iOS devices like the iPad and iPhone, but it began as a research project at Microsoft. The original aim was to devise a version of the WWW that used everyone’s snapshots, tagged with their GPS position, to recreate 3D views of the world – like a web of interlinked photographs.

PhotoSynth R&D

theatrical staging and stage-lighting design

Pet Shop Boys and Javier De Frutos: The Most Incredible Thing 2012

This is a most brilliant example of how to integrate dance, music, choreography, costume design and new media in a stage-performance video. A short story by Hans Christian Andersen, choreographed by Javier De Frutos in collaboration with the Pet Shop Boys – Chris Lowe and Neil Tennant. Full-length version!

Terry Gilliam: Berlioz Damnation of Faust

Terry Gilliam: Berlioz Damnation of Faust Trailer

Gilliam’s staging of Berlioz’s Damnation of Faust had very mixed reviews (mainly because it set the story in a Brechtian Weimar Republic during the rise of National Socialism), but from the perspective of a multimedia story-teller the staging was a delight, mixing pre-recorded film clips, projections, and both period and abstract stage devices – like the warped-perspective Ames Cube (below).

Web-based Interactive Movies: The Wilderness Downtown

Chris Milk and his developer team have created a kind of showcase movie that illustrates some of the capabilities of HTML 5.0, cleverly integrating music, Google geo-location data (maps, images and video from Google Earth and Google Maps), realtime computer graphics (the flocking algorithm on the opening screen), CGI superimpositions, live-action pre-recorded video and the dynamic use of multiple browser windows, to create what turns out to be a personal, emotionally affective experimental movie.

What’s really clever about The Wilderness Downtown is that it combines pre-rendered video clips (the child running) with realtime auto-generated computer graphics (birds flocking etc.), AND, importantly, the requirement to customise it for each viewer – the movie collects your childhood home address, then retrieves Google Earth and Maps data to make the film highly personal… This is an exemplar of how digital filmic experiences can develop to embrace, exploit and integrate other elements in the digital media palette. The integrative factor here is code, and the HTML 5.0 standard is a strong open option for multimedia, multi-screen, realtime authoring.

Realtime Augmented Reality

In 2008, a range of Flash and ActionScript tools for constructing and manipulating augmented realities was developed by the open-source community and the Spark Project. Called the FLARToolkit, these tools used a technique called fiducial tracking (using ‘magic’ marker symbols) to trigger AR-style computer-generated overlays on a camera image of the real world. The camera would detect the fiducial marker (like detecting and reading a bar-code), determine its distance, size and orientation, then use it as an anchor for a superimposed 3D graphic, model or animation – in realtime! In other words, using AR you could include the real world in your game, your documentary or – as in Tagged in Motion – in the record of your spatial calligraphy or graffiti.
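The detect-then-anchor step can be sketched in a few lines. This is a much-simplified 2D version, with invented coordinates: given the four corner positions of a detected square marker in the camera image, recover its centre, apparent size and in-plane rotation – the figures needed to place a graphic over it. (Real toolkits such as the FLARToolkit recover full 3D pose; this only shows the idea.)

```python
# Simplified sketch of what fiducial tracking computes from a detected
# square marker: centre, apparent size (a proxy for distance) and
# in-plane rotation. Corner coordinates here are invented for illustration.
import math

def marker_pose(corners):
    """corners: [(x, y), ...] for the 4 corners, top-left first, clockwise."""
    cx = sum(x for x, _ in corners) / 4.0          # centre of the marker
    cy = sum(y for _, y in corners) / 4.0
    (x0, y0), (x1, y1) = corners[0], corners[1]    # the top edge
    size = math.hypot(x1 - x0, y1 - y0)            # apparent edge length
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))  # rotation in degrees
    return (cx, cy), size, angle

# An upright marker, 100 pixels on a side, centred at (150, 150):
centre, size, angle = marker_pose([(100, 100), (200, 100),
                                   (200, 200), (100, 200)])
print(centre, size, angle)   # -> (150.0, 150.0) 100.0 0.0
```

As the marker recedes, `size` shrinks; as it tilts in the image plane, `angle` changes – and the overlaid graphic is transformed to match, frame by frame.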

2008 Daim: Tagged in Motion

The graffiti artist Daim uses fiducial tracking symbols and a monocular head-mounted display (he can see the real world with one eye – the other eye sees the CGI images) to ‘paint graffiti’ in mid-air…

FLARToolkit demo

FLARToolkit (the Flash ARToolKit)

This is a set of open-source Augmented Reality tools for constructing ARs with fiducial markers. Remember that the guy in the video cannot see the 3D model in his hand!!!

motion graphics

Motion Graphics describes the animation and amalgamation of type, graphics, images, video etc. in short presentations. This kind of work is particularly associated with Adobe Flash, but can be created in After Effects, DHTML, or even in Keynote or (dare I say it) PowerPoint. As a form it was prefigured by the work of creative film-makers and designers as early as the 1950s, specifically Charles and Ray Eames’ influential A Communications Primer (1953) and Pablo Ferro’s seminal trailer for Dr Strangelove (Stanley Kubrick, 1963). The release of Flash in 1996, with its scaleable vector graphics, meant that motion graphics could be designed and scaled to cinematic proportions from the desktop. A recent exemplar of this form is Melih Bilgil’s graduate motion graphic on the History of the Internet.

Charles and Ray Eames: A Communications Primer 1953

The Eames husband-and-wife team was one of the foremost post-war design groups, and practised architecture, interior and furniture design, graphics and film-making. A Communications Primer is remarkable in that it was produced only five years or so after the publication of Claude Shannon’s A Mathematical Theory of Communication (1948) and Norbert Wiener’s Cybernetics: or Control and Communication in the Animal and the Machine (1948) – both quite mathematical theories of communication. Here they explain the basic principles for the layman using graphics, animation, models, diagrams, sound and live-action, all linked together with an authoritative voice-over…

Pablo Ferro: Trailer for Stanley Kubrick: Dr Strangelove (1963)

Ferro’s brilliant trailer for Dr Strangelove 

For 1963 this is incredibly fast – more attuned to 21st-century cutting speeds and dynamic pace than to the early 1960s – but then the entire movie was revolutionary, from the black comedy of Terry Southern, to the cinema-verite of Bat Guano’s attack on Ripper’s airfield base, to the brilliant sets of Ken Adam (the Pentagon War Room and the B-52 cockpit and plane interior) – and of course the cool Jules Feiffer-inspired titles and poster graphics from Ferro…

2010 Mike Davey – working model of a Turing Machine

Information motion-graphics can take many forms. Mike Davey’s 2010 video of his working model of the theoretical computing machine invented by the mathematician Alan Turing is an example. Turing’s abstract model, described in a 1936 paper, had never been ‘brought to life’ before. Turing called it a universal computing machine, and it became known as the Universal Turing Machine: it can simulate any other calculating machine, and is the model for the subsequent development of the digital computer – the basis of all computers up to the present day. This working model is delightful in its precision and engineering – and in the dramatic insight it gives us into this basic theory.
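The machine Davey built in hardware can be sketched in software in a dozen lines: a table of rules mapping (state, symbol) to (symbol to write, head move, next state), applied to an unbounded tape. The example program here – inverting a binary string – is my own trivial illustration, not Davey’s program:

```python
# A minimal Turing machine simulator. The program is a table mapping
# (state, symbol) -> (symbol to write, head move, next state).
# The example program inverts a binary string, then halts.

def run(program, tape, state="start", blank=" "):
    cells = dict(enumerate(tape))   # a sparse, unbounded tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = program[(state, symbol)]
        cells[head] = write                      # write the new symbol
        head += {"R": 1, "L": -1}[move]          # move the head
    return "".join(cells[i] for i in sorted(cells)).strip()

invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),   # blank cell: end of input, stop
}
print(run(invert, "1011"))   # -> 0100
```

Change the rule table and the same `run` loop computes something else entirely – which is exactly the ‘universal’ point of Turing’s 1936 idea.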

video projection mapping

Projection mapping began sweeping across the world as a spectacle-based new media art-form from about 2003, when tools like the ARToolKit were released into the public domain. These tools provided the means to integrate (composite) CG imagery with live-action video to create projected ‘augmented realities’. Projection mapping is, in effect, an augmented reality projected onto a building.

Russian Architecture interactive projection mapping

Facade Mapping

LG Optimus Hyper Facade in Berlin

Fifa Sky Arena Frankfurt 2006

 Madmapper : Projection mapping tools
Madmapper – projected images start by conforming the projection to the target surface.

VVVV – an open-source AR and projection-mapping toolkit

VVVV is an open-source graphical programming toolkit that can be used to create realtime (‘runtime’) multimedia presentations linking music and audio with animation, video, CGI, graphics etc. VVVV can also take input from various external devices – cameras, position sensors, motion-capture devices (etc.) – to modulate the runtime code. This makes it a fabulous tool for planning and engineering realtime events such as VJing, theatrical stage design and stage special effects. It’s like a concept-map with code attached…

Semantic Search and Data Visualisation

What is semantic search? It is a search mechanism that uses semantics – the science of meaning in language – to deliver better search results. What this means is that semantic search engines will be able to ‘understand’ and use the context of the search-term – to filter out ambiguities and double meanings – and even, perhaps, understand more of the purpose the user has in initiating the search: what the user really wants to know. Tim Berners-Lee (the inventor of the WWW) has said that support for semantic search will form a key part of Web 3.0 – the next version of the WWW – so it’s important that you understand what semantics are, and how they are likely to affect aspects of web design and data visualisation.

Semantics is the study of meaning. To improve the WWW, Berners-Lee proposes that we build in better data and file structures – better ways of describing the information we are uploading. The principal way of doing this is adding metadata to the uploaded information. Metadata is a list of descriptive words that help define the context (the meaning) of the upload. Another aspect of a semantic search is the semantic parsing, or interpretation, of a search query to discover more accurately what the user really wants to know.
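A toy sketch makes the point about metadata and disambiguation. All the data here is invented: the word “jaguar” matches three pages, but the metadata tags attached to each page let the search use the user’s context to filter out the wrong senses:

```python
# A toy illustration of metadata-assisted ("semantic") search: the same
# keyword is disambiguated by the tags attached to each item. All the
# page records and tags here are invented for the example.
pages = [
    {"title": "Jaguar XK120 restoration", "tags": {"car", "engineering"}},
    {"title": "Jaguar habitats in Brazil", "tags": {"animal", "ecology"}},
    {"title": "Jaguar dealership listings", "tags": {"car", "sales"}},
]

def semantic_search(query, context_tags, pages):
    """Keep pages matching the keyword AND overlapping the user's context."""
    return [
        p["title"]
        for p in pages
        if query.lower() in p["title"].lower() and p["tags"] & context_tags
    ]

print(semantic_search("jaguar", {"animal"}, pages))
# -> ['Jaguar habitats in Brazil']
```

A plain keyword search would have returned all three pages; the metadata is what lets the engine act as if it ‘understood’ the query.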

According to the W3C:

“The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.”


The term was coined by Tim Berners-Lee, the inventor of the World Wide Web and director of the World Wide Web Consortium (“W3C“), which oversees the development of proposed Semantic Web standards. He defines the Semantic Web as “a web of data that can be processed directly and indirectly by machines.”


Recommendation engines are a popular early exploitation of the powers of semantic searches – the kind of searches that utilise meta-data and other descriptive tags. This kind of data-visualisation was pioneered by Plumb Design in the mid 1990s, in their Visual Thesaurus

A thesaurus is a kind of dictionary that lists together all the words with similar meanings – writers, designers and other problem-solvers use a thesaurus a lot to generate new ways of thinking about a topic. It was invented by Peter Mark Roget in 1852 and has been a valuable ‘tool for thinking’ ever since.

The Visual Thesaurus is a realtime data-visualisation, illustrating synonyms in a spider-diagram. These data-visualisations can be used in various ways – as recommendation engines, in the sense that if you like Beyoncé you might also like Rihanna, because their music sounds similar, their lyrics deal with the same subjects, or because they are also interesting singer-songwriters… And they are fun just for browsing…
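The “if you like X you might also like Y” logic can be sketched with tag-overlap similarity (the Jaccard index: shared tags divided by all tags). The artists and tags below are invented for illustration, not drawn from any real recommendation service:

```python
# A hedged sketch of a tag-based recommendation engine: each artist is
# described by metadata tags, and similarity is the Jaccard index of the
# two tag sets. Artists and tags are invented for the example.
artists = {
    "Beyonce": {"pop", "r&b", "singer-songwriter", "dance"},
    "Rihanna": {"pop", "r&b", "dance"},
    "Slayer":  {"metal", "thrash"},
}

def similarity(a, b):
    """Jaccard index: shared tags / all tags, between 0.0 and 1.0."""
    return len(a & b) / len(a | b)

def recommend(liked, artists):
    """The most similar other artist to the one the user likes."""
    others = [(name, similarity(artists[liked], tags))
              for name, tags in artists.items() if name != liked]
    return max(others, key=lambda pair: pair[1])[0]

print(recommend("Beyonce", artists))   # -> Rihanna
```

Real engines add listening data, collaborative filtering and inference over much richer metadata, but the shared-descriptor idea is the same one the Visual Thesaurus visualises for words.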

Live Plasma, by Frédéric Vavrille, unfortunately no longer exists in this graphically rich form… but when you put good design together with clever coding, online databases and a good inference engine, the result can be both functional and beautiful.

Liz Turner: Iconaut

Liz Turner: Iconaut – a prototype interface for semantic searches (2008)

This is a prototype semantic-web visualisation tool. Liz has provided several ways to navigate through a large archive of news and magazine articles (that’s the stack of rectangles centre-right), icons (representative pictures), scrolling panels, labels, and an isometric grid to integrate her design… It is prototypes like this that point towards the kind of semantic-search engines we might enjoy in the future…

timelinks timelines

Timelinks automatically collects your photos and arranges them along a time-line (according to the metadata date-tag in each digital photo). They can be displayed and browsed chronologically, and where other people appear in your photos, their parallel timelines can be accessed and browsed too…
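The core of such a timeline builder is simply a sort on the date metadata, plus a filter for each person who appears. A minimal sketch, with invented photo records standing in for real files and their date-tags:

```python
# A minimal sketch of a timeline builder: sort photos by the date in
# their metadata, and extract each person's "parallel timeline".
# The photo records below are invented for the example.
from datetime import date

photos = [
    {"file": "img_03.jpg", "taken": date(2011, 7, 2),  "people": {"Ana"}},
    {"file": "img_01.jpg", "taken": date(2009, 5, 14), "people": {"Ana", "Bob"}},
    {"file": "img_02.jpg", "taken": date(2010, 1, 30), "people": {"Bob"}},
]

def timeline(photos):
    """All photos in chronological order, by metadata date-tag."""
    return sorted(photos, key=lambda p: p["taken"])

def appearances(person, photos):
    """That person's parallel timeline: the photos they appear in, in order."""
    return [p["file"] for p in timeline(photos) if person in p["people"]]

print([p["file"] for p in timeline(photos)])
print(appearances("Bob", photos))   # -> ['img_01.jpg', 'img_02.jpg']
```

In a real system the `taken` field would be read from each file’s embedded metadata (e.g. its EXIF date-tag) rather than typed in by hand.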

Pre Digital – Some Films that Showed the Way

1977 Charles and Ray Eames: Powers of Ten

Made in 1977, the Eames’ short film Powers of Ten is a seminal (influential) breakthrough in motion information graphics. It illustrates our world through a continuous vertical tracking shot, starting with a closeup of the hand of a man sunbathing in a Chicago park and tracking out into the universe in a series of stages, each a power of ten further away. A power of ten is 10 multiplied by itself some number of times – 10, 100, 1,000, 10,000, 100,000, etc. – shortened to 10² = 10×10 = 100, 10³ = 10×10×10 = 1,000, and so on. The superscript number is the power of ten.
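The ladder of scales the film climbs can be checked in a couple of lines:

```python
# The powers-of-ten ladder, worked as a quick check: each step multiplies
# the previous distance by ten.
for n in range(1, 6):
    print(f"10^{n} = {10 ** n:,}")
# 10^1 = 10
# 10^2 = 100
# ...
# 10^5 = 100,000
```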

It’s hard to fully comprehend that this film was made 15 years before Photoshop was available. Powers of Ten is an exemplar of data-visualisation. It is grand in concept, though the idea was first explored by the Dutch artist Kees Boeke in the 1950s, in his Cosmic View: The Universe in 40 Jumps – illustrated with Boeke’s black-and-white pen drawings.

Kees Boeke: Cosmic View 1957

Powers of Ten is a seamless zoom, and as the ‘camera’ accelerates away from Earth it soon exceeds the speed of light, until you realise you are looking at our Milky Way galaxy from intergalactic space – millions of light-years away…

Koyaanisqatsi trailer

Koyaanisqatsi is a Hopi Indian word meaning ‘life out of balance’ and Godfrey Reggio’s brilliant scriptless, actor-less film is an artistic documentary (concept-documentary?) on this theme, showing the effects of industrialisation, urbanism, over-population, and mechanisation on our human life. With a brilliant musical score by Philip Glass, using his hypnotic iterative rhythms, this 87 minute film has been vastly influential, and is a must-see for digital media students.

Zbigniew Rybczynski: Tango 1981

Shot on film, this experiment by Zbig – a Polish-American film-maker – clearly prefigures the use of layers in digital media, and is a remarkable re-visualisation of the goings-on over time in a typical mid-European apartment… Over 8 minutes, 36 different characters perform a simple task in the room – all at the same time! Breathtaking in concept, staggering in its planning and execution, this film is a breakthrough achievement – a decade before digital non-linear editing!

Alexander Sokurov: Russian Ark 2002 – Russian Ark Trailer

Russian Ark is a digital film – the first feature shot on uncompressed HD. The film-makers only had access to the Hermitage Art Museum for one day, so the shoot was planned as a single-continuous shot, orchestrated by the director and shot by cameraman Tilman Buttner using a steadicam mount. Over 2800 actors and extras in full costume, three orchestras, the entire 33 rooms of the Hermitage Museum, modern Russian history, one continuous uninterrupted 96-minute shot!

Performance Video

Improv Everywhere: Frozen Grand Central

The capture of Performance Art on video is another interesting and developing strand of new media art. Since the DADA anti-art movement of the early 20th century, and the ‘Happenings’ of the 1960s, Performance Art has become the vehicle for scripted, choreographed, immersive, and often interactive, time-based multimedia art. Improv Everywhere is a New York group that orchestrates large-scale performances and records them on video.

Improv Everywhere: Human Mirror

animation and film

François Alaux, Hervé de Crécy and Ludovic Houplain: Logorama 2010 

Logorama trailer

For a long time in the 1970s I experimented with the idea of illustrations and comics made up of logos and brands, but I never got anywhere near the brilliance of this award-winning film from 2010.

Lotte Reiniger: The Adventures of Prince Achmed 1926

Lotte Reiniger developed her cut-out silhouette animation technique in 1919 and, in partnership with her cinematographer and producer husband Carl Koch, from 1923 made one of the first feature-length animations: The Adventures of Prince Achmed, which still stands as a landmark in animation history. While clearly a development of 18th-century silhouette portraiture and the 19th-century cardboard children’s toy-theatre shows, it is the astonishing handicraft of her animation technique, and her visual storytelling, that make her work very special.

Why does this inspire?

Innovation in any area of media can be inspirational, and the artistic flowering of technique in Reiniger’s work is exceptional. Marrying an 18th-century art form with the latest 20th-century technology of frame-by-frame animation, Reiniger infused her characters with a life that some viewers have described as ‘more real than live action’. Reiniger worked with Koch on Achmed between 1923 and 1926. Disney’s first feature animation, Snow White and the Seven Dwarfs, appeared in 1937, so Reiniger’s Achmed may be the first feature-length animated movie. The stills below give some idea of the beauty of her technique, but none of the sense of personality that her characters have when animated.

Further Reading

The best introduction to New Media and its impact on traditional forms is Scott McCloud’s Reinventing Comics (2000)

The best compendium and browser for Marshall McLuhan is On McLuhan, by Paul Benedetti & Nancy DeHart (1997)



