In the age of the integrated spectacle (cf. Agamben), few of the static two-dimensional images that are presented to us in the course of everyday life — magazine ads, billboards, posters, direct mailings, and the like — are in fact truly depthless artefacts. Rather, they are the result of careful processes in which part-objects have been layered on top of one another, grouped together, and transformed in various ways before being flattened out to the final "static" image.
Generally speaking, these part-objects may be either textual elements or other image elements — that is, the fundamental building blocks of Flusser's line and surface thinking. The graphic design software that facilitates the creation of this final flattened image retains within the file all of the meta-information about each of these part-objects in terms of position, understood as the x-y coordinates of the grid plane and the z-index of the layer. In other words, the file contains the relations that existed between each part-object before flattening took place.
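The layer metadata described above can be sketched in code. This is a minimal, hypothetical illustration — the `Layer` structure and `flatten` function are invented for this example and do not correspond to any actual design-file format — but it makes explicit the x-y-z relations that flattening discards:

```python
# Hypothetical sketch of the part-object metadata a design file retains.
# Layer and flatten() are illustrative inventions, not any real format.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str  # e.g. "headline text" or "background photo"
    kind: str  # "text" or "image" -- Flusser's two building blocks
    x: int     # horizontal position on the grid plane
    y: int     # vertical position on the grid plane
    z: int     # z-index: stacking order before flattening

def flatten(layers):
    """Return part-objects in paint order (lowest z-index first).

    Flattening renders them bottom-up into a single static image;
    the z relations made explicit here vanish from the final output.
    """
    return sorted(layers, key=lambda layer: layer.z)

ad = [
    Layer("background photo", "image", 0, 0, z=0),
    Layer("headline text", "text", 40, 12, z=2),
    Layer("product shot", "image", 80, 60, z=1),
]

paint_order = [layer.name for layer in flatten(ad)]
```

Once the layers are rendered in this paint order and merged, only the composite pixels remain; the designer's "unlayering" reconstructs something like the list above from the image alone.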
But a skilled and experienced designer doesn't need the original file to understand the relations that created the final image. Simply by assessing the visual outcome in the context of embodied memory, one is able to unlayer and reconstitute that which has been usurped of its depth in its rendering-spectacular.
The complexity of the spectacular apparatus increases as we move from the processed image into the realm of cinema and television and literally introduce motion to the process. Chion identifies new building blocks that are added to the image and text within the two-dimensional frame, most importantly the audio elements of speech and field sound captured during recording, and the music and sound effects added in post-production. To the moving image we also add the graphic overlay, a visual element that may be static or animated and which is visually distinct from the images captured by the camera during filming. In television specifically, these overlays are increasingly connected to external (relational) databases, as with statistics during a sports broadcast or the latest quotes on a news channel's stock market ticker.
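The database-bound overlay can likewise be sketched. The function names and the ticker format below are hypothetical — a stand-in for whatever query a broadcast graphics system actually issues against its data source — but they show the basic relation: the overlay is composed anew from external data each time it is rendered, rather than being fixed in the frame:

```python
# Hypothetical sketch of a broadcast overlay bound to external data.
# fetch_quote and the ticker format are invented for illustration;
# a real system would query a relational database or market feed.
def fetch_quote(symbol, source):
    """Stand-in for a lookup against an external data source."""
    return source[symbol]

def render_ticker(symbols, source):
    """Compose the text of a stock-ticker overlay from current data."""
    return " | ".join(f"{s} {fetch_quote(s, source):.2f}" for s in symbols)

# A toy in-memory "feed"; in practice this would be live and changing.
feed = {"ABC": 101.5, "XYZ": 42.25}
ticker = render_ticker(["ABC", "XYZ"], feed)
```

The point of the sketch is that the overlay layer, unlike the filmed image beneath it, has no fixed content of its own: its relations extend outside the frame entirely.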
Nonetheless, the experienced director or video editor may similarly apprehend, after the fact, the layers and corresponding relations that produced the final cinematic outcome. In doing so, we may already understand that the layer is not a two-dimensional phenomenon, as Chion's inclusion of audio and acoustic space illustrates.
Now consider those works that find smooth passage through categorical barriers identified variously as interventions, conceptual pieces, participation-oriented performances or community-based art projects. Three such examples, different though interrelated, might include Global Village Basketball, HomeShop, and wii would like to play // we don't have tickets. While these works were "framed" with more or less well-defined spatiotemporal parameters, they are most definitely of the realm of the volumetric and hence introduce new complexities to the apparatus.
Of course, with such events there is no "file" to which we have recourse for determining the layers and relations between the part-subjects that comprised their contextual fabric. As Massumi points out, they are ontogenetic. But, as with the processed static and moving video images described earlier, is it possible to unlayer the volumetric interactions of the intervention after the fact? Can we assess the audiovisual outcomes in the context of embodied memory and perhaps in the process identify new building blocks for the becoming-social each work facilitated, such as gesture, tango, translation, risk and exchange?
(a work-in-process between elaine w. ho and sean smith towards "unlayering the relational: microaesthetics and micropolitics," a text for the mediamodes art and technology conference in new york)