Movement 504

The work "Movement 504" reflects on the performative conditioning of interfaces (and their users) and explores this shared condition through a new standard. While the work is a static physical object, it should be considered both an installation and a performance. The installation is a spatial, three-dimensional reinterpretation of Apple's original "slide to unlock" user interface element. Viewed from above, it resembles a large-scale replica of the original graphical user interface, magnified at a scale of 78:1 while preserving all original design properties, including border radius, typography, and precise visual specifications. Viewed from the side, the artifact reveals its three-dimensional nature: it becomes a literal slide measuring roughly 4 × 1.2 × 1.5 meters, made of inflatable PVC. The slide features a compact, U-shaped chute that closely follows the user's body as they descend, beginning with a slightly elevated entry, transitioning into a gentle slope, and ending in a smooth, flat landing. Inside the slide, an inflatable pillow represents the slider handle that performs the unlock gesture as it moves along the chute; correspondingly, the user is meant to sit on it and slide down. The material of the installation adopts the early glossy style of Apple and Web 2.0: the pillow shows an arrow graphic, and the chute is imprinted with the text "slide to unlock" in Helvetica, Apple's typeface of choice for iPhone OS 1.

The performance part of the installation is, as both the title and the imprint command, the use of the interface as a literal slide. In Apple's patent, the gesture of horizontally moving one graphic over another to gain access to a device is described as "Movement 504". Within this installation, the user (viewer, performer) performs the labor of conditioning with their entire body, not just a limb. While the viewer uses the slide, the installation asks about the repetitive nature of contemporary interface rituals and about the affordance newly forged by the transformation the interface has undergone.

Slide to Unlock X ComfyUI X Photoshop Content-Aware Fill

Conditioning of Interfaces (and Their Users)

With the patent US007657849B2 [1], filed on December 23, 2005, and its infamous demonstration in 2007 [2], Apple introduced a new condition. A condition in the sense of yet another user interface element that would standardize an apparently universal, one-size-fits-all template, but also a condition that would become a global performance with ritual-like tendencies, performed multiple billion times a day.

Performed and conditioned to such an extent that now, 16 years later, sliding a finger over a glass surface around 80 times a day seems like a very natural thing to do. But what is natural? Every seemingly natural thing is taught and trained to be so, and with that, technology's agency fades: phones only become phones because we use them as such. Users construct technology by reading and configuring artifacts. [3] Through the interface, humans condition themselves, through the intertwining of creation and usage. The usage of technology is a highly rationalized performance, and the inevitable striving for efficiency is inscribed in it because of its labor-intensive nature -- labor which is performed by bodies within a specific space.

To Be Engaged Means to Be Delusional

Engagement in human-computer interaction is described as an experiential component, particularly prominent in direct manipulation interfaces, which involves a feeling of first-personness. For game designer Brenda Laurel, this feeling arises from the interplay between the user and the interface, where the interface is "literally co-created by its human user every time it is used". [4] Engagement is also understood as "acting within a representation". It is only possible when the user can rely on the system to maintain a consistent representational context. If the user becomes aware of the system as a system -- that is, if the illusion breaks down -- it "explodes the mimetic illusion," leading to a loss of first-personness and thus disengagement. This mirrors the "willing suspension of disbelief" in traditional theater, where the audience accepts the play's illusion as real within its context. [5] The concept also implies a playful, non-rational element, stemming from the cyclical relationship between user actions and machine responses. Merleau-Ponty describes this as a "communion" with the world, a total involvement where interactions are continuous and unreflective, akin to jazz improvisation rather than a structured game of chess. [6]

We are talking about a computer that should be invisible, an interface at its best when imperceptible. Interacting with the computer without interaction, a complete state of 'Dasein' (as Heideggerian concepts just cannot leave the technology discourse). But as we live in the computer's world, striving for eternal efficiency, how would this fully automated luxury engagement be quantifiable? Or rather, where does it take place when it is an illusion?

Experiment setup diagram showcasing mouse button, state and verbal accounts. An interaction with FSA #1

Welcome to "Square World," an experiment by Dag Svanæs in which users press buttons while describing what they are doing. In "Understanding Interactivity: Steps to a Phenomenology of Human-Computer Interaction," Svanæs designed the experiment to empirically investigate how interactivity is experienced. Square World consists of 38 abstract, non-figurative interactive examples made of black and white squares, formalizable as Finite State Automata. Subjects were tasked with exploring these examples by clicking on the squares and observing their immediate responses, while verbally describing what they were doing and what they perceived (a "think-aloud protocol"). The aim was to uncover the metaphors and mental models that subjects spontaneously used to understand these interactive artifacts. Through this experiment, Svanæs traces the locus of agency in interactions with on-screen computer buttons, showing how users would eventually include the computer and its equipment into their perception of themselves. [7]
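A Square World example of this kind can be expressed as a tiny Finite State Automaton. The sketch below is a hypothetical two-square example in the spirit of the experiment, not one of Svanæs's actual 38: clicking square A toggles square B between white and black, while clicking B does nothing.

```python
# Hypothetical "Square World"-style example modeled as a Finite State
# Automaton: clicking square A toggles square B between white and black.
# Illustrative sketch only; not one of Svanæs's actual 38 examples.

class SquareFSA:
    def __init__(self):
        # The visible state of square B
        self.state = "white"
        # Transition table: (state, event) -> next state
        self.transitions = {
            ("white", "click_A"): "black",
            ("black", "click_A"): "white",
        }

    def click(self, square):
        event = f"click_{square}"
        # Undefined (state, event) pairs leave the state unchanged
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state


fsa = SquareFSA()
print(fsa.click("A"))  # black
print(fsa.click("A"))  # white
print(fsa.click("B"))  # white (clicking B has no effect)
```

Even an automaton this small leaves room for the shifting attributions Svanæs recorded: "it turned black," "the computer turned it black," or "I turned it black."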

While interacting, user experience involves a shifting "locus of agency," where the user might initially see events as "just happening" (passive voice), then as caused by the computer or the interface element, and finally as actions directly performed by themselves ("I turn it on"). This progression reflects increasing "first-personness" and "engagement". As users delve deeper into an interactive system, their "body space" can extend into the interface, causing the experienced agency to shift from the computer as a whole to the specific interactive elements. This suggests that the user feels more direct control over the objects, rather than perceiving actions as merely happening or being controlled by the computer. [8]

Experiments with “Square World” showed that subjects quickly became engaged with the interactive examples. The level of engagement varied with the metaphors subjects used to describe the interaction.

Ultimately, while interactivity offers a broad “potential for action,” engagement is tied to the perceived potential for action. This highlights that engagement is not just about what a system can do, but about how directly and intuitively a user experiences their ability to act within it. Within Human-Computer Interaction (HCI), Interaction Design, or Design in general, this potential for action is often described as affordance, and it carries many of the characteristics described here -- not only about the users themselves, but especially about their interfaces' designers. With affordance in mind, to design means to design the invisible. To design an interaction means to distract the user from the fact that there is a computer. To interact with computers means to forget about the computer. To engage with computers means to have no interface, no interaction.

To be engaged means
to be delusional.

TikTok Reverse Swipe, 2024

Affordance of a Gesture

The concept of “affordance” was first introduced by ecological psychologist James J. Gibson. [9] The term was later adapted and reinterpreted for the field of Human-Computer Interaction by Donald Norman. In HCI, affordance is considered a fundamental concept and a basic design principle. Norman's initial interpretation, which gained widespread influence, centered on the idea of manipulating or designing the environment to make utility more easily perceivable. This differed from Gibson's original concept, where an affordance is independent of an actor's ability to perceive it. Donald Norman later admitted to misinterpreting the term, correcting it to “perceived affordances”. [10] He clarified that merely suggesting an action through graphical depiction is not an affordance, whether real or perceived; it is a symbolic communication that relies on user-understood conventions. Despite Norman's correction, the term “affordance” continues to be widely misused in User Experience (UX) Design, often as a synonym for various interface elements like menus or buttons. This terminological mess is considered part of a broader confusion within design disciplines regarding terms like “transparency” and “experience”. This is where net artist, web archivist, and Digital Folklore researcher Olia Lialina intervenes. In her symposium “Once Again, The Doorknob,” she suggests to finally

disconnect the term affordance from Norman’s interpretation, to disconnect affordance from experience, from the ability to perceive (as in Gibson), and from experience design needs; to see affordances as options for possibilities of action, and to insist on the General Purpose Computer’s affordance to become anything if you are given the option to program it; to perceive opportunities and risks of a world that is not restrained to mechanical age laws and artifacts. [11]

This perspective contrasts with the idea that interfaces should provide "strong clues" (like a doorknob for pushing), clues which limit what an object can do, whereas in a digital world almost anything could be accomplished with an on-screen button:

A knob can open a door because it is connected to a latch. However in a digital world, an object does what it does because a developer imbued it with the power to do something [...] On a computer screen though, we can see a raised three dimensional rectangle that clearly wants to be pushed like a button, but this doesn't necessarily mean that it should be pushed. It could literally do almost anything. [12]

Affordance also serves as a cornerstone of the User-Centered Design (UCD) paradigm, coined and conceptualized by Donald Norman in the mid-1980s, and of the User Experience (UX) movement, which he also initiated. These concepts became prominent around 1993, when Norman became head of research at Apple. Olia Lialina notes that this hype around affordances and UX developed in parallel with the disappearance of the “Undo” principle in interface design. Even though Norman had left Apple well before the first iPhone was introduced, it might almost be fair to assume that one of the first proper User Experience thoughts went into designing the first mainstream swipe. Since this moment, and since Norman's novel job title of “User Experience Architect”, users haven't stopped swiping, designers haven't stopped creating experiences, and the personal computing tradition would again go on to equip humans with a new embodied gesture.

Steve Jobs's infamous words "To unlock the phone, I just take my finger and slide it across," the directional arrow on the non-clickable button, and the literal prompting of users to "slide to unlock" would mark a historical point of ingenious affordance design by helping users understand that:

To unlock the phone, I just take my finger and slide it across. All right? Want to see that again? We wanted something that you couldn't do by accident in your pocket and just slide it across. Boom. And this is the home screen of iPhone, right here.

Steve Jobs, Slide to Unlock, MacWorld, 2007
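The "couldn't do it by accident" quality Jobs describes can be read as a simple threshold condition: the handle must travel essentially the full length of the track before the finger lifts, otherwise it snaps back and the phone stays locked. The following is a minimal sketch of that logic, with invented track dimensions; it illustrates the idea of the patent's "Movement 504", not Apple's actual implementation.

```python
# Hedged sketch of the slide-to-unlock condition: the handle must be
# dragged to (nearly) the end of the track; releasing anywhere short
# of the threshold snaps it back. Track width and threshold values are
# invented for illustration -- this is not Apple's implementation.

TRACK_WIDTH = 300        # width of the channel in points (assumed)
UNLOCK_THRESHOLD = 0.97  # fraction of the track that must be traversed

def slide_to_unlock(drag_positions):
    """drag_positions: x-coordinates sampled while the finger is down.
    Returns True (unlocked) only if the final sampled position crosses
    the threshold; otherwise the handle snaps back and stays locked."""
    if not drag_positions:
        return False
    final_x = drag_positions[-1]
    return final_x / TRACK_WIDTH >= UNLOCK_THRESHOLD

print(slide_to_unlock([10, 80, 150, 230, 295]))  # True: full slide
print(slide_to_unlock([10, 40, 90]))             # False: released early
```

The threshold is what makes the gesture pocket-proof: random brushes against the glass rarely produce a long, continuous horizontal traversal.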

The concept of “naturalness” in interfaces is not an inherent property, but a vague and constructed quality. User-centered design aims to create systems whose physical states behave in an “intuitive” or “natural” way, aligning with a user's mental intentions. What felt “natural” with traditional interfaces in the 1980s, such as pushing a scrollbar down to make content move down, evolved as new technologies emerged. For example, tangible user interfaces in the early 2000s were perceived as “more (or really) natural”. This progression suggests that interfaces like the touchscreen (and by extension, “natural scrolling” or “slide to unlock”) are products of a “naturalization” process that creates the very human for which they feel natural, rather than simply replicating a pre-existing “natural” way of interacting. The prevailing cognitivist view in Human-Computer Interaction (HCI) discourse tends to see interaction as a one-way street, where a designed model is conveyed to a user, who then adapts to it. [13] This perception of “naturalness” is deeply intertwined with how technology is designed and integrated into our lives. Apple, for instance, introduced the iPad in 2012 with the philosophy that “technology is at its very best when it is invisible,” [14] allowing users to focus on their actions rather than the device itself. Denying this process would hide the political and ideological values of the system that creates it.

Mentioning of naturalness in macOS system settings

Shared Condition Through a New Standard

Engagement pervades the whole stack as a shared standard and manifests itself in various ways. As this proposal doesn't distinguish between automation and surveillance, since both systems live in correlation and are inseparable in their ontological existence, it sees the gesture of "slide to unlock" as a gatekeeping element to alienation through user experience. It occupies the space between fauxtomation [15] and the prevailing information asymmetry on the internet.

While its surface remains a shiny interface and the performance is happily executed innumerable times, the system remains intact through the continual displacement of precariously positioned humans and their attention, whether the labor is performed to create the interface, to use the interface, or to get rid of it. [16] Within this installation, the viewer (user, performer) performs the labor of conditioning and the making of naturalness, namely "Movement 504", with their entire body and not just a limb.

Computing Has Always Been Personal

As "slide to unlock" and the creation of User Experience have taught us how to forget the computer, what does it mean to design the invisible? What does it mean to design something so generic and universal that it does not appear? It would appear neither positive nor negative, neither ugly nor beautiful, neither cold nor warm, neither kiki nor bouba. What does it mean for the receiving end to be caught within this never-ending hyper-average? What does the user of these systems look like, especially in times when there is the promise of hyper-personalization through real-time bidding, automated A/B testing, and GenAI ads? What manifests this generic-versus-personal contradiction?

At the dawn of the personal computer, the user was unequivocally the "center of attention," developing conceptually prior to the computer itself; one might even argue that the user existed before the computer. Looking closely at one of the earliest texts about computing, "As We May Think" by Vannevar Bush, it becomes apparent that the author was more concerned with the human sitting in front of the device than with the device itself. [17] The same holds for the invention of the computer mouse and hypertext and their contribution to the development of personal computing. Their inventor Douglas Engelbart's vision extended far beyond simply building the personal computer; his core interest lay in "augmenting human intellect" [18] to help people manage increasing complexity and urgency more effectively and solve the problems facing humanity. Properly in line with Human-Computer Interaction's heritage, the early inventions of computer equipment were as much about designing the user as about designing the computer, or, as sociologist Thierry Bardini states in his book about Engelbart's career:

Engelbart wasn't interested in just building the personal computer. He was interested in building the person who could use the computer to manage increasing complexity efficiently. [19]

To be able to design interfaces, users had to be designed first. Historically, early software development, but also industrial design, relied on abstracted personas to make assumptions about users and to defend design decisions. In design disciplines outside Human-Computer Interaction, Dreyfuss's Joe and Josephine or Le Corbusier's Modulor come to mind. These hypothetical, generalized personas were based on scientific measurements of people and their movements (as in Frederick Taylor's scientific management) or on sociological analyses that formulated people into interchangeable bureaucratic components (Max Weber).

Ford Motor Co., Ford and Taylor in the 1920s
Updated Joe and Josephine by Henry Dreyfuss Associates

Unlike the standardized users of industrial production, stack economies, particularly cloud platforms, move towards a 1:1 ratio between users and possible personas. Each user's specific profile (e.g., search history, purchases, location) allows for hyper-segmentation of services and content, making the User's profile their ultimate persona.

The Model Human Processor (MHP): Developed by Card, Moran, and Newell in 1983, the MHP was an early attempt to create a comprehensive theory of human-computer interaction. It conceptualizes the user's “cognitive architecture” as comprising three processors (perceptual, cognitive, motor), four memories, and a set of parameters and principles of operation. Interaction is described as “symbolic information processing,” borrowing terminology from computer science, systems theory, and communication theory. [20]
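The MHP makes such descriptions calculable: a simple reaction (see a stimulus, press a button) is modeled as one cycle of each processor, and the estimate is just the sum of the cycle times. The sketch below uses the nominal values given by Card, Moran, and Newell; in the original each parameter comes with a wide plausible range, so treat the number as an order-of-magnitude illustration.

```python
# Back-of-the-envelope estimate in the spirit of the Model Human
# Processor: a simple reaction (perceive stimulus -> decide -> act) is
# one cycle of each processor. Cycle times below are the nominal
# values from Card, Moran, and Newell; the original model gives each
# as a range, so this is an illustration, not a precise prediction.

PERCEPTUAL_CYCLE_MS = 100  # tau_p, nominal perceptual processor cycle
COGNITIVE_CYCLE_MS = 70    # tau_c, nominal cognitive processor cycle
MOTOR_CYCLE_MS = 70        # tau_m, nominal motor processor cycle

def simple_reaction_time_ms():
    # One perceptual + one cognitive + one motor cycle
    return PERCEPTUAL_CYCLE_MS + COGNITIVE_CYCLE_MS + MOTOR_CYCLE_MS

print(simple_reaction_time_ms())  # 240
```

The point, for this essay, is less the number than the gesture: the user is literally rendered as a processor architecture with clock speeds.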

Netflix's components of the User Model (Potential Signals): The “context” or user model is typically represented as a feature vector provided as input to the machine learning model. This feature vector can incorporate a wide array of signals about the member, including viewing history (the titles a member has previously watched), genre preferences (their inclination towards specific genres, e.g., romantic movies or comedies), and actor preferences (whether they are drawn to movies featuring particular cast members). [21]
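In code, such a user model reduces to a plain numeric vector. The sketch below flattens a few signals of the kind the Netflix post describes into one vector; all field names, encodings, and values are invented for illustration and are not Netflix's actual schema.

```python
# Sketch of a "context" feature vector: user signals flattened into a
# numeric vector that a ranking model consumes. Field names and
# encodings are invented for illustration, not Netflix's schema.

GENRES = ["comedy", "drama", "romance"]

def build_feature_vector(profile):
    vec = []
    vec.append(len(profile["viewing_history"]))          # viewing volume
    for g in GENRES:                                     # genre affinities
        vec.append(profile["genre_affinity"].get(g, 0.0))
    vec.append(1.0 if profile["follows_cast"] else 0.0)  # actor-preference flag
    return vec

profile = {
    "viewing_history": ["title_1", "title_2", "title_3"],
    "genre_affinity": {"comedy": 0.8, "romance": 0.1},
    "follows_cast": True,
}
print(build_feature_vector(profile))  # [3, 0.8, 0.0, 0.1, 1.0]
```

This is the 1:1 persona in its rawest form: the "user" the system addresses is nothing but this vector.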

At the Intersection of User Assumptions

As the idea of invisible computing pervaded design ideologies, interface design became experience design, trying to make users forget that computers and interfaces exist. "With Experience Design there is only you and your emotions to feel, goals to achieve, tasks to complete." [22] Coming back to the introduction of the new iPad in 2012 and to the quote cited earlier, especially its latter part, omitted the first time around, Apple subscribes to Don Norman's future:

We believe that technology is at its very best when it is invisible, when you are conscious only of what you are doing, not the device you are doing it with [...] iPad is the perfect expression of that idea, it's just this magical pane of glass that can become anything you want it to be. It's a more personal experience with technology than people have ever had.

Note the use of experience instead of interface, but also people instead of users. In the end, it is not only the computer that has been declared invisible; the invisible user had to follow soon enough. To be precise, by 2012 the user had already long expired, since a crusade to get rid of it had been declared at a UX conference in 2008 by Norman himself. [23] The argument: we should design for people, not for users. Benjamin Bratton, director of Antikythera, researching the future of planetary computation, coined this trajectory the "death of the user". Bratton goes as far as to explain that this concept signifies the expiration of a specific kind of user, and the displacement of its soft humanism from the conceptual center of design strategy, due to the proliferation and predominance of both non-human and non-individuated actors within the expanded field of ubiquitous computation. [24]

Led by scientific management within Taylorism and the fragmentation of work, automation eventually contributed to designing for algorithms. Since Google's PageRank algorithm, websites have been designed for the search engine, not for the users wanting to view the page's content, and now we are drawn into designing for model collapse itself. [25] These endless engagements, without receiving ends on any side, eventually led to today's template culture [26] at the intersection of user assumptions.

slide to unlock

  1. Apple Inc., Unlocking a device by performing gestures on an unlock image, Google Patents, 2005.

  2. Steve Jobs, Slide to Unlock, MacWorld, 2007.

  3. Lucy Suchman, Located accountabilities in technology production, Scandinavian Journal of Information Systems: Vol. 14, 2002.

  4. Brenda Laurel, Interface as mimesis, User-Centered System Design: New Perspectives on Human-Computer Interaction, 1986, 67-85. Quoted in Scherffig, 71.

  5. Brenda Laurel, Computers as theater, Reading, Mass, Addison-Wesley Pub, 1991, 113.

  6. Maurice Merleau-Ponty, Phenomenology of Perception, Humanities Press, 1962, 320.

  7. Dag Svanæs, Understanding Interactivity: Steps to a Phenomenology of Human-Computer Interaction, PhD diss., Norwegian University of Science and Technology (NTNU), 2000, 143.

  8. Ibid., 158.

  9. James Gibson, The Senses Considered as Perceptual Systems, Allen and Unwin, 1966.

  10. Donald Norman, Affordances and Design, 2005.

  11. Olia Lialina, Once Again, The Doorknob, Rethinking Affordance Akademie Schloss Solitude, 2018.

  12. Alan Cooper, Robert Reimann et al., About Face 3: The Essentials of Interaction Design, John Wiley & Sons Ltd, 2007, 284.

  13. Lasse Scherffig, There Is No Interface (Without a User). A Cybernetic Perspective on Interaction, Interface Critique Vol. 1, 2018.

  14. Apple Inc., Official Apple (New) iPad Trailer, 2012.

  15. Astra Taylor, The Automation Charade, 2018.

  16. Mike Judge, Office Space, 1999.

  17. Vannevar Bush, As We May Think, The Atlantic, 1945.

  18. Douglas Engelbart, Augmenting Human Intellect: A Conceptual Framework, Air Force Office of Scientific Research, 1962.

  19. Thierry Bardini, Bootstrapping: Douglas Engelbart, Coevolution, and the Origins of Personal Computing, Stanford University Press, 2000.

  20. Stuart Card, Thomas Moran et al., The Model Human Processor: An Engineering Model of Human Performance, Handbook of Perception and Human Performance, 1983, 1–35.

  21. Netflix Technology Blog, Artwork Personalization at Netflix, 2017.

  22. Olia Lialina, Turing Complete User, 2012.

  23. Donald Norman, UX Week, 2008.

  24. Benjamin Bratton, The Stack: On Software and Sovereignty (User Layer), The MIT Press, 2016.

  25. Andrej Karpathy, Software Is Changing (Again), Y Combinator, 2025.

  26. Silvio Lorusso, What Design Can't Do — Graphic Design between Automation, Relativism, Élite and Cognitariat, Entreprecariat, 2017.