Arranging Data Into Lists

Arranging Data Into Lists is an exploration of video content with zero views—a statistical visual void in a system predicated on visibility. Here, techniques like domain-specific stopword searches and random prefix sampling are used to uncover the metrified distribution of the so-called attention economy. These videos represent the unobserved, the non-recommended, the unengaged. The emitted light, an ambient-average of unseen pixels, stands in for the attention that never materialized. Through the blending of experimental informatics and digital anthropology, it unfolds a media archaeology of attention—an investigation into how human perception has been rendered computable.
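One detail of the setup can already be sketched here: the “ambient-average of unseen pixels.” Below is a minimal browser-side reading of a playing video's average color, assuming an HTMLVideoElement as source; the downscaled bitmap size and the function name are illustrative, not the project's actual implementation.

```typescript
// Derive the ambient-average color of a frame: the light a zero-view
// video would emit, averaged into a single rgb() value.
function ambientAverage(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D context unavailable");
  canvas.width = 16; // averaging a small bitmap approximates the full frame
  canvas.height = 9;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let r = 0, g = 0, b = 0;
  const n = data.length / 4; // pixels arrive as RGBA quadruples
  for (let i = 0; i < data.length; i += 4) {
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
  }
  return `rgb(${Math.round(r / n)}, ${Math.round(g / n)}, ${Math.round(b / n)})`;
}
```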

Tired of looking at BAD SCREEN Can't wait to get home and look at GOOD SCREEN
Ever since I was a little girl I just knew I wanted to be chilling on the computer a lot

Blue Sky, Red Sunsets and CVS

The condition is to look at the screen. The prevailing attention economy seems to manifest itself in one major materiality: light emitted from very densely packed LEDs. Light is the transferring medium of pixels—pixels which appear to exist to draw our attention. User Interfaces are rendered as interactive pixels, which are metrified and quantified to become measurable as engagement. Contemporary attention seems to be intricately linked with screens.

Screens, whether LCD, OLED, or AMOLED, are the “primary gateway to nearly everything in modern life” [1]; at least nearly everything that is digital. Today, software and screens are so deeply integrated into everyday objects and environments that they often go unnoticed, with computing and its screens becoming ever more pervasive, so ubiquitous that writers call it “everyware”, as screens mediate our perceptions and interactions. The touchscreen, specifically, has emerged as the default screen technology, leading to a notable increase of tactile interfaces in our daily routines. The first electronic touchscreens appeared in public settings like university terminals, sales kiosks, public information services, and museums. This widespread integration has led to a future vision in which almost any surface, including windows, mirrors, kitchen worktops, fridge doors, and car dashboards, could function as an interactive screen. Consequently, children today grow up surrounded by various mobile screen devices from a very young age. The rapid adoption of digital screens, particularly for prolonged periods and at close range, has led to increased concern about their impact on ocular health. [2]

Professional Esports athlete, Are.na Block 26047466
Layers of a color TFT LCD, Liquid-crystal display

The root of the concern about the interactive pixels lies in their existence as light. Not light per se, as in the visible band of the electromagnetic spectrum, but a specific kind of engagement photon. The widespread use of blue light in modern technology has been rendered guilty, evil, and possessing “a dark side”. But as it turns out, it is not all bad: according to Harvard Medical School, the blue pixels increase attention:

Not all colors of light have the same effect. Blue wavelengths—which are beneficial during daylight hours because they boost attention, reaction times, and mood—seem to be the most disruptive at night. And the proliferation of electronics with screens, as well as energy-efficient lighting, is increasing our exposure to blue wavelengths, especially after sundown. [3]

In public discourse, it is becoming increasingly evident that increased exposure to blue light via digital screens can have a detrimental effect on visual health, contributing to a condition known as Computer Vision Syndrome (CVS) or digital eye strain. CVS designates a group of vision problems associated with computer use; research indicates that approximately 70% of computer users are affected by it. The symptoms include, but are not limited to, eyestrain, headaches, blurred vision, and dry eyes, or even altered blinking patterns and eye movements. Blue light emitted by digital screens has a shorter wavelength and higher energy than other visible light, and increased exposure to it can negatively impact ocular health by contributing to CVS. Blue light scatters more easily than other light, which the eyes may interpret as visual noise, making it difficult to focus and potentially leading to visual discomfort. [4]

One of the leading manufacturers of optical systems and optoelectronics, Zeiss, introduces its vision care solution “BlueGuard” with a scientific-looking infographic that shows a cross-section of the human eye: dashed white rays enter through the cornea and lens, concentrating toward the retina, while numerous blue dots, representing tiny particles suspended in the vitreous humor, scatter this light in all directions (indicated by blue arrows). [5] The underlying physical phenomenon that the Zeiss infographic attempts to visualize is called Rayleigh scattering.

Best known from atmospheric science, Rayleigh scattering is a fundamental optical phenomenon in which light, or other electromagnetic radiation, is scattered by particles significantly smaller than the wavelength of the radiation. The phenomenon is named after the 19th-century British physicist Lord Rayleigh, who first quantified its effects. It occurs when the oscillating electric field of a light wave interacts with the electric polarizability of these tiny particles, causing them to act as small radiating dipoles that re-radiate the absorbed light. The scattering is characterized by a strong inverse proportionality to the fourth power of the wavelength (λ⁻⁴). Consequently, shorter wavelengths, such as blue and violet light, are scattered much more intensely than longer wavelengths like red light. [6]
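In formula form, with 450 nm and 650 nm taken as representative wavelengths for blue and red light (the exact values vary by source; the fourth-power relation does not):

```latex
% Rayleigh scattering: scattered intensity falls with the fourth power of wavelength
I(\lambda) \propto \frac{1}{\lambda^{4}}
\qquad\Longrightarrow\qquad
\frac{I_{\text{blue}}}{I_{\text{red}}}
  = \left(\frac{\lambda_{\text{red}}}{\lambda_{\text{blue}}}\right)^{4}
  = \left(\frac{650\,\mathrm{nm}}{450\,\mathrm{nm}}\right)^{4}
  \approx 4.4
```

Blue light is thus scattered roughly four times as intensely as red, a single relation that underlies each of the following phenomena.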

| Phenomenon | Explanation |
| --- | --- |
| Blue sky | Blue light from the sun scatters more in the atmosphere. |
| Red sunsets | Sunlight has to pass through more air; blue light scatters out first, leaving red and orange. |
| Computer Vision Syndrome | Blue light scatters in the eye more than other colors, reducing visual sharpness and increasing strain. |

As a result, Rayleigh scattering is responsible for everyday observations such as the blue color of the sky and the reddish hues seen during sunrise and sunset. During the day, when the sun is overhead, blue light is scattered in all directions across the sky; the sky appears blue rather than violet because human eyes are more sensitive to blue and because more blue than violet light is present in the sunlight entering the atmosphere. Conversely, at sunrise and sunset, sunlight travels a much longer path through the atmosphere. Over this extended distance, the blue and violet light is scattered away, leaving the less scattered red and orange light to reach our eyes and produce the vivid reddish hues observed. [7]

Zeiss BlueGuard: Easy on the eyes. More protection. Less reflection.
Rayleigh Scattering, Atmospheric optics concepts

What does it mean that the scattering of blue light, in the atmosphere and in the human eye, is responsible for exceptional human perceptions and the attentional memories connected to them? Whether they are manifested into existence by a natural light source such as the sun or by an artificial one made of OLED Retina pixels; whether they stem from a mid-June heatwave sunset night or the last seen TikTok from a best friend.

Myth and Measurement aka. There Is No Such Thing as Attention

This confrontation with the epistemic and material structures underlying the digital attention economy begins with a simple premise: that attention, as commonly used in a post-Snowden and post-adtech world (wide web), might not exist in any objective or causal sense at all. The popular concept of attention, and its supposed availability for commodification, might be sustained by the very apparatus that depends on it. Attention has been a central focus in psychology since antiquity, with its core issues recognized for millennia. However, parts of the discourse argue that despite extensive investigation, our understanding of attentional phenomena has made only slight progress, a stagnation attributed to several fundamental misconceptions about the nature of attention itself.

The first critical gesture of the experiment (Arranging Data Into Lists) is to confront the way attention is falsely attributed causal agency. In psychology, this misattribution is referred to as reification: treating an abstract idea as if it were a concrete, measurable phenomenon. The core argument is that attention never causes anything because “there is no such thing as attention” in this reified sense. Most research, even within psychology, incorrectly takes a causal position, suggesting, for instance, that “Attention helps optimize the use of our system’s limited resources” or “Attention increases sensitivity of V4 neurons”. Instead, Britt Anderson asserts that attention should be treated as an effect. When experimental manipulations lead to outcomes like faster reaction times, these are often conveniently labeled “attentional,” but the crucial question should be: “what was it about the experimental situation that produced this attentional effect?”. [8] Attention is not causal; it is an attentional effect that can be observed. Phrases like “paying attention makes me consume better” obscure the feedback loops between perception, behavior, and interpretation.

The problem lies not just in the everyday misuse of language, but in the way these assumptions are operationalized in software. Within the realm of algo gossip, the discursive and material manifestations of software blur. While material thought is very present around planetary computing and its software, discursive thought and capability are often neglected. The narrative sense-making associated with computation is becoming such a popular coping technique that one might consider software as narrative.

Algo Gossip

Media archaeologist Simone Natale does this by proposing “if software is narrative,” signaling that software's impact operates on two interrelated levels: the material changes it triggers and the discourses/narratives it generates. Narratives about software, including those about AI, influence decisions about governance and use, while debates about software are also informed by its material effects. The paper argues that to fully comprehend the social and cultural impact of software, one must analyze the diverse narratives that form around it (in addition to its material operations). Software's inherent opacity makes it particularly susceptible to being translated into a plurality of narratives that help people make sense of its functioning and presence. These narratives are crucial because they shape wider popular debates about digital media—especially in everyday conversations and everyday sense-making of the world. [9] Natale especially looks at the ELIZA effect and how Weizenbaum designed the infamous computer program to highlight the illusory nature of computer intelligence and to demonstrate that humans are vulnerable to deception when interacting with computers.

In the same way that Natale examines the opaque nature, human interpretation, and recurring cultural patterns of software, Anderson regards attention. He defines attention as an explanatorily empty placeholder and demonstrates this with a handcrafted natural language processing experiment. Used as a causal term, “attention” becomes “explanatorily empty”: it can be inserted into explanations without deepening theoretical understanding. For example, when researchers say “Some discriminations appear to be made automatically, without attention and spatially in parallel across the visual field. Other visual operations require focused attention and can be performed only serially,” the references to “attention” can be removed without losing comprehension of the empirical results. The word “attention” merely serves as a “theoretical wildcard” or “mental phlogiston” rather than providing specific causal insight. Note that we will come back to this declaration of a domain-specific stop word, as it will influence the sampling technique in the search for video content with zero views.
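To make that sampling concrete: a minimal sketch of the random-prefix approach against the public YouTube Data API v3. The search and videos endpoints are real, but the API key, the prefix format, and the stop-word list (interpreted here as exclusion filters over result titles) are placeholder assumptions, not the project's documented implementation.

```typescript
// Sketch: query random camera-filename prefixes, then keep only
// videos whose view counter never moved. KEY is a placeholder.
const KEY = "YOUR_API_KEY";

// Hypothetical domain-specific stop words: the attention vocabulary
// the sampling deliberately routes around.
const STOPWORDS = ["attention", "engagement", "viral"];

function randomPrefix(): string {
  // Default filename stubs of cameras and phones, e.g. "IMG 4821"
  return `IMG ${Math.floor(Math.random() * 10000)}`;
}

async function sampleZeroViews(query: string): Promise<string[]> {
  const search = await fetch(
    "https://www.googleapis.com/youtube/v3/search" +
      "?part=snippet&type=video&order=date&maxResults=50" +
      `&q=${encodeURIComponent(query)}&key=${KEY}`
  ).then((r) => r.json());
  const ids = search.items.map((i: any) => i.id.videoId).join(",");

  // A second request is needed: view counts live in the videos endpoint.
  const stats = await fetch(
    "https://www.googleapis.com/youtube/v3/videos" +
      `?part=snippet,statistics&id=${ids}&key=${KEY}`
  ).then((r) => r.json());

  return stats.items
    .filter((v: any) => v.statistics.viewCount === "0") // counts arrive as strings
    .filter(
      (v: any) =>
        !STOPWORDS.some((w) => v.snippet.title.toLowerCase().includes(w))
    )
    .map((v: any) => v.id);
}

sampleZeroViews(randomPrefix()).then(console.log);
```

Note the asymmetry the sketch inherits from the platform: even finding the unseen requires asking the metrics layer for its numbers.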

Building on the claim that “attention” may be an illusory construct, the work dismantles the reification of attention as a measurable entity. It questions the validity of attention not only as a psychological phenomenon but as the foundation of the so-called attention economy: an industry whose primary asset is a proxy for something that remains conceptually undefined. Here we see the rationalistic tradition at work, a worldview that seeks to reduce problems into formally defined objects with properties and rules. The transformation of attention into trackable data (clicks, views, scrolls) is one such reduction. As Winograd and Flores argue, this tradition frames cognition as a formal system, ignoring the interpretative, situated, and embodied nature of human understanding. [10] Attention, then, is not observed but designed. Thus, the project exposes a central paradox: the attention economy operates not on real attention, but on its simulation. And this simulation is shaped by computational infrastructures that reward predictability, regularity, and the computable. As James Bridle contends, computation doesn’t merely reflect our world; it constructs a world where only that which is computable is possible. [11] Everything else (uncertainty, ambiguity, resistance) is excluded. Similarly, the software systems we build to “understand” attention are not mirrors of human cognition but reflections of our preconceptions. These systems do not tell us how attention works; they tell us how we have chosen to formalize it.

Developed in 1983 to predict what types of interface features would work, The Human Processor Model

Missing Existence aka. Proxies

So how is it that the very thing which means nothing, does not exist, and is non-computable nevertheless constitutes itself throughout databases, is literally programmed as the fundamental structure of contemporary online experiences, and is regarded as the business model of the internet? What are the techniques by which digital systems render attention visible, manipulable, and, ultimately, profitable? One of the underlying mechanisms of digital capital is the construction of the reality of attention itself, as attention can only exist as a proxy of itself.

"Phone Swing Device Perfect for Hatching Eggs or Buddy Candy in Pokemon Go" and "Undetectable Mouse Jiggler Mouse Mover Simulator"

A proxy is generally understood as something that acts on behalf of, or stands in for, another, often facilitating indirect access or mediating a process. When discussing proxies of attention, the concept refers specifically to the data and behavioral indicators that stand in for a person's attention. When an individual pays attention to something, or ignores it, data is generated that serves as a direct proxy for that attention, reflecting their interests, activities, and values. Already in the midst of Web 2.0, still before Real-Time Bidding, digital platforms used user behavior and search/click/scroll history to microtarget content, caring less about a user's identity than about the likelihood of motivating an action; these behaviors act as proxies for engagement and interest. [12] This computationally mediated behavioral data has become the materialization not only of attention, but of contemporary concepts of capitalism itself. It is because of this proxian nature that attention has become a contested field for imitation, as attention can only exist as an imitation of itself.

Automated engagement refers to the practice of generating artificial interactions on digital platforms, often by simulating human behavior through computational systems. The phenomenon is closely associated with “group control systems” and “clickfarms,” which emerged primarily in China around 2019. Automated engagement systems fulfill the demand for “traffic” (引流): creating profiles, posting reviews, pre-playing mobile games, and inflating livestream views. They evolved from manually operated phone setups to computer-controlled systems and eventually to cloud-based systems that can manage thousands of devices remotely, sometimes reduced to bare phone motherboards. Mouse wigglers, phone swipers, and algorithmically generated traffic become infrastructural symptoms of a deeper epistemological instability. These are not glitches; they are systemic exploitations of the very logic that equates human existence with attention.
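How little it takes to perform presence can be shown in a few lines. Below is a sketch of jiggler logic in the browser, the software counterpart of the phone-swing devices pictured above; the interval and the synthetic mousemove event are illustrative assumptions, not any particular product's mechanism.

```typescript
// Keep a session registering as "active" by dispatching synthetic
// pointer movement. To the metrics layer, attention is now present.
function startJiggler(target: EventTarget = document): number {
  return window.setInterval(() => {
    target.dispatchEvent(
      new MouseEvent("mousemove", {
        bubbles: true,
        clientX: Math.random() * window.innerWidth,
        clientY: Math.random() * window.innerHeight,
      })
    );
  }, 30_000); // one nudge every 30 seconds
}

const sessionId = startJiggler();
// window.clearInterval(sessionId) ends the performance of attention.
```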

For a more extensive analysis of automated engagement, see https://human-driven-condition.cnrd.computer

From the Difference in Rendering Pixels to the Rendering of Difference Between Fact and Fiction

To be part of the internet today is to be served (ads). As the financialization of attention takes shape in the complex of programmatic advertising, traditional extraction methods like cross-site cookies are gradually losing importance. The post-cookie technique of Canvas Fingerprinting is one of many methods to distinguish user+devices on the WWW. It exploits variations in tech stacks, specifically the interplay of soft- and hardware, to create high-entropy identifiers, which can then be linked with previously extracted behavioral data. The condition is being stuck on the platform.

Since Web 2.0 and its privatization of infrastructure, the requested (domain-) possibilities shrink continuously. As in Platform Realism, it is difficult to imagine an internet without infinite-scroll interfaces+synthetic content generation, where content's essential purpose is the extraction of interaction. Or, as internet critic Geert Lovink describes the user's decision fatigue and the platforms' recommendation inflation:

We need to reject the predictable platform approach and reinvent the media realm once more as a possibility space, one driven by hyper-sensual curiosity. [13]

The operation of the difference in rendering pixels originates in the interface HTMLCanvasElement, which provides “scripts with a resolution-dependent bitmap canvas, which can be used for rendering visual images on the fly.” The canvas element is created in the Document Object Model (DOM) as a computational+functional representation within the website's code; it has no visual representation, however, making it visually non-perceivable by the human user. Through specifically tailored instructions, containing seemingly random text pangrams, basic shapes, color gradients, and emojis, the bitmap (pix-map) of the canvas can be manipulated in precise ways to serve the extractive aesthetics of the operation. While these instructions aka. code stay the same for every page load, the composition of actually rendered pixels, depending on differences in system fonts, antialiasing, sub-pixel smoothing, GPU drivers, or even the physical display, exhibits exactly these differences as proxified rgb() values.

Comparison of antialiasing and subpixel smoothing, Technique de rendu souspixel

The technique finds its conclusion in url = canvas.toDataURL([ type, ... ]), which “returns a data: URL for the image in the canvas” and thereby reduces the pixel values of the canvas to a hashable Base64 string, used as an identifier serving as the constructed reality of the user.
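Condensed into a sketch, the whole operation fits in one function. The drawing instructions below are illustrative (real fingerprinting scripts vary them), and the SHA-256 digest stands in for whatever hashing a given script applies to the Base64 string.

```typescript
// Canvas fingerprinting, schematically: identical drawing instructions,
// machine-specific pixels, one hashable identifier.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas"); // never attached to the page
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D context unavailable");

  // The same instructions on every machine ...
  ctx.textBaseline = "alphabetic";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(100, 5, 80, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("OynG@%t \u{1F43C}\u{1F385}", 4, 20); // cf. the panda and Santa of the TikTok sample
  const gradient = ctx.createLinearGradient(0, 0, 240, 0);
  gradient.addColorStop(0, "rgb(255,0,255)");
  gradient.addColorStop(1, "rgb(0,255,255)");
  ctx.fillStyle = gradient;
  ctx.fillText("Cwm fjordbank glyphs vext quiz", 4, 45); // pangram

  // ... rendered differently by each font stack, antialiaser, and GPU driver.
  const dataUrl = canvas.toDataURL(); // Base64-encoded PNG of the bitmap
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(dataUrl)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```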

Canvas Fingerprint from tiktok.com in 2023 featuring the text "OynG@%t", various symbols, panda and Santa emojis, and a purple circle highlighting some of the characters below

Cultural Techniques is a concept that emerged from the German media theory discussions of the 2000s. [14] It is one of many concepts that try to describe the relationship between humans and technology, or more specifically in this case: media. Nevertheless, it is a concept that leaves the realm of media theory and draws on philosophical and anthropological domains, most notably in its ontological capabilities and the way it creates realities. Cultural techniques today would describe “human-computer interaction” as a collection of fundamental operations and differentiations that give rise to various ontological and conceptual entities considered to construct culture. It is exactly this centering of medial procedures, which they call operations (and their differentiations), as that which constructs reality through media, that makes the approach so appealing for the further understanding of the difference in rendering pixels.

Looking at the early theorization of digital computing, Norbert Wiener opposed the new technology of the non-analog machine, stating that the digital is always an artificially produced representation, actively excluding certain phenomena from reality (as in nature). Nature is continuous, non-dividable; it does not play by the calculative binary rules enforced by the digital. There is no on and off, no 1s and 0s, no stop and start. This is what Wiener terms “times of non-reality”: a transitional state which, for the digital, is coined as not real. [15]

Cultural techniques are interested in precisely these procedures that claim reality through medial operations. A parallel can be found in Cornelia Vismann and her work Files: Law and Media Technology, exemplary research that shows the methodological relevance of cultural techniques. In her 2000 book, Vismann analyses the legal system, but not as a traditional legal historian would. She does not draw conclusions by looking at institutions and their formalized law in written text, but rather focuses on the material files, and specifically on the situated medial use of those files. [16] For cultural techniques, the “law is not an institution. It is not in the institutional text. It is in the files—the processing of files,” as Bernhard Siegert explains in a 2015 interview with Artforum.

Earlier in the interview, Siegert shares research findings on the bureaucratic struggles of people in sixteenth-century Spain who wanted to emigrate to the New World. His surprise finding was a wrinkled piece of paper from the authorities themselves, expressing the belief that everything produced by this massive bureaucratic apparatus was fraudulent: every questioned person turned witness, every written documentation, every signed report was fake, was made up. From this historical piece of discourse, Siegert concludes the following:

At the same time that this huge writing machine produces facts, it also produces fiction. Fact and fiction are of the same origin. You have a material procedure consisting of the materialities of reading, writing, hearing witnesses, issuing licenses, registering people: discursive practices, as Foucault would have called them. And they are neither on the side of the facts nor on the side of the fiction. They are, at that moment, producing a difference between fact and fiction. [17]

In many ways, the identifying, categorizing, auctioning algorithms of targeted ads aka. Real-Time Bidding can be considered a networked bureaucratic machine, one that has often been compared to the 2008 financial crisis [18] and declared a self-fulfilling prophecy.
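The auction at the machine's core can be sketched in a few lines. Second-price settlement, shown here, was RTB's historical default (many exchanges have since moved to first-price); the bidder names, floor, and CPM values are invented for illustration.

```typescript
// Schematic second-price auction over one impression of one user,
// who is present only as behavioral data behind a fingerprint hash.
interface Bid {
  bidder: string;
  cpm: number; // dollars per thousand impressions
}

function runAuction(bids: Bid[], floor = 0.01): Bid | null {
  const eligible = bids
    .filter((b) => b.cpm >= floor)
    .sort((a, b) => b.cpm - a.cpm);
  if (eligible.length === 0) return null;
  // The winner pays just above the runner-up's bid (or the floor).
  const clearing = (eligible[1]?.cpm ?? floor) + 0.01;
  return { bidder: eligible[0].bidder, cpm: Math.min(clearing, eligible[0].cpm) };
}

console.log(
  runAuction([
    { bidder: "dsp-a", cpm: 2.4 },
    { bidder: "dsp-b", cpm: 1.9 },
    { bidder: "dsp-c", cpm: 0.6 },
  ])
); // -> { bidder: "dsp-a", cpm: 1.91 }
```

All of this, identification, categorization, auction, typically settles within about 100 milliseconds, before the page has finished loading.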

Display Advertising with Real-Time Bidding (RTB) and Behavioural Targeting, arxiv.org/abs/1610.03013

Here, the situated use of the difference in rendering pixels (and the operations that follow it within Real-Time Bidding) is seen as a material chain of cultural techniques: a computational extractive technique, an online surveillance apparatus reduced to invisual pixels, that poses severe consequences in material reality, constantly producing, rendering, the difference between fact and fiction.

Architectures of Participation

The term “Web 2.0,” coined in 2004, refers to a supposed second generation of Internet-based services like social networking sites, wikis, and communication tools that emphasize online collaboration and sharing among users. User-Generated Content (UGC), also referred to as user-created content (UCC), is a key concept in understanding the evolution of digital media and the Internet. It signifies a fundamental shift in how content is produced and consumed online, transforming traditional audiences into active participants and agents of cultural production. The Organisation for Economic Co-operation and Development (OECD) defines UGC as content that is made publicly available over the Internet, reflects a “certain amount of creative effort,” and is “created outside of professional routines and practices”. [19] It often involves remixing, combining, changing, and adapting existing media texts, as well as creating entirely new works. Unlike Web 1.0, which was primarily focused on consumption and required technical knowledge for content creation, Web 2.0 is characterized as a “read/write” medium where users easily and actively participate in designing and editing content and contributing freely to shared services. While explicit participation involves conscious, voluntary acts of cultural production, Web 2.0 increasingly formalized participation as a default design feature, known as “implicit participation.” This means users contribute data and content simply by using a service, often without conscious effort or awareness.
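Implicit participation can be made concrete in a few lines: merely using a page is the contribution. In the sketch below, scrolling alone generates data; the /collect endpoint and the payload are hypothetical.

```typescript
// Implicit participation: reading becomes writing. A scroll-depth
// tracker turns mere presence into a data contribution.
let maxDepth = 0;

window.addEventListener("scroll", () => {
  const depth =
    (window.scrollY + window.innerHeight) / document.body.scrollHeight;
  maxDepth = Math.max(maxDepth, depth);
});

window.addEventListener("pagehide", () => {
  // sendBeacon survives page unload; no click, no per-event consent.
  navigator.sendBeacon("/collect", JSON.stringify({ maxDepth }));
});
```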

"It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core", Design Patterns and Business Models for the Next Generation of Software

On the Existence

  1. ChatGPT w/ GPT-4o, Response to prompt Why are we looking at screens so much?, 2025.

  2. Wanda Strauven, Touchscreen Archaeology: Tracing Histories of Hands-On Media Practices, meson press, 2021.

  3. Harvard Health Publishing, Blue light has a dark side, 2024. https://www.health.harvard.edu/staying-healthy/blue-light-has-a-dark-side

  4. Wong NA, Bahmani H, A review of the current state of research on artificial blue light safety as it applies to digital devices, Heliyon, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC9420367/

  5. Zeiss, BlueGuard: Easy on the eyes. More protection. Less reflection., 2025. https://www.zeiss.co.in/vision-care/need-new-lenses/blueguard.html

  6. Llama-4-Maverick-17B-128E-Instruct-FP8, Understanding Rayleigh Scattering in Materials, 2025. https://www.numberanalytics.com/blog/rayleigh-scattering-optical-properties

  7. Georgia State University, HyperPhysics: Blue Sky, 2000. http://hyperphysics.phy-astr.gsu.edu/hbase/atmos/blusky.html

  8. Britt Anderson, There Is No Such Thing as Attention, Frontiers in Psychology, 2011.

  9. Simone Natale, If Software Is Narrative: Joseph Weizenbaum, artificial intelligence and the biographies of ELIZA, New Media & Society, 2019.

  10. Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design, Ablex, 1987.

  11. James Bridle, New Dark Age: Technology and the End of the Future, Verso, 2018.

  12. Geert Lovink, Zero Comments: Blogging and Critical Internet Culture, Routledge, 2007.

  13. Geert Lovink, Stuck on the Platform: Reclaiming the Internet, Valiz, 2022.

  14. Monoskop, Cultural Techniques. https://monoskop.org/Cultural_techniques

  15. Norbert Wiener, The Human Use of Human Beings, Houghton Mifflin, 1950.

  16. Cornelia Vismann, Files: Law and Media Technology, Stanford University Press, 2008 (German original 2000).

  17. Artforum, Material World: An Interview with Bernhard Siegert, 2015.

  18. Tim Hwang, Subprime Attention Crisis, FSG Originals, 2020.

  19. Mirko Schäfer, Bastard Culture! How User Participation Transforms Cultural Production, Amsterdam University Press, 2011.