Sunday, 22 February 2026

SR: Film Screening - Humans in the Loop

Humans in the Loop: Movie Review



This blog has been written as part of the Sunday Reading Film Screening activity assigned by Dr. Dilip Barad Sir after watching Humans in the Loop. The purpose of this activity was to move beyond passive viewing and encourage students to critically engage with the film’s central ideas. It invited us to thoughtfully examine how the documentary represents artificial intelligence, invisible digital labour, and the essential role of human workers within automated systems. Through this reflective exercise, students were encouraged to analyse the realities behind technological advancement and to develop a more nuanced understanding of the social and ethical dimensions of today’s digital world.




Pre-viewing task:


1. AI Bias and Indigenous Knowledge Systems

AI bias can be understood as the systematic distortion that emerges in machine learning systems when the data used to train them reflects existing social, cultural, racial, or geographic inequalities. Rather than being neutral, AI systems inherit the assumptions and blind spots embedded in the datasets they rely on, often privileging dominant perspectives while marginalising alternative ways of knowing.

Since AI models depend on human-annotated data, the values of those who design, manage, and finance these systems inevitably shape the categories through which the world is interpreted. In Humans in the Loop, this tension becomes visible through Nehma’s experience as a data annotator. She is instructed to classify ecological entities (plants, insects, animals) according to industrial labels such as “pest,” “weed,” or “crop.” These rigid classifications clash with her Oraon Adivasi worldview, which understands nature relationally, contextually, and as part of an interconnected ecosystem rather than through utilitarian binaries.
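To make this rigidity concrete, the short sketch below is my own illustrative example, not drawn from the film: the label set and function names are hypothetical. It shows how a fixed annotation schema simply rejects any description that falls outside the client-defined categories, no matter how accurate it may be in context.

```python
# Illustrative sketch only: a hypothetical annotation schema of the kind the
# film describes, where the label set is fixed in advance by the client.
ALLOWED_LABELS = {"pest", "weed", "crop"}  # rigid, industry-defined taxonomy

def annotate(entity: str, label: str) -> dict:
    """Record an annotation, accepting only labels from the fixed taxonomy."""
    if label not in ALLOWED_LABELS:
        # Any relational or contextual description is rejected as "invalid".
        raise ValueError(f"'{label}' is not in the permitted taxonomy: {sorted(ALLOWED_LABELS)}")
    return {"entity": entity, "label": label}

print(annotate("caterpillar", "pest"))   # accepted by the schema
# annotate("caterpillar", "harmless")    # would raise ValueError: not a permitted label
```

The point of the sketch is that the schema has no slot for a judgement such as “harmless” or “medicinal”; the worker can only choose among categories someone else has authored.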

The film foregrounds the relativity of such terms: what counts as a “weed” in an industrial agricultural framework may be medicinal or sacred within an Indigenous ecological context. This raises an urgent question: will an extractive, consumption-driven economy ultimately dictate what counts as legitimate knowledge? Indigenous ecological knowledge (IEK), being place-based and holistic, resists the reductive logic required by machine learning architectures. In doing so, the film reveals how AI systems often reproduce capitalist and colonial epistemologies.

A powerful example appears when an AI image generator, prompted by an Adivasi child who wants to see himself riding a crocodile, instead produces an image of a white boy atop an alligator. The scene sharply illustrates how certain identities are coded as “default,” while others are erased or distorted. Ultimately, the film compels viewers to question who determines “ground truth” in AI systems and whose knowledge traditions are excluded in the process.


2. Labour and Digital Economies

Digital economies rely heavily on forms of labour that remain largely invisible. Tasks such as data annotation, content moderation, and image classification—often low-paid and feminised—constitute the hidden infrastructure behind technologies marketed as fully automated or “intelligent.”

The phrase “artificial intelligence” itself conceals the human workforce sustaining it. In India alone, tens of thousands of workers—many of them rural women—perform repetitive cognitive labour that trains and refines global AI systems. Yet this labour remains obscured: geographically distant from tech hubs, socially marginalised, and erased from the final technological product.

Humans in the Loop carefully captures the texture of this work environment—fluorescent-lit offices, slow computers, constant pressure to meet targets, and the alienation of labelling images using categories disconnected from the workers’ lived realities. By documenting these conditions, the film dismantles the myth of AI as a disembodied, neutral technology. Instead, it exposes complex supply chains of labour that echo older colonial and caste-based systems of extraction.

The film also raises ethical concerns about compensation, recognition, and intellectual ownership. Whose cognitive labour is being commodified to build billion-dollar industries? Who receives credit, and who remains invisible? By centring these questions, the documentary insists that human labour must be acknowledged as foundational rather than peripheral to narratives of technological progress.


3. Politics of Representation

Representation in Humans in the Loop operates on two intertwined levels: first, how AI systems represent or fail to represent Adivasi communities; and second, how the film itself portrays both technology and Adivasi life to broader audiences.

Public discussions surrounding the film emphasise its distinctive perspective: instead of celebrating technological advancement, it interrogates how such progress can deepen exclusion and marginalise Indigenous knowledge systems. Adivasi experience becomes not a background element but the primary analytical framework through which AI is examined.

One of the film’s most striking moments occurs when an AI image generator produces a stereotypical, Europeanised depiction in response to a request to generate a tribal woman. This scene reveals how training datasets embed colonial biases, leading AI systems either to overlook Adivasi identities altogether or to reproduce them through distorted lenses.

At the same time, debates around the film’s own representational politics are varied. Some critics commend its grounded research and authenticity, while others question whether it risks aestheticising Adivasi life for liberal, festival-oriented audiences. Such critiques highlight that representation is never neutral; it is always political and structural, not merely artistic.

Even the film’s bilingual use of Hindi and Kurukh functions as a deliberate representational gesture, granting visibility to an Adivasi language rarely heard in mainstream cinema.

Taken together, the documentary encourages viewers to remain alert to a double risk: AI systems may misrepresent marginalised communities, and cinema, if uncritical, can replicate similar distortions. The film thus calls for ethical vigilance in both technological and cultural production.


While-watching task:

1. Narrative & Storytelling

How does the film connect Nehma’s personal life with larger algorithmic systems?

In Humans in the Loop, Nehma’s individual story is carefully woven into the broader framework of global algorithmic infrastructures. Rather than portraying data annotation as an abstract or purely technical occupation, the film embeds it within the rhythms of her domestic life, economic realities, and cultural environment. By situating digital labour inside the home, the narrative makes visible how international AI systems depend upon localised, often precarious, forms of work that remain largely unrecognised.

Key narrative moments emphasise the entanglement of labour, family, and knowledge. Scenes depicting Nehma balancing annotation tasks alongside household responsibilities foreground the gendered dimension of digital work, where the separation between professional and personal spheres dissolves. These sequences reveal that AI labour is not detached from lived experience but embedded within emotional and familial structures.

Another important narrative shift occurs when Nehma confronts the challenge of categorising ecological entities through rigid industrial labels. Her Oraon understanding of plants and animals—shaped by relational and contextual knowledge—comes into tension with algorithmic classification systems. Through this epistemic friction, the film demonstrates that annotation involves interpretation and negotiation rather than mechanical execution.

Conversations within her family and community further ground her labour in collective life, suggesting that her work cannot be reduced to individual employment alone. Visually, the film contrasts the natural environment she inhabits with the digital interfaces she navigates, underscoring the distance between lived ecological knowledge and machine-readable taxonomies. In doing so, the narrative illustrates how algorithmic systems permeate intimate spaces and reconfigure family life, cultural identity, and local epistemologies.


When Nehma “teaches” AI, what does this reveal about human–machine learning loops?

Nehma’s role in “teaching” AI challenges the notion that machine learning is autonomous or self-sufficient. The film reveals that AI systems acquire intelligence through continuous human input—absorbing judgement, contextual interpretation, and culturally shaped assumptions. In this sense, the so-called learning loop is not purely technological but deeply human-driven.

Her annotation work demonstrates that AI learning depends on subtle acts of meaning-making. Each label she assigns involves decision-making about context and relevance, showing that machines do not directly perceive reality but inherit it through mediated human perspectives. This reframes the human–machine loop as a collaborative yet unequal relationship: humans shape AI’s intelligence, yet their contributions remain invisible in the final technological product.

Moreover, what Nehma transmits to the system is influenced by her own worldview, even when constrained by industrial terminology. The learning loop thus becomes epistemological as well as technical. Certain forms of knowledge are translated into algorithmic structures, while others are compressed or excluded. The film ultimately invites viewers to reconsider AI as a socio-cultural process, raising questions about authorship, agency, and recognition. If humans are the teachers, whose knowledge becomes amplified and whose remains marginal?


2. Representation & Cultural Context

How are Adivasi culture, language, and ecological knowledge portrayed?

The film presents Adivasi culture with grounded authenticity rather than romantic spectacle. Cultural practices appear organically within everyday life, through domestic routines, community interactions, and subtle markers of tradition, allowing Adivasi identity to emerge as lived experience rather than as an exoticised image.

Tradition is depicted not as static but as coexisting with contemporary digital labour. Nehma’s identity as an Oraon woman shapes her relationship to work, environment, and community, illustrating continuity between ancestral knowledge and modern participation in global economies.

Language becomes a powerful signifier of identity. The contrast between local speech and the English-dominated digital interface reveals linguistic hierarchies embedded within technological systems. While her mother tongue carries cultural memory and ecological wisdom, it remains peripheral in algorithmic spaces, highlighting the unequal valuation of languages within global infrastructures.

Ecological knowledge is central to the film’s representational politics. Nehma’s understanding of nature is relational and experiential, shaped by interaction rather than abstraction. When this holistic perspective encounters reductive labels like “pest” or “weed,” the tension exposes the limits of industrial classification. Through this contrast, the film foregrounds Indigenous ecological knowledge as both resilient and epistemologically distinct.

Overall, Adivasi identity is portrayed as dynamic and adaptive. The film recognises how cultural memory and environmental knowledge persist even as communities engage with digital economies, while simultaneously pointing to the lack of adequate recognition within dominant technological narratives.


Does the film disrupt or reproduce stereotypes about tribal communities and technology?

The film largely works to dismantle prevailing stereotypes surrounding Adivasi communities and modernity. Popular media often casts tribal groups as technologically disconnected or nostalgically bound to nature. Humans in the Loop unsettles this binary by portraying Nehma as simultaneously rooted in her cultural traditions and actively participating in global AI production.

Her role as a data annotator demonstrates that Adivasi communities are not outside technological systems but are embedded within them as essential contributors. The narrative foregrounds her intellectual labour, showing her making interpretative decisions and shaping machine learning processes. This challenges the assumption that technological expertise is confined to urban or elite spaces.

At the same time, the film remains attentive to structural inequalities. By depicting precarious labour conditions and epistemic tensions between Indigenous knowledge and algorithmic frameworks, it reveals how technological systems can marginalise the very communities that sustain them. This nuanced portrayal resists simplistic celebration or victimhood narratives.

Ultimately, the documentary reframes Adivasi identity as modern, thoughtful, and technologically engaged. It shifts the focus from exclusion to recognition, suggesting that the core issue is not absence from technological futures but unequal visibility and acknowledgement within them.


3. Cinematic Style & Meaning




Mise-en-Scène & Cinematography: Humans in the Loop (2025)


Aspect Ratio — The Film’s Foundational Formal Strategy

The film adopts a 1.55:1 near-square aspect ratio, a deliberate stylistic decision that subtly echoes the proportions of a computer screen. This framing produces a contained, intimate visual field that feels almost like a storybook panel, inviting viewers into spaces and lives that mainstream cinema rarely centres. At the same time, by holding both the forest and the data centre within the same visual proportions, the film resists privileging one environment over the other. Neither nature nor technology is romanticised; both are observed with equal formal discipline.

The Forest

In the forest sequences, wide-angle compositions situate characters within the landscape rather than isolating them from it. The human figure is not positioned as dominant or oppositional to nature but embedded within its textures and rhythms.

Lighting plays a crucial role here. Natural, filtered sunlight creates warmth and softness, dispersing illumination evenly across the frame. There are no dramatic spotlights or harsh contrasts; instead, every element participates equally in the visual field.

The camera frequently lowers itself to ground level, most notably in the porcupine sequence, placing humans and animals along the same horizontal axis. This choice visually encodes a worldview grounded in coexistence rather than hierarchy.

Compositionally, the forest resists geometry. Roots, foliage, and branches interrupt straight lines, producing irregular forms that defy order. The organic fragmentation of the frame stands in quiet opposition to the rigid grid logic of digital interfaces.

The Workspace / Data Centre

By contrast, the data centre is filmed through tighter framing: mid-shots and close-ups that compress spatial depth. Walls, ceilings, and screens remain visible, giving the impression of enclosure and limiting any sense of expansiveness beyond the frame.

Lighting shifts dramatically. Fluorescent, artificial illumination flattens surfaces and removes shadow, creating a sterile environment. This flatness mirrors the binary logic of data labelling: an object is categorised as one thing or another, with little room for ambiguity.

The cool blue-grey tones of the workspace sharply contrast with the earthy warmth of the forest scenes. Colour itself becomes a structural device, distinguishing pixelated environments from ecological ones.

The computer monitor functions not merely as a tool but as a dominant light source. Its glow falls onto Nehma’s face, symbolically reversing the usual relationship between subject and object. Here, the machine casts illumination onto the human body, suggesting a directional flow of authority.

Workers are often framed in rows, aligned toward their screens. This staging recalls assembly-line production, visually situating digital annotation within the history of industrial labour rather than within narratives of futuristic innovation.

Ritual and Domestic Spaces

Domestic and ritual settings are filmed with medium close-ups that feel attentive rather than restrictive. The camera lingers gently, allowing gestures and textures to unfold without urgency.

Close framing emphasises tactile materials (stone, bark, woven fabric, soil), foregrounding a sensory relationship to knowledge. This visual texture argues implicitly that Adivasi epistemologies are embodied and materially grounded, not easily reducible to data points.

In scenes of intergenerational exchange, eye-lines are horizontal and reciprocal. Nehma and her children meet at the same visual level, signalling equality and shared learning. This contrasts sharply with the vertical blocking in the data centre, where supervisors often stand over seated workers, reinforcing hierarchies of control.

The Central Visual Thesis

Spatial contrast becomes the film’s quiet argumentative engine. Expansive wide shots of the forest are juxtaposed with compressed interiors of the data centre, articulating a visual dialectic between openness and confinement, landscape and interface, relational knowledge and pixel-based categorisation.

The most striking articulation of this contrast appears in the parallel editing between the AI infant and Nehma’s child. Close-ups of digitised muscle-tracking data are visually echoed by intimate shots of her baby’s limbs. Through this mirroring, the cinematography invites viewers to perceive the two images as analogous. Without explicit commentary, the edit suggests that the labour invested in training the machine parallels and perhaps competes with the attention given to her own child.

Throughout, the film handles the tension between tradition and technological modernity with restraint. Instead of overt visual symbolism, it relies on subtle shifts in colour temperature, framing, and composition to carry its argument. The result is a cinematographic language that articulates conflict and coexistence through atmosphere rather than exposition, allowing form itself to become the site of critique.


How do sound design and editing rhythms contribute to the contrast between analog life and digital labour?


The Sound Team

Sound Design: Kalhan Raina
Score: Saransh "Khwabgah" Sharma
Editing: Swaroop Reghu & Aranya Sahay


Division of Creative Labour

The film’s formal coherence emerges from a clearly structured collaboration. Raina shapes the diegetic universe: the tangible world the characters inhabit. Sharma constructs the non-diegetic emotional layer through music, guiding how the audience feels without intruding upon the narrative space. The editors, Reghu and Sahay, regulate duration and pacing, determining the tempo at which scenes unfold. Together, they control space, emotion, and time: what exists, what resonates, and how long each moment is allowed to linger.


Sound Design: Organic vs. Mechanical Worlds

At the most fundamental level, the film builds a sonic opposition between living environments and technological ones. This contrast is not decorative; it structures the film’s political argument.

Forest and Domestic Soundscapes

In the forest and home environments, sound is layered and non-hierarchical. Bird calls, rustling grass, flowing water, insects, children’s chatter, and communal singing coexist without a dominant centre. The soundscape feels fluid and unpredictable. Rather than isolating a single sonic element, the mix allows multiple textures to overlap, producing a polyphonic field that mirrors a relational ecological worldview.

Significantly, these sounds retain their rough edges. Background hums, environmental interference, and acoustic imperfections are preserved rather than polished away. This refusal to sanitise the track reinforces a sense of embodied presence. Living sound is treated as textured and irreducible, not as a clean, optimised signal. The auditory texture itself becomes an argument for the complexity of organic life.

The Data Centre’s Contracted Sound

In contrast, the acoustic field inside the data centre narrows dramatically. The dominant sounds are repetitive and mechanical: keyboard tapping, mouse clicks, fluorescent buzzing, system boot tones, and the faint lag of aging computers. Unlike the forest’s layered unpredictability, these sounds are regular, rhythmic, and uniform.

The mouse click, in particular, takes on symbolic weight. Each click finalises a decision—pest or not pest, crop or weed, valid or invalid. It is a minimal, almost dry sound, yet it carries the force of classification. In its brevity and finality, the click performs the violence of binary reduction. What appears insignificant acoustically becomes politically charged: a single sound enacts the transformation of complex life into data.

The Score: A Deliberate Sonic Bridge

Sharma’s musical approach combines organic instrumentation—guitar and piano—with synthetic textures and ambient electronics. Drawing from ambient, post-classical, and downtempo influences, the score unfolds gently in layered compositions that feel tactile and introspective. Rather than dominating scenes, the music provides a subtle tonal undercurrent.

Importantly, the score does not segregate sonic worlds. Acoustic instruments are not reserved solely for forest scenes, nor are electronic textures confined to technological spaces. Instead, both registers coexist within the same compositions. This blending mirrors Nehma’s lived reality: she inhabits both ecological and digital worlds simultaneously. The music refuses a simplistic binary, just as her life refuses neat categorisation.

In moments such as the opening porcupine sequence, the ambient score adds a dreamlike resonance. The music elevates the visual event without overstating it, transforming a small encounter into something quietly mythic. Yet even here, restraint governs the composition. The score does not swell or dictate emotion. Its understatement leaves interpretive space open, an ethical gesture in a film concerned with who has the authority to define meaning.


The Kurukh Music Question: Balancing Authenticity and Legibility

One of the film’s most thoughtful sonic decisions concerns the integration of Kurukh music. Oraon musical traditions often resist fixed metre and stable key structures. Rhythms shift, tonalities change mid-performance, and the flow does not conform to predictable cinematic timing.

The challenge, then, was how to honour this musical authenticity without alienating audiences accustomed to regularised rhythmic patterns. The inclusion of synthesizers, violin, and other bridging instruments creates a hybrid space. These elements provide enough familiarity to anchor listeners while retaining the distinctiveness of Kurukh musical logic.

This negotiation is itself political. The score becomes an audible site of translation between knowledge systems: between irregular Indigenous musical structures and the metrical expectations of mainstream cinematic spectatorship. Crucially, the director acknowledges this compromise transparently. The act of naming the negotiation becomes part of the film’s ethical stance.

Editing: Time as Political Form

Editing functions as another site of ideological expression. With Sahay directly involved in the edit, pacing becomes inseparable from thematic intent.

Slowness in Ecological Sequences

Forest and domestic scenes unfold in longer takes with minimal cuts. The edit allows moments to accumulate organically. Encounters—whether between Nehma and a porcupine or between mother and child—are not hurried. This slower tempo encodes an alternative experience of time: attentive, non-industrial, and unmeasured by productivity.

Acceleration in Labour Sequences

Within the data centre, the rhythm tightens. Cuts become more frequent, mirroring the pace of annotation work. Images flash in succession, echoing the rapid click-rate required of workers. The viewer is subtly trained into the same cognitive tempo as Nehma. In doing so, the editing implicates the audience in the rhythm of classification it critiques. The film makes us feel the compression of attention.

The Parallel Edit: AI Infant and Guntu

The most striking editorial gesture occurs in the intercutting between Nehma annotating infant muscle data and close-ups of her own son Guntu’s limbs at home. The temporal rhythm remains consistent across both spaces. The edit neither dramatizes nor accelerates the juxtaposition.

By maintaining equal pacing, the film argues that these two acts occupy the same temporal unit. The labour given to the machine and the care given to the child share the same measure of time and attention. The cut itself makes the argument; no explanatory dialogue or emphatic score is required. The linkage is formal, not rhetorical.

Silence as Ethical Space

Restraint ultimately defines the film’s sonic and editorial philosophy. Silence recurs at crucial moments, not as emptiness but as charged suspension. After Nehma refuses to classify a caterpillar as a pest and faces reprimand, the scene is followed by stillness rather than dramatic music.

This silence is deliberate. It withholds emotional instruction and creates a gap. In that gap, the audience must confront the implications of what has occurred. Who determines meaning? Who speaks next? By refusing to fill the space, the film shifts responsibility to the viewer.

Through sound, score, editing, and silence, the film transforms formal choices into political ones. Its argument is not delivered through overt declaration but through rhythm, texture, and restraint, allowing aesthetics themselves to carry critique.


4. Ethical & Political Questions



What ethical dilemmas are depicted when training AI with culturally specific data?


1. Who Decides the Category? — The Epistemic Conflict

A pivotal moment occurs when Nehma refuses to classify a creature as a “pest,” drawing on her community’s ecological understanding that the organism does not harm crops. Her refusal becomes grounds for reprimand. Through this incident, the film exposes the rigidity of algorithmic systems that depend on fixed, universal categories while dismissing localised, experience-based knowledge.

The deeper ethical issue concerns authority: who determines what counts as valid knowledge? The AI’s taxonomies are structured for an industrial, export-driven agricultural economy, largely shaped by clients located in the Global North. In contrast, Nehma’s insights emerge from generational observation rooted in a specific landscape. Ironically, her contextual knowledge, arguably more accurate within her environment, is treated as error because it does not conform to pre-set classifications. The system does not evaluate truth; it enforces compliance. Nuance, complexity, and relational understanding are sacrificed to fit machine-readable boxes.

2. Extraction Without Return — The Logic of Data Colonialism

The film also points to a second dilemma: the extraction of value from marginalised communities without meaningful compensation. Datasets that include images of Indigenous people, their languages, and their ecological knowledge are built through the labour and cultural contributions of communities like Nehma’s. Yet the benefits generated by these enriched datasets flow elsewhere.

As AI systems become more “representative” through the inclusion of Adivasi faces and knowledge, the question arises: who profits from this representation? Global technology companies depend on the data labour of rural Indian workers, yet those workers neither own the systems they help build nor share in their financial rewards. Nehma’s cultural knowledge is absorbed into a commercial infrastructure over which she has no control.

This dynamic exemplifies data colonialism: the appropriation of epistemic resources (images, language, lived knowledge) from communities positioned at the margins of global power structures. Extraction occurs not through land or minerals, but through data. Consent, credit, and compensation remain absent.

3. Inherited Bias — The Cycle of Reproduction

The film further explores how AI systems do not produce neutral outcomes; they inherit the assumptions embedded within their training. As Nehma begins to recognise that the machine learns through her labour, she confronts an unsettling realisation: the technology she helps build may replicate the very forms of exclusion she experiences.

Machine learning systems absorb the biases embedded in their classification frameworks. These frameworks are designed elsewhere, yet enacted through the labour of women like Nehma. In this sense, she occupies a paradoxical position. She is subject to systemic discrimination, yet through constrained annotation practices, she may also participate unwillingly in reproducing new forms of bias.

The ethical tension lies here: can someone simultaneously be marginalised by a system and implicated in sustaining it? The film suggests that structural inequality makes such contradictions inevitable.

4. Representation as Vulnerability — The Image Dilemma

When Nehma attempts to correct the AI generator’s distorted representations by uploading her own image and those of her community, the film introduces another ethical complication. AI companies increasingly seek culturally diverse datasets to improve accuracy and expand markets. Yet the communities supplying these datasets rarely receive recognition or agency in return.

To counter misrepresentation, Nehma must submit herself to the same technological apparatus that previously erased her identity. Visibility comes at a cost. In order to be accurately “seen,” she must surrender her likeness to a system structured around commercial value rather than human dignity.

This creates the film’s most acute ethical bind: inclusion becomes another mode of extraction. Representation does not automatically equal empowerment; it may simply render communities more legible for further appropriation.

5. Diffused Responsibility — The Accountability Gap

The film also dissects the hierarchy through which algorithmic decisions are produced. Dataset guidelines, client briefs, and managerial oversight embed dominant assumptions long before Nehma begins labelling images. What appears as technological neutrality is in fact structured by layered decisions about whose faces matter, which categories dominate, and what stories are excluded.

The chain of authority stretches across geographies: international tech clients define objectives; local managers enforce compliance; data workers execute instructions. Responsibility is distributed across this chain in such a way that accountability dissipates. No single actor appears solely responsible for the system’s consequences.

The outsourcing architecture itself functions as insulation. By fragmenting labour and decision-making, it prevents ethical responsibility from settling in one place. The film raises the pressing question of collective accountability: who answers for the algorithms being built in humanity’s name?

Core Ethical Proposition

Across these interconnected dilemmas runs a unifying argument: the ethics of artificial intelligence cannot be separated from existing structures of caste, gender, class, and geography. Training data is never neutral. It carries the imprint of those who conceptualise, fund, and supervise its creation.

When the burden of producing that data falls on communities already positioned at the margins of global power, the imbalance intensifies. The film insists that AI systems do not merely reflect technical design choices; they encode social hierarchies. Every dataset carries an ethical residue. And when that residue originates from unequal worlds, the moral debt accumulates at every level of the algorithmic chain.


How does the film’s human-in-the-loop metaphor operate beyond the technical term—politically, socially, and culturally?


The Technical Definition

In machine learning terminology, human-in-the-loop (HITL) describes a system in which human oversight improves algorithmic performance. A person reviews outputs, corrects errors, and refines the model through repeated feedback cycles. Within this framework, the human figure appears empowered, positioned as evaluator, guide, and guarantor of accuracy. The term suggests balance, agency, and collaborative benefit.
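Before the film complicates it, the technical idea can be sketched roughly as follows. This is an illustrative outline in Python, not the film’s or any vendor’s actual pipeline; the model object and the ask_human and retrain functions are assumed placeholders. Predictions the model is unsure about are routed to a human reviewer, and the corrected labels feed the next round of training.

```python
# Minimal, hypothetical sketch of one human-in-the-loop (HITL) iteration.
# Assumes: model.predict(item) returns (label, confidence); ask_human() lets a
# person confirm or correct a label; retrain() updates the model on new examples.

def hitl_round(model, unlabeled_items, ask_human, retrain, threshold=0.8):
    """Route low-confidence predictions to a human, then retrain on the results."""
    reviewed = []
    for item in unlabeled_items:
        label, confidence = model.predict(item)       # the machine's guess
        if confidence < threshold:
            label = ask_human(item, suggested=label)  # human confirms or corrects
        reviewed.append((item, label))
    return retrain(model, reviewed)                   # feedback closes the loop
```

In this textbook framing the human sits at the centre of the cycle, correcting the machine; what the film asks is who authors the label set, who owns the retrained model, and what flows back to the person doing the correcting.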

Humans in the Loop begins from this technical premise but gradually unsettles the assumptions it carries. The film asks: what if the “loop” is neither balanced nor beneficial to the human participant?

The Political Loop: A Circuit of Extraction

The title gestures toward a closed feedback system between humans and machines, each shaping the other in an ongoing cycle. Yet the film reveals that this loop is asymmetrical. Economic and epistemic value moves in one direction, while constraint and discipline move in the other.

Cognitive labour and cultural knowledge flow upward from Jharkhand to global technology hubs where they are processed into profitable AI products. What returns is not empowerment but standardisation: pre-determined categories, productivity metrics, and hierarchical oversight. This structure mirrors older forms of colonial extraction. Instead of raw materials like minerals or crops, the resource being mined is human interpretation and cultural insight.

The loop is also self-perpetuating. Biased systems generate biased outputs. Those outputs further marginalise the very communities whose labour trained the system. Nehma corrects the algorithm, but the economic architecture simultaneously reshapes her, dictating which knowledge counts, which distinctions are permissible, and which realities are recognised. She remains inside the loop, but the loop does not circulate in her favour.

The Social Loop: Caste, Gender, and Class as Recursion

The film situates Nehma’s work within broader social hierarchies she did not choose. Her presence in the data lab is layered atop existing structures of gender, caste, and class.

As a woman, she finds her labour, whether domestic or digital, treated as endlessly available and undervalued. The system depends on her attentiveness yet offers limited recognition. The metaphor extends subtly: a world driven by domination and efficiency sidelines relational, nurturing modes of knowledge.

Caste introduces another layer of irony. A job centred on rigid classification is outsourced to communities whose lived realities exceed simple binaries. Workers are trained to categorise the world in machine-friendly terms, even as their own identities defy reductive labels. To teach machines how to simulate humanity, they must temporarily suspend their own pluralities.

Class dynamics ripple forward through the next generation. Dhaanu’s attraction toward a more urban, upper-caste identity signals assimilation as a recursive social pattern. Hierarchies reproduce themselves not through a single act of exclusion but through repetition across generations.

In this framing, the “loop” signifies more than a technical cycle. It becomes a social recursion: an exclusionary system that regenerates itself through institutions, families, and now algorithms.

The Cultural Loop: Translation as Erasure

The film’s most original expansion of the metaphor lies in its epistemological dimension. During training sessions, AI is described as childlike: it must be taught how to see. This metaphor transforms into a site of contestation. If the machine is a child, who determines its education? What worldview is encoded in its lessons?

Nehma enters the loop with a deeply relational ecological knowledge system. Yet the transmission of that knowledge occurs through categories she did not author. A caterpillar must be classified as either harmful or harmless; a plant as weed or crop. Her understanding is filtered through binary logic before entering the dataset.

The loop therefore does not fully absorb her knowledge. It converts it into simplified data, strips away its contextual richness, and reintroduces it into the world as “objective” information. The machine learns from her labour but fails to recognise her existence. She trains the system to see, yet remains unseen within it.

The cultural loop reaches its sharpest point here: she is indispensable to the process, yet invisible as a subject within it.

Mutual Training: An Unequal Exchange

The film’s most incisive insight is that training operates in both directions. While Nehma teaches the algorithm, she is simultaneously conditioned by its demands. To feed the system, she must think within its constraints, suppressing intuitive, relational reasoning in favour of industrial binaries.

The violence of this loop is not only extractive but substitutive. The AI economy does not simply take her knowledge; it replaces it with a compressed version and asks her to internalise that reduction. The loop reshapes cognition itself. In teaching the machine to process the world in rigid categories, she risks being reshaped by those categories in return.

A Meta-Loop: The Film’s Own Dilemma

Some critics argue that the film may replicate the very structure it critiques. While addressing marginalisation, it risks positioning Adivasi experience as material for global festival audiences. In highlighting data bias, it may inadvertently frame Indigenous life through an external lens. In condemning invisibility, it may not fully redistribute visibility or agency to the community depicted.

Rather than undermining the film’s central metaphor, this critique intensifies it. The “human-in-the-loop” problem extends beyond data centres into cultural production itself. Representation, like data annotation, can become another circuit of extraction if not accompanied by structural reciprocity.

Thus, the loop expands: from algorithmic systems to social hierarchies to cinematic representation. The film suggests that the challenge is not merely to insert humans into technological systems, but to question how those systems are structured and who ultimately benefits from the loop’s circulation.


POST-VIEWING REFLECTIVE ESSAY TASKS


TASK 1 — AI, BIAS, & EPISTEMIC REPRESENTATION

Critical Reflection: Humans in the Loop (2025)

Technology, Knowledge, and the Politics of Classification

Introduction

Contemporary discourse frequently frames artificial intelligence as autonomous, self-optimising, and fundamentally impartial. Humans in the Loop (2025), directed by Aranya Sahay, dismantles this assumption from its very first moments. Set among Adivasi communities in Jharkhand, the film follows Nehma, an Oraon woman employed as a data labeller in an AI training facility. Through her experience, the film reveals a core paradox: systems marketed as objective are constructed from human labour that is categorised, extracted, and subordinated, labour drawn largely from those positioned furthest from technological power.

This essay contends that Humans in the Loop portrays algorithmic bias not as a glitch awaiting correction, but as an inevitable consequence of epistemic ordering. The issue is not faulty code but the politics of classification: who defines categories, whose worldview is embedded in datasets, and whose existence is rendered legible to the machine. Drawing on Louis Althusser, Jean-Louis Baudry, and Stuart Hall, the film can be understood as both a story about digital labour and a formal critique of knowledge hierarchies in the age of AI.

Algorithmic Bias as Structured Perspective

The film’s most revealing conflict unfolds around an unassuming creature: a caterpillar. Assigned to categorise it as a “pest” within an agricultural AI system, Nehma resists. In her Oraon ecological understanding, shaped by lived, generational knowledge, the caterpillar feeds on decomposing plant matter and contributes to regeneration. Within her epistemology, it is not destructive but necessary.

Her supervisor Alka, bound by productivity metrics and directives from an American tech client, overrides this assessment. Nehma is instructed not to “use her brain.”

The moment crystallises what feminist science theorist Donna Haraway calls situated knowledge: all knowledge arises from particular historical and material positions. The AI’s categories (pest/non-pest, weed/crop) reflect the priorities of industrial monoculture agriculture. They are designed for optimisation and yield, not ecological reciprocity.

The conflict is not about miscommunication. It is about incompatibility. To integrate Nehma’s ecological logic would require rethinking the economic assumptions that underpin the AI system. The suppression of her judgment is therefore not incidental but systemic. The machine functions precisely as intended: it stabilises the worldview of those who commissioned it.

Bias, the film suggests, is not an error within the system. It is the architecture of the system.

Apparatus, Visibility, and Ideological Framing

In his essay “Ideological Effects of the Basic Cinematographic Apparatus” (1974), Jean-Louis Baudry argued that cinema’s technical structure naturalises a centred, unified spectator. The apparatus produces ideology not through content alone but through the organisation of vision itself.

Humans in the Loop extends this logic to AI systems. The question becomes: what kind of subject does the algorithm presume, and what forms of life remain outside its perceptual field?

This question is dramatised when Nehma and members of her community prompt an AI image generator to depict a “tribal woman.” The output is a pale, Europeanised figure. The system reproduces the statistical dominance of Western faces within its training data and presents this output as normative. The failure is not merely representational; it is structural. The dataset has encoded absence as universality.

Here, Louis Althusser’s concept of ideological state apparatuses becomes instructive. Ideology does not simply misrepresent the world; it produces subjects who recognise themselves within its categories. The data centre in the film functions in precisely this way. Nehma is trained not only to classify images but to internalise the classificatory logic itself. She must learn to suppress relational thinking in favour of binary decisions.

The “human in the loop” is thus not the master of the machine but a subject gradually reshaped by it.

Representation and Epistemic Hierarchy

Stuart Hall’s theory of representation clarifies the deeper stakes. Meaning, Hall argues, is constructed through representational systems governed by power. The film juxtaposes two such systems: Adivasi ecological knowledge, which is contextual and interdependent, and algorithmic categorisation, which seeks universality and scale.

The hierarchy at work is not simply preferential but ontological. Nehma’s interpretation of the caterpillar is not acknowledged as alternative expertise; it is dismissed as error. Her understanding is excluded from the domain of valid knowledge altogether.

This dynamic aligns with philosopher Miranda Fricker’s concept of epistemic injustice: harm enacted upon individuals specifically in their capacity as knowers. Nehma’s labour is exploited, but more profoundly, her knowledge is delegitimised.

The film maps the flow of authority that enables this injustice. An American tech client defines classification standards from afar. An Indian manager enforces these standards locally. Nehma performs the cognitive labour at the base of the chain. The closer one is to the data, the further one is from defining what counts as accurate.

“Neutrality” dissolves when traced through this hierarchy. Categories are not discovered; they are imposed.

Form as Argument

The film’s critique is embedded not only in dialogue but in form. Its 1.55:1 near-square aspect ratio, echoing the proportions of a computer monitor, situates the audience within a frame that recalls the interface of classification itself. We observe Nehma through a screen-shaped space, mirroring her own act of viewing and labelling. The spectator occupies the same visual constraint as the machine.

The contrast between environments reinforces this argument. Forest sequences are composed through wide angles, organic asymmetry, and warm, diffused light. Human and non-human life share horizontal planes. The visual field is textured, layered, and non-hierarchical.

In the data centre, by contrast, tight framing compresses space. Fluorescent lighting flattens depth. The blue glow of monitors illuminates Nehma’s face, reversing the usual hierarchy of light — the machine casts illumination onto the human. Authority is literally directional.

The film’s most incisive formal gesture arrives in its parallel editing sequence. Nehma annotates infant muscle-movement data to train an AI walking model while her own child, Guntu, begins to walk at home. The cuts align the rhythms of these two acts. Close-ups of digital motion correspond to close-ups of human limbs.

The equivalence is deliberate. The same attentiveness required to nurture a child is redirected toward the machine. Cognitive care becomes extractable labour.

The argument unfolds without explicit commentary: artificial intelligence is not built from abstraction alone. It is assembled from human time, perception, and relational attention. What appears autonomous is sustained by lives that remain unacknowledged within its outputs.


Conclusion:


Humans in the Loop advances a clear and pressing claim: what we call “algorithmic bias” cannot be repaired by better code alone. It is not a software malfunction but the visible trace of much older hierarchies: hierarchies about whose knowledge is legitimate, whose categories structure reality, and whose judgments carry authority. Unless those prior arrangements of power are addressed, no technical refinement will fundamentally alter the system’s logic. Through Nehma’s journey, the film reveals that AI taxonomies are not universal descriptors of the world but culturally produced frameworks shaped by dominant economic priorities. The same communities whose labour and insight are mined to train these systems are excluded from determining what counts as accurate within them.

Read through apparatus theory, the AI system resembles cinema’s own technical machinery: a structure that presents its perspective as transparent and objective while quietly organising vision according to ideological assumptions. Stuart Hall’s account of representation clarifies that meaning never arises neutrally; it is constructed within regimes of power. The AI’s inability to generate recognisably Adivasi faces is therefore not a minor flaw but a predictable result of whose images saturate the dataset and whose visual norms define the “default.” Louis Althusser’s notion of ideological apparatus further sharpens this analysis. The data centre does more than extract labour; it shapes workers into subjects who internalise its classificatory logic, aligning their cognition with the needs of the system.

Ultimately, the film shifts the conversation away from optimisation and toward accountability. It poses a question that lies beyond computation: what kind of social order are we reproducing through the systems we design? The answer cannot emerge from the algorithm itself. It belongs to the political and economic structures that commissioned it and to the viewers, thinkers, and citizens willing to confront what the machine is structured not to perceive.


Thank you.

