Paper 205A: Limitations of Cultural Studies in the Age of AI and Algorithmic Culture
This blog post is part of an assignment for Paper 205A: Cultural Studies.
Limitations of Cultural Studies in the Age of AI and Algorithmic Culture
Academic Details:
- Name: Rajdeep A. Bavaliya
- Roll No.: 21
- Enrollment No.: 5108240006
- Sem.: 3
- Batch: 2024-26
- E-mail: rajdeepbavaliya2@gmail.com
Assignment Details:
- Paper Name: Cultural Studies
- Paper No.: 205A
- Paper Code: 22410
- Unit: 4 - 1) CS in Practice: Reading ‘To His Coy Mistress’ and ‘Writer and his Market’ 2) Limitations of Cultural Studies
- Topic: Limitations of Cultural Studies in the Age of AI and Algorithmic Culture
- Submitted To: Smt. Sujata Binoy Gardi, Department of English, Maharaja Krishnakumarsinhji Bhavnagar University
- Submitted Date: November 7, 2025
Word and character counts (generated using QuillBot):
- Images: 1
- Words: 4948
- Characters: 34987
- Characters without spaces: 30118
- Paragraphs: 108
- Sentences: 325
- Reading time: 19m 48s
Abstract:
Cultural Studies (CS) has long focused on interpreting texts, audiences, and power relations within media and everyday life. Traditional CS methods such as textual analysis, audience reception studies, and cultural materialism emphasized human agency in producing and interpreting culture. However, the rise of AI-driven content creation and algorithmic recommendation systems has fundamentally reshaped cultural production and consumption. Algorithms now play a key role in sorting, classifying, and hierarchizing cultural texts, often outside the purview of classical CS approaches. This paper critically examines the limits of conventional Cultural Studies methodologies in understanding digital, algorithmically mediated culture. It reviews literature on algorithmic culture, identifies how algorithms influence audience exposure and interpretation, and evaluates the gaps in CS theory when confronted with AI-generated content and personalized media streams. The analysis draws on audience reception theory (Hall), cultural materialism (Williams), and emerging scholarship on algorithms as cultural agents. Case studies (e.g. AI-generated narratives, platform recommender systems) illustrate where classical CS tools fall short. We argue that although CS concepts remain useful (e.g. “meaning is negotiated”), they must be extended with computational and ethical perspectives. In conclusion, the paper suggests that Cultural Studies must integrate critical data studies, algorithmic literacy, and new interdisciplinary methods to address algorithmic mediation.
Keywords:
Cultural Studies, Algorithmic Culture, AI, Audience Reception, Media Power, Digital Culture, Algorithmic Bias, Cultural Production.
Research Question:
How can Cultural Studies evolve its theoretical frameworks and research methods to account for the mediating influence of AI algorithms on cultural production, consumption, identity, and power?
Hypothesis:
Traditional Cultural Studies methodologies, rooted in textual and audience analysis, are insufficient to fully explain cultural phenomena in an age dominated by AI and algorithmic media, and thus must be fundamentally adapted.
Introduction
[Image: Representational illustration. Source: Gemini (Nano Banana Pro)]
Cultural Studies (CS) traditionally examines culture as a set of symbolic practices and meanings, focusing on how media texts are produced, circulated, and consumed within power relations. Seminal CS theorists (e.g. Hall, Williams, Fiske) highlighted that culture is “a whole way of life” – not merely “high art” but also everyday practices and popular media (Williams 1958). Classic CS methods include textual analysis (close readings of media texts), audience reception studies (how different viewers interpret messages), and cultural materialism (how culture relates to economic and social structures). For example, Stuart Hall’s encoding/decoding model emphasizes that audiences actively interpret (and sometimes oppose) the meanings encoded in media. Such approaches assume that a human agent (producer or audience) ultimately shapes meaning.
However, the media landscape has radically changed. Platforms like Netflix, TikTok, YouTube, and Spotify use AI and machine learning to curate and even generate content. As Born and Diaz note, “recommender systems play an important role [in streaming platforms], using algorithms optimized for engagement, retention, and revenue”. In effect, algorithms now mediate cultural exposure, selecting what users see and implicitly shaping their tastes. Moreover, AI can create cultural texts (e.g. DALL·E art, ChatGPT narratives) without direct human authors. This shift means culture is increasingly algorithmic: a hidden layer of computation influences what cultural items circulate and how audiences encounter them.
Such algorithmic mediation raises questions about Cultural Studies’ adequacy. If machine logic partly determines what art we see and how it is “textualized,” can traditional CS methods still unpack cultural meaning? Ted Striphas argues that human beings have been “delegating the work of culture – the sorting, classifying and hierarchizing of people, places, objects and ideas” – to computational processes. In other words, algorithms now perform tasks of cultural gatekeeping that used to be done by curators, critics, or consumers. As a result, “culture” in practice may involve algorithmic choices that have no analogue in earlier media environments.
This paper explores the limitations of classical Cultural Studies approaches in this AI-infused context. We first review how algorithms influence cultural production and audience reception, and identify where CS concepts (e.g. audience agency, textuality) may need rethinking. We then examine specific cases (AI-generated art/music, algorithmic recommendation feeds) to illustrate the gaps. Finally, we discuss the implications for identity, power, and the future of CS scholarship. The goal is not to dismiss Cultural Studies, but to argue that it must evolve: its object of study (culture) is changing, and so too must its methods of study.
Literature Review: Cultural Studies and Algorithmic Culture
Classical Cultural Studies Methodologies
Cultural Studies emerged in the 1960s–70s (e.g. at the Birmingham CCCS) to challenge elite concepts of culture and to analyze popular media as sites of meaning. Key methods included textual analysis (reading media texts for ideology and representation) and audience studies (investigating how different social groups interpret media). Stuart Hall’s encoding/decoding model famously posited that a text contains an encoded meaning, but “the codes of encoding and decoding may not be perfectly symmetrical”, allowing audiences alternative, negotiated readings. Raymond Williams’s cultural materialism argued that culture both shapes and is shaped by social structures and material conditions. In effect, Cultural Studies sought to understand how culture is produced, contested, and experienced in everyday life.
These methods assume that cultural objects are authored by humans and interpreted by humans. The audience is seen as an agent who negotiates meaning; the critic as someone who can decode hidden ideologies. However, early CS also acknowledged the importance of institutions and industries: for example, media institutions had power to privilege certain cultural forms. But the unit of analysis remained human-centered. Culture was “something people do,” not something done by machines.
Over time, scholars pointed to limitations within CS itself (e.g. lack of a cohesive political agenda, struggles with globalization and decolonization). Kumar (2025) describes Cultural Studies as an “unfinished project,” noting that the field has democratized scholarly inquiry but “has consistently struggled to formulate a cohesive political program”. Even by 2025, Kumar argues, CS faces “novel challenges of platform capitalism [and] algorithmic regulation” – suggesting that digital platforms and algorithms pose new problems for the field.
Algorithmic and AI-Driven Culture
A growing body of scholarship examines “algorithmic culture” – the idea that algorithms are now part of cultural production and meaning-making. Ted Striphas (2015) coined the term, observing that over recent decades “human beings have been delegating the work of culture … to data-intensive computational processes”. This shift, Striphas contends, “significantly alters how the category culture has long been practiced, experienced and understood,” leading to what he calls algorithmic culture. He emphasizes that offloading cultural work onto computers (e.g. social media algorithms, recommendation engines) changes the vocabulary of culture itself – terms like “information,” “crowd,” and “algorithm” acquire new meaning.
Other scholars similarly highlight how algorithms mediate and even create culture. Gillespie (2016) warns against treating “the algorithm” as a single deterministic force; instead he notes that culture arises from the interplay of production and reception. Citing Bourdieu, Gillespie reminds us that “Cultural objects are designed in anticipation of the value people may find in them and the means by which they may circulate”. In other words, human creators still anticipate audience needs even in an algorithmic system, but the context of circulation (via algorithms) has changed.
Recent empirical studies reveal how recommendations shape content exposure. Born and Diaz (2024) find that recommender systems can distort culture by preferentially amplifying some content while ignoring others. For instance, in music or streaming platforms, algorithms tuned for engagement often reinforce popular genres and marginalize niche or emerging styles. Although these studies document the distortion, Born and Diaz note that they do not prescribe how culture should be curated. They suggest bringing public service media (PSM) principles to bear: for example, designing algorithms to foster diversity and social value, rather than purely commercial metrics. This indicates a normative concern absent from classical CS: how might digital platforms serve the public good?
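Born and Diaz’s normative proposal can be made concrete with a minimal sketch. The Python snippet below contrasts a pure-engagement ranking with a simple diversity-aware re-ranking; the catalogue, scores, and the `lambda_div` weight are invented for illustration and do not reflect any platform’s actual logic.

```python
# A minimal, illustrative sketch: pure-engagement ranking vs. a simple
# diversity-aware re-ranking. All items, scores, and weights are invented.

# Hypothetical catalogue: (title, genre, predicted engagement score)
items = [
    ("Mainstream Pop Hit", "pop", 0.95),
    ("Chart Rock Anthem", "rock", 0.90),
    ("Another Pop Single", "pop", 0.88),
    ("Niche Folk Ballad", "folk", 0.40),
    ("Experimental Jazz Piece", "jazz", 0.35),
]

def rank_by_engagement(catalogue):
    """Rank purely by predicted engagement (the commercial default)."""
    return sorted(catalogue, key=lambda item: item[2], reverse=True)

def rerank_with_diversity(catalogue, lambda_div=0.5):
    """Greedy re-ranking: penalize genres already shown, so niche
    content surfaces earlier. lambda_div trades engagement vs. diversity."""
    remaining = list(catalogue)
    ranked, genres_shown = [], set()
    while remaining:
        def adjusted(item):
            penalty = lambda_div if item[1] in genres_shown else 0.0
            return item[2] - penalty
        best = max(remaining, key=adjusted)
        ranked.append(best)
        genres_shown.add(best[1])
        remaining.remove(best)
    return ranked

print("Engagement only:", [t for t, _, _ in rank_by_engagement(items)])
print("With diversity: ", [t for t, _, _ in rerank_with_diversity(items)])
```

The point is not that platforms run code like this, but that the choice between engagement and diversity can be expressed, and therefore contested, as an explicit design parameter.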
Another relevant stream is scholarship on AI-generated content. Voinea (2025) develops an “algorithmic auteur” framework to show that AI in film and media is not just a tool but a “non-human agent that actively co-creates” works. He highlights that AI is trained on vast cultural datasets that carry existing biases. Indeed, researchers note “AI systems, trained on historical data, inevitably inherit and often amplify the biases present in that data”. Thus, AI-generated texts replicate past cultural conventions (the “nostalgia paradox”), reinforcing rather than questioning them. This raises questions about creativity, representation, and authorship that classical textual analysis may not fully capture.
Finally, digital culture scholars highlight the emergent “algorithmic gaze” as a new power structure. Voinea cites the idea that algorithms monitor and predict user behavior, forming “a new form of power”. In media, this gaze not only personalizes content (filter bubbles) but also drives content production itself: creators optimize for algorithmic metrics (views, likes). In sum, the literature on algorithmic culture converges on the view that AI/algorithms are not neutral; they are socially and culturally embedded forces that shape which cultural artifacts gain prominence.
Identified Gaps
The reviewed literature suggests several limitations in classical CS approaches. First, methodological opacity: algorithms are complex and proprietary, often inscrutable to outsiders. CS tools like close reading or interviews do not easily reveal an algorithm’s logic. Jenna Burrell (2016) distinguishes types of algorithmic opacity (intrinsic complexity, corporate secrecy, etc.), but CS has few tools to “decode” an algorithm’s codes. Second, data-driven dynamics: CS’s focus on textual content may miss how data (clicks, likes, watch time) drives content visibility. The “mediatization of everything” (Livingstone 2013) suggests that everyday practices across domains are now quantifiable and algorithmically mediated. CS must grapple with the quantification of culture – something Raymond Williams or new historicism never anticipated.
In particular, audience studies face challenges. Traditional reception theory assumes that individuals choose content and interpret it. But algorithms often pre-select content: people may not see a particular film or news item unless an algorithm recommends it. This alters the audience’s role. As Livingstone (2018) notes, in today’s datafied age personal data is “ceaselessly generated” and yields power to those who control the data. In other words, audiences have become data profiles in corporate databases, complicating notions of audience agency.
Similarly, textual analysis and cultural critique must account for new forms of text. When AI composes music or writes poetry, who is the “text”? Traditional critics might analyze a poem’s language or a painting’s imagery; but if the artist is an algorithm, critics might instead analyze the dataset and training process. Works like Voinea’s suggest that we need conceptual tools (e.g. “algorithmic auteur”) to address AI-created texts that classical literary criticism lacks.
In sum, while CS theories remain useful for understanding meaning, ideology, and power, there is a gap: how to incorporate the algorithmic processes and AI tools that increasingly mediate those phenomena. The next sections will examine these gaps in detail via theoretical discussion and concrete examples.
Theoretical Framework
Audience Reception and Algorithmic Mediation
Cultural Studies’ audience reception tradition (Hall, Morley, Ang) posits that media consumers actively interpret texts within their social contexts. Hall famously described three decoding positions (dominant, negotiated, oppositional) based on how viewers accept or resist the intended meaning. However, algorithmic culture complicates this model in at least two ways. First, algorithms can influence which texts audiences even encounter. For example, a newsfeed algorithm may rarely show content that contradicts a user’s existing beliefs, limiting the possibility of oppositional decoding. Second, users now both interpret content and generate data that feeds into algorithmic systems (e.g. liking or sharing a post). As Pronzato (2024) emphasizes, algorithms are “culturally enacted by the encoding and decoding practices of their producers and end users”. In other words, our online behaviors (commenting, clicking) encode cultural signals that algorithms “decode” to personalize media. Thus, reception theory must expand to consider users not only as decoders of texts but also as data sources that co-construct the algorithmic system.
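This double role of the user, as decoder of texts and as data source, can be illustrated with a toy sketch. The signal weights and scoring formula below are hypothetical simplifications, not any platform’s real model.

```python
# Toy illustration of users-as-data-sources: interactions are "encoded"
# as weighted signals that a personalization system "decodes" into a
# relevance score. Weights and formula are hypothetical simplifications.

SIGNAL_WEIGHTS = {"click": 1.0, "like": 3.0, "share": 5.0, "watch_seconds": 0.01}

def relevance_score(interactions):
    """Aggregate a user's logged interactions with a topic into one score."""
    return sum(SIGNAL_WEIGHTS.get(signal, 0.0) * count
               for signal, count in interactions.items())

# A user's logged behavior toward two topics:
user_log = {
    "true_crime": {"click": 12, "like": 4, "watch_seconds": 5400},
    "poetry":     {"click": 2,  "watch_seconds": 300},
}

for topic, interactions in user_log.items():
    print(topic, round(relevance_score(interactions), 2))
# The higher-scoring topic will dominate future recommendations,
# regardless of what the user would say they value.
```

In this toy model the viewer is simultaneously an audience member and a data source: the score, not the person, decides what is offered next.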
This insight aligns with Livingstone’s suggestion to revisit the circuit of culture model. Livingstone (2018) argues that theorizing platform and algorithmic power requires “opening up the hermeneutic and action space between production and consumption”. The circuit of culture (production–consumption–regulation etc.) can highlight how audiences’ data become part of cultural production. For CS, this means supplementing audience ethnographies with analysis of how audience data are collected, categorized, and fed into algorithmic processes.
Cultural Materialism and Platform Capitalism
Cultural materialism (Williams et al.) considers how culture is shaped by economic relations and power. In the digital era, platform capitalism – where a few tech firms control major cultural platforms – is a key context. Scholars note that Google, Amazon, Facebook, etc. now “dwarf” traditional media companies, and they possess immense “databases, processing power, and data-mining expertise”. This concentration of power suggests that culture is being structured by corporate algorithms. For example, Striphas (2015) recounts how Amazon’s algorithmic catalog decisions (initially a “ham-fisted cataloging error” in tagging LGBT books) had massive cultural implications. A single database attribute change removed thousands of titles from global listings, highlighting how computational decisions affect cultural heritage.
Classical CS critiques of political economy (e.g. media conglomeration) can extend to algorithmic platforms: the question becomes not just who owns media, but who owns the code. For instance, who decides the rules by which recommendations are made? Tarleton Gillespie argues that we should not reify “the algorithm” as an autonomous actor; instead, we must examine how algorithms are produced and used. He observes that culture remains a dialogue between producers and audiences. However, the shape of that dialogue is now filtered through profit-driven systems. Born and Diaz (2024) suggest that PSM principles (diversity, common good) could re-orient algorithms toward social ends. This raises a critical-cultural question: do algorithmic incentives reinforce dominant culture (by favoring mainstream content) at the expense of subaltern voices? Cultural materialism would press us to analyze algorithmic production processes, policies, and labor (e.g. content moderators) as part of the cultural material base.
New Historicism and Cultural Context
New historicism emphasizes that texts must be understood within their broader historical and cultural context. In the algorithmic age, context includes large-scale data environments and evolving norms around AI. For example, a digital meme or TikTok trend is not just a text but part of a rapidly shifting online culture shaped by technological affordances. Cultural studies scholars interested in “everyday life” must now consider how people interact with content algorithms as a normal part of daily experience. The algorithm itself has become a historical actor – a technocultural force that needs documentation. Striphas’s (2015) keyword analysis (information, crowd, algorithm) is an example of looking at historical shifts in the meaning of “culture” in digital times.
However, new historicism in CS has rarely engaged with code or AI as historical artifacts. There is a call for “digital adaptation” of historicist methods: for instance, situating AI texts within the history of media technology, or using computational tools (like topic modeling) to analyze large corpora of digital texts. In other words, traditional CS interest in context must widen to include data ecosystems and algorithmic histories. This raises methodological questions about how to blend humanistic close reading with data-driven analysis – a gap that this assignment seeks to highlight.
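As an illustration of such blended methods, the sketch below applies a standard topic model (latent Dirichlet allocation, via scikit-learn) to a tiny invented corpus. A real study would use thousands of documents, but the workflow is the same.

```python
# A minimal sketch of topic modeling (LDA) over a small corpus of
# digital texts, the kind of computational reading suggested above.
# The corpus is invented; a real study would use thousands of documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the algorithm recommends videos based on watch time and clicks",
    "platform data shows engagement drives what users see in the feed",
    "the poem uses imagery of the sea to express longing and loss",
    "the novel's narrator reflects on memory, family, and belonging",
]

# Convert texts to word counts, then fit a two-topic LDA model.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the top words per topic: a data-driven complement to close reading.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

Such output does not replace interpretation; it gives the critic a map of a corpus too large to close-read, after which classical CS questions about ideology and representation can be asked of the clusters it surfaces.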
Case Studies and Analysis
Algorithmic Platforms and Cultural Consumption
Recommender Systems: Streaming and social media platforms use algorithms to curate content. As Born and Diaz (2024) note, these algorithms are optimized for engagement and revenue, which tends to amplify some genres or groups while overlooking others. For example, a music recommendation algorithm may continually suggest already-popular artists, marginalizing niche genres and reproducing existing inequalities. Similarly, a video platform’s “recommended” feed can keep viewers in a narrow set of interests, even if they attempt to seek diversity. This phenomenon is often discussed as “filter bubbles” or “echo chambers,” although large-scale studies show mixed evidence of their societal impact. Nonetheless, the cultural effect is that not all cultural texts reach all audiences equally; algorithms act as gatekeepers.
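The gatekeeping dynamic can be simulated in a few lines. The following toy model, with invented play counts, recommends artists in proportion to their past plays and shows how early popularity compounds; it is a caricature of engagement optimization, not a real recommender.

```python
# A toy simulation of the "rich get richer" feedback loop in an
# engagement-optimized recommender. Parameters are illustrative only.
import random

random.seed(42)
plays = {"popular_artist": 100, "niche_artist": 10}

def recommend(catalogue):
    """Recommend proportionally to past plays (engagement optimization)."""
    artists = list(catalogue)
    weights = [catalogue[a] for a in artists]
    return random.choices(artists, weights=weights, k=1)[0]

for _ in range(1000):             # 1,000 simulated listening sessions
    plays[recommend(plays)] += 1  # each recommendation becomes a new play

print(plays)
# The absolute gap between the two artists widens over time: early
# popularity compounds, even with no difference in quality.
```
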
Classical CS tools have limited ability to analyze these processes. Textual analysis of a film or song does not reveal why it was (or was not) suggested to viewers. Audience ethnographies may interview users about what they watched, but without access to algorithm logs, researchers cannot trace why certain content surfaced. Critical CS must therefore engage with data analytics. Some scholars have begun using platform data or APIs to study recommendation biases, but this requires skills beyond traditional humanities.
Social Media Trends: Platforms like TikTok and Instagram use complex algorithms to surface trending content. A meme or dance challenge might explode globally because the algorithm “propagated” it. In effect, virality can be algorithmically engineered. This raises the question: can cultural studies explain such phenomena with existing concepts? One could see trending memes as new forms of popular culture, but their genesis (algorithmic boosting) is novel. The “publicness” of culture seems to shrink, as Striphas warns that algorithmic culture involves “the gradual abandonment of culture’s publicness” – meaning culture happens privately in our feeds rather than in open collective spaces.
To illustrate, consider YouTube’s recommendation algorithm. Scholars have found that YouTube tends to promote sensational or polarized content to maximize viewing time. CS might analyze the ideological content of such videos (e.g. conspiracy videos), but must also account for the fact that the algorithm itself nudged audiences toward them. For instance, after the Christchurch tragedy (2019), YouTube’s algorithm inadvertently promoted extremist content, illustrating how algorithmic “amplification” can have dangerous social consequences. Traditional CS approaches would treat this as a content issue; algorithmic analysis would point to systemic biases in the recommendation logic. Both lenses are needed, but CS scholars must stretch into computational domains to fully understand the phenomenon.
AI-Generated Texts and Cultural Production
AI in Literature and Art: Recent tools like GPT-4 and DALL·E generate creative texts and images. These creations challenge CS because the “text” has no identifiable human author or encoded ideological intent. For example, an AI-written short story might incorporate tropes from its training data without conscious purpose. Critics have begun to analyze AI-generated art in terms of aesthetics and meaning, but CS must also ask: how do audiences interpret an AI text? The answer may depend on disclosing AI authorship. If readers know a poem was written by an algorithm, they might decode it differently than if they assumed a human wrote it. This raises ethical and interpretive questions: should we treat AI writing as just another cultural text, or as a symptom of machine labor and data culture?
Voinea (2025) argues that AI in media functions like an “Algorithmic Auteur” – a networked creative agent shaped by technical architectures, datasets, and economic logics. Under this view, CS analysis might resemble auteur theory (studying the style of a filmmaker), but the auteur is now a machine. For instance, an AI trained on 19th-century literature might produce a novel in a Victorian style. A traditional literary analysis might praise the narrative as a homage to Dickens; CS, however, would question the production process: the novel emerged from archived texts and code, not from human memory or social experience. This raises issues of authenticity and creativity. Some emerging scholarship frames AI as revealing the biases of its cultural dataset – for example, an AI art generator might over-represent certain demographics because its training images were unbalanced.
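That dataset-bias claim is, in principle, empirically checkable. The sketch below audits a hypothetical training corpus for demographic skew; the group labels and counts are invented for illustration.

```python
# A minimal sketch of auditing a training set for demographic skew.
# Labels and counts are entirely hypothetical.
from collections import Counter

# Hypothetical metadata labels for images in a training corpus:
training_labels = ["group_a"] * 8000 + ["group_b"] * 1500 + ["group_c"] * 500

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.0%} of corpus)")
# A generator trained on this corpus will, other things being equal,
# reproduce the 80/15/5 skew in its outputs: the bias lives in the
# data, not in a single faulty line of code.
```
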
AI in Media Industries: Beyond art, AI is reshaping how media is made. News outlets and studios now use algorithms for editing, color grading, and even writing draft articles. This blurs lines between culture and automation. CS traditionally examines media institutions (e.g. Hollywood studios) for ideology and labor practices. With AI, a new set of questions arises: How does the “deskilling” of certain creative tasks affect media workers? How do corporate decisions about AI adoption (cost-cutting vs. creativity) influence cultural output? Voinea notes that studios are using AI analytics to predict box office success (Lash & Zhao, 2016) – a shift from art to data-driven speculation. Cultural materialism would urge analysis of these economic drivers.
The key point is that AI is now a cultural producer itself. Unlike classical texts that emerged from identifiable social contexts, AI texts result from data-driven processes. Cultural critics must adapt their categories: perhaps asking not just “what does this text mean?” but “what does its production process mean for culture?” This often requires technical understanding of AI, which is beyond the remit of traditional CS scholarship. For example, to critique an AI-generated news story, one might need knowledge of machine learning biases – something cultural studies programs may not teach.
Discussion: Limitations and Needed Evolutions
Gaps in Classical CS Approaches
The above examples highlight concrete gaps in Cultural Studies methods. First, methodological gap: CS has few tools for analyzing algorithms directly. As Gillespie warns, focusing only on content misses how culture is mediated: “we will certainly come up short if we tell simple cautionary tales about the mechanisms of production and distribution”. In practice, most CS work on digital culture treats platforms as “black boxes.” Scholars may discuss YouTube or Facebook abstractly but often cannot inspect the code. This means CS critiques may be speculative (e.g. saying “algorithms are biased”) without empirical grounding. Bridging this requires methodological innovation (e.g. digital ethnography of platforms, reverse-engineering algorithms, big data analytics) – skills not traditionally in CS training.
Second, conceptual gap: CS concepts like “audience,” “text,” and “representation” need redefinition. Who or what is the audience when algorithms personalize content? Are we analyzing an audience of users or an audience of data points? Similarly, the notion of a fixed text breaks down with algorithmic personalization: each user effectively sees a unique version of “the platform,” even though the underlying media text is the same. CS must grapple with these new definitions. Pronzato’s integration of Hall’s model suggests one path: treat algorithms themselves as cultural texts or agents, produced and consumed by social actors. But this is still nascent in CS.
Third, epistemological gap: AI introduces new questions about knowledge and truth in culture. Filter bubbles and recommendation bias can distort public discourse. CS must engage with debates in AI ethics and epistemology (e.g. what is “truth” in an algorithm-sorted world?). For example, algorithmic curation can create echo chambers, which critics worry may harm democracy. CS has a tradition of critiquing propaganda and ideology, but now it must consider algorithmic propaganda. This is partly addressed in media ethics and AI policy fields, but CS scholars need to incorporate these concerns (e.g. through critical data studies, as suggested by Livingstone and others).
Implications for Identity, Power, and Everyday Life
The research questions ask about identity and power. Algorithmic culture affects identity in two ways. On the one hand, algorithms can constrain identity expression by favoring “trending” formats or safe topics; those who deviate (e.g. minority artists) may be less visible. On the other hand, personalized feeds allow niche identities to flourish in private sub-communities. Cultural Studies is well placed to analyze identity politics, but it must account for algorithmic shaping of identity. For instance, an AI-driven social platform might reinforce gender or racial stereotypes by recommending biased content. Voinea’s analysis of AI bias suggests cultural studies of identity must examine how training data and algorithm design reproduce social inequalities.
Power is clearly central: algorithms concentrate power in the hands of those who design and control them. Livingstone (2018) warns that ordinary people’s data (and thus selves) “are increasingly tracked, sorted, and monetized”. This commodification of identity differs from older cultural commodifications; it is immediate and granular. CS traditionally critiques capitalism and power, but algorithmic culture calls for an updated critique of surveillance capitalism (Zuboff 2019). For example, content creators (like YouTubers) find their success mediated by platform algorithms that they do not control. This creates a new form of labor exploitation where visibility – and thus livelihood – depends on opaque algorithms. CS must include these algorithmic labor relations in its analysis of the culture industry.
Everyday life is more mediated: friends, news, art, and even intimate relationships are increasingly filtered through apps and algorithms. Cultural Studies’ focus on everyday practices now extends into the digital quotidian. However, access to cultural content is no longer uniform. Born and Diaz (2024) use PSM principles to propose that cultural experiences be oriented toward social development. This suggests CS researchers should ask normative questions about how technology should shape culture – an extension of activism and policy engagement beyond pure criticism.
Towards Evolving CS Methods
Given these limitations, how can Cultural Studies evolve? Drawing on the literature and analysis, we suggest several directions:
- Interdisciplinary collaboration: CS researchers should work with data scientists, computer scientists, and network analysts. Understanding algorithms may require tools like computational text analysis, machine learning interpretation, or large-scale user data. For example, analyzing TikTok trends might combine qualitative interviews with quantitative analysis of trending hashtags (a minimal sketch of that quantitative step follows this list). CS curricula might incorporate data literacy and coding basics to empower such work.
- Digital ethnography and platform studies: Methods should expand to include platform ethnographies (studying the culture inside tech companies) and user data studies (working with anonymized user logs under ethical protocols). This can reveal how algorithmic governance works in practice. For instance, studying how Netflix engineers tweak recommendation algorithms would inform cultural critique of streaming media.
- Critical algorithms analysis: Borrowing from Science and Technology Studies (STS), CS could treat algorithms as socio-technical artifacts. Tarleton Gillespie and other STS scholars suggest examining the institutional contexts of algorithm design. Cultural Studies can adapt by analyzing the political-economic context of algorithms: who funds them, who regulates them, and how public values are (or are not) embedded.
- Ethics and norms integration: As Born and Diaz demonstrate, aligning algorithms with values (e.g. diversity) requires clear normative frameworks. CS can play a role in articulating these norms through its theory of power and ideology. For example, critiquing an AI-generated image for bias or pornography involves ethical arguments grounded in representation theory.
- Hybrid theoretical frameworks: Theoretical integration is needed. Our analysis shows that combining CS insights with digital theory yields new concepts like “algorithmic culture” and “algorithmic auteur”. Theoretical frameworks might explicitly incorporate algorithmic logics – for example, treating platform metrics as another form of “text” to analyze.
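As promised in the first item above, here is a minimal sketch of the quantitative half of a mixed-methods trend study: counting hashtag frequencies across a sample of invented post captions. Real work would draw on exported platform data under ethical protocols.

```python
# A minimal sketch of the quantitative half of a mixed-methods trend
# study: counting hashtag frequencies in a sample of post captions.
# The posts are invented; real work would use exported or API data.
import re
from collections import Counter

posts = [
    "new dance drop #fyp #dancechallenge",
    "tried the trend #dancechallenge #viral",
    "my cat ignoring the trend #cats",
    "who started this?? #dancechallenge #fyp",
]

hashtags = Counter(
    tag.lower()
    for post in posts
    for tag in re.findall(r"#\w+", post)
)

for tag, count in hashtags.most_common(3):
    print(tag, count)
# Frequencies like these would then be read alongside interviews:
# what a trend means to users is a qualitative question; how far the
# algorithm carried it is a quantitative one.
```
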
In essence, Cultural Studies must adapt and augment rather than discard its legacy. The classics (Hall’s negotiated meanings, Williams’s emphasis on lived culture) still apply: audiences still interpret texts, and culture still connects to social history. But how culture is delivered and who controls it has changed. Scholars must be reflexive, acknowledging that culture now has two audiences – the human user and the algorithmic curator (Striphas 2015) – each with different “decoding” processes.
Conclusion
In the age of AI and algorithmic culture, classical Cultural Studies approaches face serious challenges. While foundational CS concepts (e.g. textuality, audience agency, ideological critique) remain valuable, many assumptions no longer hold. Algorithms now shape what cultural texts are seen and valued, often invisibly. AI can create cultural content without a human author. Platform capitalism has concentrated cultural power in code and data. These changes mean that analysis of culture must include computational processes and data relations.
This paper has argued that to remain relevant, Cultural Studies must embrace new tools and interdisciplinary perspectives. At minimum, CS scholars should adopt a stance of algorithmic awareness: always interrogating how algorithms might mediate cultural phenomena. For example, an analysis of online fandom must consider not only fan texts but also how social media algorithms might amplify or silence those texts.
Practically, this means expanding research methods to include big data analysis, coding skills, and collaborations across fields. Theoretical frameworks must extend to cover concepts like algorithmic bias, datafication, and computational agency. Educators might incorporate courses on digital culture and AI ethics into CS programs.
Ultimately, the task is to evolve Cultural Studies without abandoning its critical spirit. As Kumar (2025) suggests, CS has long been an “unfinished project”. Its future lies in finishing that project – now including algorithmic regulation, surveillance, and environmental digitality as part of culture’s fabric. By doing so, Cultural Studies can continue to interrogate power and meaning in the modern world, bridging humanities and technology to understand who we are becoming in the AI age.
References:
Born, Georgina, and Fernando Diaz. “A Public Service Media Perspective on the Algorithmic Amplification of Cultural Content.” Knight First Amendment Institute, 24 July 2024, knightcolumbia.org/content/a-public-service-media-perspective-on-the-algorithmic-amplification-of-cultural-content.
Gillespie, Tarleton. “#trendingistrending: When Algorithms Become Culture.” Algorithmic Cultures: Essays on Meaning, Performance and New Technologies, edited by Robert Seyfert and Jonathan Roberge, Routledge, 2016, pp. 1–17.
Kumar, Pramod K. V. “The Unfinished Project: A Critique of the Legacy of Cultural Studies from Birmingham to the Algorithmic Present.” International Journal of English Literature and Social Sciences (IJELS), vol. 10, no. 5, Sept.-Oct. 2025, pp. 479–483.
Livingstone, Sonia. “Audiences in an Age of Datafication: Critical Questions for Media Research.” Television & New Media, vol. 19, no. 2, 2018, pp. 140–57. (Accepted version, LSE Research Online.)
Pronzato, Riccardo. “Enacting Algorithms Through Encoding and Decoding Practices.” Italian Sociological Review, vol. 14, no. 10S, 2024, pp. 531–52.
Striphas, Ted. “Algorithmic Culture.” European Journal of Cultural Studies, vol. 18, no. 4-5, 2015, pp. 395–412.
Voinea, Dan Valeriu. “The Algorithmic Auteur: AI, Cultural Production, and the Reconfiguration of Audiovisual Media.” Social Sciences and Education Research Review, vol. 12, no. 1, 2025, pp. 256–68.
