Context and problem statements generated with ChatGPT, as I was ill and wanted to make sure I had something to show and talk about in my supervisor meeting.
Digital and physical world swap
Swapping the functionalities of the digital and physical worlds. Would we tolerate what we experience digitally if we encountered it in the real world? Would the digital space be less overstimulating if it functioned like the physical world?
Context
The digital world operates under different rules from the physical one: endless notifications, constant tracking, algorithmic content curation, and intrusive advertising shape our experiences. Yet we rarely question whether we would tolerate these intrusions if they occurred in real life. The rise of augmented reality, the metaverse, and AI-driven interactions makes this contrast more relevant than ever. While some research explores digital well-being and online ethics, few projects directly compare digital experiences with their physical-world equivalents. By swapping the functionalities of these spaces, this project challenges our assumptions and asks whether digital environments could be designed with more human-centred considerations.
Problem Statement
Would we accept the surveillance, interruptions, and manipulation we experience online if they occurred in real life? Conversely, would the digital space be less overwhelming if it followed the more organic flow of the physical world? This project examines the impact of overstimulation, loss of privacy, and behavioural conditioning in digital spaces, questioning whether current design choices serve users’ well-being. Through this exploration, the project seeks to reimagine digital environments that prioritise human agency, balance, and ethical interaction.
Visualising your unpaid digital experiential labour (personal data)
How much of our data do we give away for free? Where do we give our data, and how much of it? What value does our data hold? Who benefits from it? Visualising this invisible transaction.
Context
Every day, users generate vast amounts of personal data through social media, browsing, smart devices, and digital interactions. This data fuels a multibillion-dollar economy, yet most people remain unaware of the scale and impact of their contributions. While privacy advocates and researchers have highlighted concerns about data exploitation, there is still a lack of accessible ways for individuals to grasp the full extent of their unpaid digital labour. Existing solutions, such as privacy policies and data transparency reports, are often opaque and difficult to interpret. This project seeks to make the invisible transaction of personal data tangible by visualising the scale, value, and beneficiaries of the information we freely give away.
Problem Statement
Users unknowingly contribute immense value to tech companies through their personal data, but they rarely see direct benefits or even understand the extent of their digital labour. This imbalance raises ethical concerns about consent, ownership, and exploitation. Who profits from our data, and what does it truly cost us? This project aims to expose and illustrate the hidden economy of personal data, making it easier for individuals to understand their role in this system and advocate for fairer, more transparent digital practices.
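As a first gesture towards this, the sketch below mocks up one possible form the visualisation could take: a bar chart of how much personal data one user might hand over in a day, broken down by activity. The categories and figures are placeholder assumptions for illustration, not measurements.

```python
# A rough mock-up of the kind of visualisation this project proposes:
# hypothetical per-activity estimates of personal data given away in a day.
# Every category and figure here is a placeholder assumption, not a measurement.
import matplotlib.pyplot as plt

activities = ["Social media", "Web browsing", "Smart devices", "Location tracking"]
megabytes = [120, 80, 200, 45]  # hypothetical MB per day, illustrative only

fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(activities, megabytes)
ax.set_xlabel("Personal data given away per day (MB, hypothetical)")
ax.set_title("The invisible transaction, made visible (placeholder figures)")
fig.tight_layout()
plt.show()
```

A fuller version could swap the placeholders for figures pulled from a user's own data-export files or platform transparency reports, turning the abstract "invisible transaction" into something personal and countable.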
The danger of believing in the inherent neutrality of technology
There is a belief that technology is neutral, but is it truly? Is neutrality a good thing? If we have to teach technology to be morally good, should we integrate it so deeply into our lives? Exploring the dangers of this belief.
Context
Technology is often perceived as a neutral tool, operating independently of human bias or influence. This belief stems from the idea that technology itself does not make decisions—humans do. However, as artificial intelligence, algorithms, and digital infrastructures shape critical aspects of society, the neutrality of technology becomes questionable. Researchers in fields such as ethics, HCI, and AI bias have demonstrated that technology often inherits the biases of its creators, reinforcing systemic inequalities. Despite efforts to mitigate harm, many current solutions focus on reactive fixes rather than questioning the assumption of neutrality itself. This project seeks to challenge the myth of neutral technology, bringing attention to the ethical consequences of unchecked technological integration in daily life.
Problem Statement
The assumption that technology is inherently neutral can lead to its unchecked influence in decision-making, allowing biases to persist and expand at scale. This impacts marginalised communities disproportionately, as biased algorithms influence hiring, policing, lending, and access to information. If we must actively teach technology to be fair or morally good, can we afford to integrate it so deeply into our lives? This project aims to explore the dangers of this belief, highlighting the risks of uncritical technological adoption and advocating for responsible, transparent, and ethical development.
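To make the inheritance of bias concrete, here is a toy demonstration under entirely synthetic assumptions: a standard classifier is trained on a hiring history where one group was held to a stricter threshold, and it then rates two equally qualified applicants differently. The model is not neutral; it is faithful to the bias it was trained on.

```python
# A toy demonstration (synthetic data, hypothetical thresholds) of how a model
# trained on biased historical decisions reproduces that bias as if it were signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

score = rng.normal(0, 1, n)    # applicant qualification (synthetic)
group = rng.integers(0, 2, n)  # group membership, 0 or 1 (synthetic)

# Hypothetical biased history: group 1 was held to a stricter hiring threshold.
hired = (score > np.where(group == 1, 0.5, -0.5)).astype(int)

# A naive model fitted to this history learns the bias along with the merit signal.
model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Two applicants with identical qualifications, differing only in group:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
# The group-1 applicant receives a markedly lower predicted hiring probability.
```

Nothing in the code "decides" to discriminate; the disparity falls out of optimising against a biased record, which is precisely why treating such systems as neutral tools is dangerous.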