Now What?
As our lives become increasingly intertwined with AI, we must question how to remove bias to ensure equity and equal representation for all. Even if our conversations lead to more questions than answers, prompting others to think about our interactions with AI and each other challenges us to build tolerant and human-centered technology. When narratives of the particular or marginalized are added to the story of humanity, a society that conjoins empathy with progress is formed.
This is only the beginning. In this sparse field of ethics and AI, every voice matters. We encourage you to pause and scrutinize the technology we have accepted into our lives, and to never stop asking questions.
Special thanks to:
Ariana Eily and Jeff Ward for their endless support and kindness throughout this project.
Sabrina Golling, LaQuana Palmer, Salma Alrowaie, Rishabh Java, Padmanabh Kaushik, Samuel Carpenter, and Abigayle Peterson for their insight and contributions.
Smart home graphics created by Amber Park and Marie Cheng. Interviews conducted by Athena Yao.
Sources can be found on our website:
Is technology making us more or less human? What can we learn about ourselves through technology?
Still feel comfortable with smart home devices?
Introduction
How often do you encounter artificial intelligence? If you said not often, let us rephrase: how often do you use Face ID to unlock your phone? Scroll through social media? Or make a simple Google search? All of these instances and more use AI to improve and personalize your experience.
Artificial intelligence has infiltrated our lives, both for better and for worse. Our project, “Camera Digita,” aims to paint portraits of communities that are under-served and underrepresented by emerging technology. With the reaches of big tech rapidly expanding, we want to pause and evaluate the ethics and humanity of these advancements.
Artificial Intelligence:
The ability of a machine to perform tasks commonly associated with intelligent beings, such as the ability to reason, generalize, and learn from past experiences.
Feel free to go back to highlighted objects by clicking on them
Method
Through community-based research, our team of three undergraduate students has discovered many facets of AI's impact on daily life. We cover the topics of bias, human interaction, design justice, and more. To display our findings, we created an interactive Smart Home that grounds our research in human experience.
Automatic vacuum cleaners were used in 14.2 million households in 2018. These popular devices use artificial intelligence to learn the layout of rooms and calculate the most efficient cleaning routes.
A 2020 report estimated that 1 in 4 US adults own a smart speaker. That means that there are 60 million Americans using 157 million devices, with each household having an average of 2.7 devices. AI is being rapidly implemented into these devices to maximize user convenience.
For instance, DeepThinQ 1.0, a platform developed by LG, is able to learn and predict human habits. With facial recognition, it can distinguish between household members. Air conditioning systems built with DeepThinQ can learn a member's preferences and adjust a room's temperature accordingly. With multiple devices linked, the system can control the house independently: turning off lights when the owner leaves and turning on the air purifier when they return.
In 2015, MIT graduate student Joy Buolamwini created the "Aspire Mirror," an invention that allows the user to see inspirational quotes and images reflected onto their face. However, the facial recognition software in the mirror couldn't recognize Joy's face...
...Until she held a white party mask up to her face. The software was able to read the blank white mask before it could recognize a dark-skinned human face.
Upon further investigation, Joy found that other facial recognition software by Microsoft, Amazon, and IBM also performed worse with dark-skinned female faces.
Joy went on to create the Algorithmic Justice League, be featured in multiple documentaries, and even speak at a House of Representatives hearing on facial recognition technology.
Although Joy has won many battles, the war against bias in AI has only just begun. Limitations in AI are unfortunately common; in fact, an instance occurred at Duke University just last year.
The Duke PULSE project used AI to sharpen pixelated faces. However, their algorithm yielded lower success rates for black faces. The creators suggested that this bias was inherited from StyleGAN, a neural network that they used to generate the faces. A study on StyleGAN and bias found that almost three fourths of generated photos were of white faces, with only ten percent representing black faces. The study concluded that the algorithm overwhelmingly favors generating young, white, female faces.
[Diagram: Data → Model, with feedback refining the model]
Where did this bias come from? It may not be the algorithm itself but the data that it was trained on.
Machine learning is an integral part of artificial intelligence. Large amounts of data are used to "train" algorithms. In this way, models can learn without direct human input. For instance, a model may be fed thousands of pictures of animals in order to learn what is and isn't a dog. With each additional piece of data, the model becomes increasingly accurate.
[Diagram: labeled data ("Is this a dog?") is fed into the model, which makes a predicted output ("Guess: Dog"); each prediction is evaluated as correct or incorrect, and that feedback improves the model, which then uses known data to predict.]
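The feedback loop described above can be made concrete with a tiny model. The sketch below is a minimal, hypothetical example of supervised learning: a perceptron that nudges its weights whenever a guess is wrong (the "feedback" step). The two features and every data point are invented purely for illustration; real systems use far richer data and models.

```python
# A minimal sketch of supervised learning: a perceptron learns to
# separate "dog" (1) from "not dog" (0) from two invented features.

def train(data, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # feedback: was the guess correct?
            w[0] += lr * err * x1   # if not, nudge the weights
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, point):
    (w, b), (x1, x2) = model, point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy labeled data: (feature1, feature2) -> 1 = dog, 0 = not dog
data = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]
model = train(data, labels)
```

With each labeled example, the model's weights drift toward a boundary that separates the classes, which is exactly why gaps in the labeled data become gaps in what the model can recognize.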
Gaps in data are often reflected in the model's output. After all, the machine can only learn from what it's given. PULSE was trained on images generated by StyleGAN, which was in turn trained on pictures from Flickr, a photo-sharing platform. While this allows for mass amounts of cheap data, it also means that StyleGAN inherits the gaps in Flickr. Now, is it so surprising that data from a social media platform yields an output that reflects the Western beauty standard?
[Diagram: Flickr has unrepresentative data → StyleGAN learns to make faces from biased data → PULSE has less training on non-white faces → PULSE performs worse with people of color.]
Fixing gaps in data doesn't always mean adjusting it to be proportional. AI only replicates reality, and reality is far from perfect. Often, minority groups must be overrepresented in the data in order to be represented equally in the output.
This shows us that equal is not enough. In a world still full of bias, we need representation to be equitable. We should train AI on the reality we strive for, not the reality that we know was built on inequality.
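One concrete way to "overrepresent" an underrepresented group is to oversample it during training. The sketch below is a minimal, hypothetical illustration of that idea: the 73/10/17 split loosely echoes the StyleGAN study's rough proportions, but the records themselves are invented, and this is not any particular system's pipeline.

```python
import random

def oversample(dataset, group_of):
    """Duplicate examples from smaller groups until every group
    matches the size of the largest one."""
    groups = {}
    for item in dataset:
        groups.setdefault(group_of(item), []).append(item)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate members of smaller groups to close the gap.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Invented, skewed training set: 73% white, 10% black, 17% other.
faces = ([("face", "white")] * 73
         + [("face", "black")] * 10
         + [("face", "other")] * 17)
balanced = oversample(faces, group_of=lambda item: item[1])
```

After oversampling, each group contributes equally to training, so the model sees minority faces far more often than their raw share of the data would allow. Simple duplication is only one option; real pipelines may also collect new data or weight examples instead.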
Reflection Sketches by Marie Cheng
Who deserves a face
Inspired by the Duke PULSE case, I wanted to create contrast between the faces that the media elevates and those that are forgotten or excluded.
In the Image of Your Creator
I wanted to emphasize the impact that people have on their creations, intentional or not. Without greater accountability, the extreme power imbalance between AI creator and consumer will only worsen.
You are walking down the street when a stranger notifies you that there are facial recognition surveillance cameras down the next block. This facial recognition technology will take a biometric photo of you and keep it in a database without your consent. Uncomfortable, you cover your face and try to get through the section quickly. Officers stop you, asserting that they have legal authority to be suspicious of your behavior and even fine you for avoiding the cameras.
Although this sounds like science fiction, in the UK it is an emerging reality. This exact scenario played out in Coded Bias, a documentary on ethics and AI. The facial recognition software meant to identify criminals on the street was inaccurate 98% of the time.
In 2018, Amazon developed a similar software. However, when tested against the 535 members of Congress, it misidentified twenty-eight of them as criminals. Of the misidentified members, a disproportionate number were people of color.
When algorithms are proven to be inaccurate and biased, why are they already being implemented?
Marie Cheng
An Alternate Perspective
Marie Cheng
I also created this zoomed out perspective of "In the Image of Your Creator" to establish scale and to characterize the creator. The box that surrounds him represents the narrow image of who we view as experts. Through this project we hope to expand the definition of "expert" to include members from underrepresented communities and perspectives.
My creations reflect this perspective in collaboration with my mission as a visual storyteller. Through art, I expressed my candid feelings and reactions to the information I absorbed. I hope that my sketches will resonate with viewers in the same way that our research resonated with me.
At times, I felt like a speck shouting under the shadow of an unstoppable wave. What can a team of undergraduates do to change the inevitable path of technology?
When I feel discouraged, I remind myself of the impact of individuals, of one MIT student trying to make a mirror. I remind myself that while there are still people with that queasy feeling in their stomach, there are experts to continue research, the human way.
Marie Cheng
For me, the most difficult part of this project was readjusting my definition of “expert”—I thought, what could I know? Let me ask the experts, the professionals: what is the best way to solve bias in AI? What I really should’ve been asking is “what is the human way?”
When reaching out to communities and conducting my own research, I began to approach an answer. My team and I experienced repulsion, excitement, shock, curiosity. Every emotion that fell between the binary yes and no. Hesitation, intrigue, frustration, ambivalence. Often, we couldn't express what we felt beyond saying that something just felt wrong.
Although my research presents potential avenues for change, the greatest takeaway should be the value of sitting in uncomfortable feelings. As the “experts” obsess over efficiency and profitability, they start to resemble the machines they manufacture. To counteract this force, my team and I rooted our research in human narratives.
Art, AI, and Creativity
Automating the Hiring Process
In 2018, "Portrait of Edmond Belamy" became the first piece of AI-generated artwork sold at a major auction, fetching a staggering $432,500. The signature in the bottom right corner of the painting was defined by an algorithm.
Who is the artist here? Is it the algorithm, or is it the programmers who created the algorithm and trained it with data sets of paintings from throughout human history?
We often consider our creativity, our ability to make art that expresses our emotions and view of our world, as being uniquely human.
If an AI can produce indisputable works of art—works nearly indistinguishable from those made by humans—then what is art, and what does it mean to be creative? Were we ever alone in our creativity?
The Alignment Problem
In 2014, Amazon started a project to use AI to review job applicants’ resumes and automate the hiring process.
What's your initial reaction to this idea?
Over the course of a few years, however, the company realized that the system was biased against women. It penalized resumes that included the word “women’s” (for example, “women in STEM club president”) and downgraded graduates of two all-women’s colleges. It turns out that Amazon’s computer models were trained to evaluate applicants by observing patterns in employee success and submitted resumes over a 10-year period, most of which came from men in this historically male-dominated industry.
What's your reaction to this information?
AI researchers from around the world have been working on the "AI control problem," or specifically the "alignment problem": how do we build AI systems that are aligned with human values and objectives?
Years of research have still not yielded a solution.
How do we define human values or objectives?
Who gets to determine these values or objectives?
How do you define these for AI systems?
Everyone Wants to Live Forever
The Alignment Problem (cont'd)
What if you could upload your consciousness to a computer? What if an artificially intelligent form of you could live on with your memories, your experiences, your hopes and dreams for the future?
Is that AI really you? Would you be living forever?
Or, in a slightly different, more realistic sense: What if, after you died, an AI system could mine all of your photos, videos, and actions on social media to create an AI persona based on you to bring comfort to your loved ones? Is THAT AI really you? Would you be living forever?
Let's say you define an AI's objective as follows: "make all humans in the world happy."
Well, opioids make humans happy, don't they?
Should the AI go around giving everyone drugs to accomplish its objective?
What if you went in another direction with this objective: "solve the issue of climate change and make sure our planet is sustainable."
What if the AI (assuming it has some level of sentience) eventually determines that humans are the problem contributing to climate change?
Should it destroy humanity to save the planet?
If you aren't paying for social media, then how do these companies make money?
As described in interviews with former social media employees in the documentary "The Social Dilemma," companies succeed by capturing as much of our attention as they can and then selling that attention to the highest bidders: advertisers. Algorithms are designed to be addictive--to analyze your browsing habits and react accordingly. Social media sites and search engines often show you what you want to see--resulting in issues such as political polarization and the spread of fake news and misinformation.
When you start a quick Google search...
The results that pop up differ from person to person.
These different auto-fill results are based on a variety of factors, from your search history to previous searches by others around your location. In short, Google shows you what it thinks you want to see--not necessarily what is objectively true.
When millions of people around the world are seeing search results and differently tailored realities on their screens--realities in which the people on their version of the Internet think a certain way and say the same things as they do--is it a wonder that political polarization and misinformation have run rampant?
What comes to mind when you hear the term AI?
“AI is in everything. Google is AI...Cortana, Alexa, Siri are AI. It makes life easier—machine learning. I think it’s the only thing you use that becomes smarter with time.” - Salma Alrowaie (Saudi Arabia)
“I know that there’s a lot of AI around me, but it’s not as visible as it should be.” - Sam Carpenter, tech enthusiast (Massachusetts, USA)
"As a computer science major, what comes to mind is the dynamite decision making algorithm, but speaking of the stereotype within Indian society, what comes to mind is "robot"...to give a name to it, Sophia." - Padmanabh Kaushik, computer science student (India)
"The word AI is super ambiguous...it’s basically something that, in my personal definition, emulates human behavior in a much faster way...what I find really interesting is that AI was originally based on the human brain." - Abigayle Peterson, founder of AI-powered mental health tool Magnify Wellness (Washington, USA)
What should we be using AI for? What positive impacts have you seen in your community?
"I’m always thinking about how AI and technology can be used to address the top needs in North Carolina: food, transportation, interpersonal safety, housing...and how we can use technology and innovative methods of addressing these needs.” - LaQuana Palmer
“...[As an example], when the pandemic first hit, an AI department managed by the Saudi government used a token app to tell people if they had been in contact with someone that might have COVID, register for a test, take the vaccine...They’re not necessarily spying; they’re using the data that we’re allowing them to have in order to make sure that everyone’s safe.” - Salma
"I noticed that there's a gap in the mental health space...fear of stigma...Maggie [the AI-powered chatbot] helps people find a sense of community..." - Abigayle
What should we be thinking about when we are incorporating AI innovations into communities?
“Access is key...who is able to access [the technology] and what is the availability for that community?...How do we use technology to bring people closer together? How do we use technology to be able to have these discussions?
Really think about community impact. As you’re developing AI, how is that going to impact the community not just for a day, not just for two days, [but] for a long period of time, for generations...what is the next thing that could help us with those unmet social needs across the United States?” - LaQuana
“You have to raise awareness...because there’s the common misconception that ‘what I don’t know can’t hurt me.’” - Salma
“In India, it’s a constant race. We’re just running fast and we can’t just stop to look at how far we’re leaving other people behind...otherwise, we will lose the race...I don’t think people really care [or know] about the devices that we have...that’s where the fear arises.” - Padmanabh
How should we prevent bias in AI technology?
“What prevents biases is making systems more equitable to begin with...if you try to standardize it, you lose the nuance.” - Sabrina Golling, Rural Forward North Carolina (United States)
“I keep going back to representation because when I think about AI, that’s what comes to mind--ensuring that we have enough representation when you’re trying to develop something artificial but also considered intelligent...when you don’t have enough voices at the table, it can do more harm than good.” - LaQuana
“You know how you have certifications? For example, a medical platform needs to be HIPAA compliant. If you have a website/video with medical applications, it needs to be ADA compliant. In the case of AI...they need to be data set certified, which would involve showing that the data they use is unbiased, basically setting up some sort of multi-step/multi-stage clearance and certification process for companies to verify that their AI systems are not biased and their AI is tested before they can be used by medical/legal firms.” - Rishabh
“AI is a system that’s man made, so if you don’t have a diverse team working on the system itself, then obviously it’s going to be biased. It’s going to catch on to the humans’ bias.” - Salma
Where do you see the future heading? What would you envision as an “ideal world” involving humans and technology?
"I do see us reaching a kind of middle ground in which AI and humans are working together, where AI is augmenting humans rather than trying to take over. I might be being too optimistic, but I do believe that that’s what’s going to happen.” - Rishabh Java, tech entrepreneur (United Arab Emirates)
“[My ideal world] is one where everything is open source and everyone has enough technological literacy to at least understand how the technology that they use works...for example, I’ve decided that Windows Hello is worth my time. It’s kinda creepy...but I've saved a lot of time with it and I’m willing to make that trade-off...I feel like a lot of people can responsibly make that trade-off...[My ideal world] would be where everyone has the information and ability to make that trade-off” - Sam
“I’ve always thought of the future of AI as in AI taking over the world...[An] ideal world sounds very unrealistic to me...Humans are very complex creatures and unless you’re living in a dystopia in which you’re trying to make everyone look alike, think alike, like copy-paste versions of each other, then I don’t think we’re all going to all agree on everything and have peace in general...which, I know, I’m being very cynical…but you never know what the future holds.” - Salma
“My ideal world would be a world where everyone had the access and opportunity to understand the basics of AI and computer science...The more we are informed and educated about AI, the more we can make conscious decisions about what AI should or should not do.” - Abigayle
Athena Yao
"Humans are creating themselves in their own image and likeness quite literally. Racism is becoming mechanized, robotized." - Coded Bias
“AI is a system that’s man made, so if you don’t have a diverse team working on the system itself, then obviously it’s going to be biased. It’s going to catch on to the humans’ bias.” - Salma Alrowaie, Saudi Arabia
This painting explores the idea of "pulling back the veil" on the algorithms that dominate our lives. A robot arm towards the right symbolically holds up a mirror "to society," reflecting the biases embedded in humanity--biases that are not eliminated, but rather continually perpetuated, by resulting AI systems.
A World Where Many Worlds Fit
Athena Yao
What would it look like if everyone lived together on one street, in a world where many worlds fit? How would technology and AI play a role in building and shaping such a world?
Checkmate on Humanity
Athena Yao
The advent of social media and technology has been described as holding the potential to result in a "checkmate on humanity": an existential threat that will turn the interconnectedness of our world against us. Who is responsible for ensuring that this will not happen? Is this an issue for corporations? For governments? For all of us?
At its core, AI—a form of intelligence designed to simulate that of humans—stands at the precarious intersection of technology and humanity. Throughout this project, I’ve come to recognize the intricate complexities of our relationship with AI in the world around us and consider what this says about what it means to be human: If AI can produce indisputable works of art, then was creativity ever unique to us, and what does it mean to be creative? If AI algorithms are a reflection of data sets originally created by humans, then what do the resulting biases say about us? To what extent can and should automation be used to replace human abilities, and where do we draw the line between humans and machines?
These are just a few of the questions I considered throughout my team’s research. As I interviewed and connected with members of different communities from across the United States and all around the world—our conversations often resulted in more questions than answers.
Oftentimes, I’d end up feeling lost and helpless in the belief that technological “progress” would continue to advance in spite of the issues and consequences that our research had begun to reveal.
I wondered what our musings and artwork could do to stem the flow towards a dystopian world in which (as in the Jurassic Park quote) researchers and tech enthusiasts “were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Yet as I looked past the initial despair, I realized that these conversations marked the first step towards building a brighter future, one in which our technology empowers all. It is my hope that these first steps lead to more intentional strides in the right direction; I hope that they compel us to take the time to pause while we run the race of technological progress in order to reflect upon how far we’ve come, recognize those we’ve left behind, and think about why we continue to run the race in the first place.
As we raise public awareness, elevate underrepresented communities, and ask the difficult questions for which there are no easy answers, we build towards a future in which there is transparency, equity, and intentionality—one in which, as described during an interview with LaQuana Palmer of Rural Forward NC, we “use AI to help and enhance [rather than] replace” humans, one in which technology and humanity truly exist in harmony.
Athena Yao
The software used to predict Eric Loomis’s risk of re-offense is known as COMPAS. There is substantial evidence that COMPAS is biased against African Americans: analysis shows that COMPAS predicts African American convicts to be 1.87x more likely to reoffend than their white counterparts.
Despite this, the Wisconsin Supreme Court ruling allowed COMPAS to be used during Loomis’s sentencing with nothing but a small warning that there could be mistakes. A study determined that training COMPAS on a balanced dataset of white/black and reoffend/not gave more fair and accurate predictions. In comparison, an unbalanced dataset had much higher false positive and negative rates.
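Audits like the study described above typically compare error rates across groups. The sketch below shows one such comparison: a per-group false positive rate, the share of people who did not reoffend but were still flagged high-risk. The records and group labels are invented for illustration; this is not actual COMPAS data or methodology.

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were flagged high-risk."""
    did_not_reoffend = [r for r in records if not r["reoffended"]]
    flagged = [r for r in did_not_reoffend if r["predicted_high_risk"]]
    return len(flagged) / len(did_not_reoffend)

# Invented audit records: group, actual outcome, and the model's prediction.
records = [
    {"group": "A", "reoffended": False, "predicted_high_risk": True},
    {"group": "A", "reoffended": False, "predicted_high_risk": False},
    {"group": "B", "reoffended": False, "predicted_high_risk": False},
    {"group": "B", "reoffended": False, "predicted_high_risk": False},
    {"group": "A", "reoffended": True,  "predicted_high_risk": True},
    {"group": "B", "reoffended": True,  "predicted_high_risk": True},
]

# Compute the false positive rate separately for each group.
by_group = {}
for g in {r["group"] for r in records}:
    by_group[g] = false_positive_rate([r for r in records if r["group"] == g])
```

In this toy data, group A is wrongly flagged half the time while group B never is; a gap like that between groups, even when overall accuracy looks fine, is exactly the kind of disparity such audits are designed to surface.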
--Sheila Jasanoff, Ethics of Invention: Technology and the Human Future
Algorithmic Justice
Athena Yao
"[The use of AI in the criminal justice system is] trying to eliminate the negative personal biases that people bring in making decisions in the criminal justice system but in reality, it takes away the ability for more nuanced consideration...when you put people in buckets, it always stands the risk for turning into a serious equity problem." - Sabrina Golling, Rural Forward North Carolina
AI, Manifestations of Human Desire
Amber Park
Based on the facial recognition technology that we studied, I wanted to create a piece that depicted technology’s creeping influence on humanity. On the flip side, however, we are the ones who create insidious innovations and have the power to change their course. We control who will win the struggle between the good (implementation of ethics and equity into our future technology) vs evil (autocracy of mega corporations and exclusion of/inaccessibility for marginalized groups).
Amber Park
I wanted to be creative and imagine what AI could look like in the future. The robot judge was inspired by the COMPAS system, in which the data it processed ultimately determined the sentence of Eric Loomis. Though they won’t replace police forces anytime soon, automated police robots are not an impossibility; in some places, such as Dubai, robots are being used to report crime, enter dangerous locations, and detonate bombs.
Amber Park
When I was younger, I really enjoyed visiting Walmart and striking up conversations with the cashiers, since each conversation was welcoming and refreshing. However, my local Walmart changed half its registers to self-checkout ones; it still saddens me every time I walk past that silent section of the store, filled with only the beeps and automated voices of the machines.
The narrative of technology often centers around the bad and the ugly. While some stories are based on myths and paranoia, most are products of mainstream media. We live in an age where we are increasingly polarized and isolated because of our assumptions and biased interpretations of the dominant culture’s single story. Though AI may seem like a domain solely for experts, its effects are far-reaching and extend into the daily lives of everyday people; in fact, the experiences of these everyday people, especially marginalized communities, matter most in thinking about our future with AI.
My work with the “Camera Digita” team shed light on both the real risks and potential boons of AI. With the goals of empowering all in mind, I began to question those responsible for the bad data behind algorithmic bias and learn how to better facilitate discussions about ethical tech through design justice.
Addressing intolerance starts with recognizing that we have biases and often subject people to what we believe to be true. A key part of this is to consider the nuances of diverse experiences, allowing us to develop the mindset and attitudes necessary to dismantle inequity. When narratives of the particular or marginalized are added to the story of humanity, a society that conjoins empathy with progress is formed.
Ultimately, we are the creators of our own destiny with AI. From implementing smart AI gadgets in our homes to fighting for privacy rights against non-consensual facial recognition, we determine how we engage with evolving technology. With that, I hope that my team and I were able to capture some of these perspectives in our final product. We hope that it becomes a mission for all to become each other’s ally and advocate, navigating complex socio-technical systems together.
Much thanks,
Amber Park