AI is Everywhere
Have you unlocked your iPhone with your own face? Have you listened to a generated playlist on Spotify? Have you taken a group photo on your phone that recognizes your friends’ faces? Have you scrolled through your feed on a social media or streaming service? If the answer to any of these questions is yes, then you have interacted with a hotly debated piece of technology: Artificial Intelligence!
Artificial Intelligence (AI) and Machine Learning (ML) are how we teach computers to find patterns and make decisions in a way similar to humans—but it’s not a magic app. These systems, while seeming like miracle solutions to the world’s problems, are not always fair. Both AI and ML are sciences that combine math, computing, and people’s data as resources. When we understand how these systems work, we can make smarter decisions about the technology we use every day. We can also ask better questions: Who built this? Is it fair? Is it helpful? This makes science more exciting, more honest, and more responsible.
What is Artificial Intelligence?
Artificial Intelligence is an overarching term for technology that allows computers and other machines to learn like humans, process information, solve problems, and make decisions. Development of this field started in the 1950s, when mathematician Alan Turing, best known for helping crack secret Nazi codes during World War II, asked a groundbreaking question: Can machines think? He imagined a future where computers could learn and make decisions. His “Turing Test” became a famous way to ask whether a machine’s behavior was smart enough to be called “thinking.” Soon after, computer scientists like Arthur Samuel developed early machine learning programs, such as a game of checkers that got better the more it played. In 1958, Frank Rosenblatt created the “perceptron,” one of the earliest models that mimicked how the human brain might learn. Over the last 70 years, great strides have been made toward creating computers with intellect that matches or exceeds that of human beings.
What is Machine Learning?
Machine learning, one of the most popular subsets of AI, is a kind of computer science where machines “learn” by studying large amounts of data from many different sources. This means instead of being told exactly what to do, the machine looks at tons of examples and figures out the patterns.
Consider how your smartphone recognizes a user’s face to unlock the screen. Instead of being programmed with a generic set of facial features, the phone is trained using thousands of images labeled as “face.” By analyzing these examples, the system learns to detect the characteristics that define a human face—such as the arrangement of eyes, nose, and mouth—and can then apply this knowledge to new, unlabeled images, including the user’s own face.
Machine learning systems can analyze various types of data: images, numbers, words, sounds, and more. This process is more akin to teaching than traditional coding. A human designs the learning environment, selects the goals, and determines what a successful outcome looks like. While machines perform the pattern recognition and decision-making, it is human insight and oversight that guide the process.
The essential takeaway is that machine learning allows computers to learn from examples and improve over time. However, behind every intelligent system is significant human effort—in gathering data, designing models, and ensuring that the system behaves in responsible and meaningful ways.
At the core of machine learning is the ability to detect patterns in data and use those patterns to make predictions. There are two primary methods that machines use to learn: supervised learning and unsupervised learning.
In supervised learning, the system is trained using data that includes both input and the correct output. For instance, a model might be given thousands of labeled images—each marked as “cat” or “dog.” By studying these examples, the system learns the features that distinguish one category from the other. When presented with a new, unlabeled image, it can then predict whether it is more likely a cat or a dog. This approach is highly effective when large amounts of labeled data are available and the task is well-defined.
Much like people, ML systems reason by drawing inferences. In supervised learning, a decision tree makes decisions by asking a series of questions based on the features of the data. It starts at an initial root node and moves down the tree, following branches that correspond to the answers (yes/no). At each internal node, it splits the data based on the feature that best separates the outcomes. This process continues until it reaches a leaf node, which gives the final prediction. Essentially, it’s like a flowchart that learns rules from data to make decisions.
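To make this concrete, here is a minimal sketch of a supervised decision tree using the scikit-learn library. The features (weight and ear length) and the toy numbers are invented purely for illustration; the labels are the “correct answers” the model learns from.

```python
# A tiny supervised-learning example: a decision tree that learns
# to tell "cat" from "dog" using two made-up features.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is one animal: [weight in kg, ear length in cm] -- toy numbers.
X = [[4, 5], [5, 6], [3, 4], [20, 10], [30, 12], [25, 9]]
y = ["cat", "cat", "cat", "dog", "dog", "dog"]  # the correct labels

tree = DecisionTreeClassifier(max_depth=2)  # keep the flowchart small
tree.fit(X, y)  # the "training" step: learn rules from labeled examples

# Print the learned flowchart of yes/no questions.
print(export_text(tree, feature_names=["weight_kg", "ear_length_cm"]))

# Ask the tree about a new, unlabeled animal.
print(tree.predict([[22, 11]]))  # likely ['dog']
```

Running this prints the yes/no questions the tree invented for itself, which is the “flowchart learned from data” described above.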
Unsupervised learning, on the other hand, involves training a system without labeled outcomes. Here, the machine is asked to examine the data and organize it based on similarities and patterns it detects. For example, an unsupervised algorithm might analyze a large collection of music and group songs based on their tempo, instrumentation, or rhythm—without knowing the genre or artist. This kind of learning is particularly useful for discovering hidden structures or categories within complex datasets.
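As a hedged sketch of this idea, the example below uses scikit-learn to group made-up songs by tempo and loudness without ever being told their genres; the feature names and numbers are invented for illustration.

```python
# An unsupervised-learning sketch: cluster songs by two made-up features
# (tempo in beats per minute, loudness from 0 to 1) with no labels at all.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.cluster import KMeans

songs = [
    [60, 0.2], [65, 0.3], [70, 0.25],     # slower, quieter tracks
    [150, 0.8], [160, 0.9], [155, 0.85],  # faster, louder tracks
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(songs)  # the algorithm invents its own groupings

print(groups)  # e.g., [0 0 0 1 1 1] -- two clusters it discovered on its own
```

Notice that no one told the algorithm what “slow” or “fast” means; it simply found structure hidden in the numbers.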
Although machine learning systems can achieve impressive results, they do not understand information the way humans do. Their predictions are based on mathematical models, statistics, and probabilities—not emotions, context, or meaning. A machine may learn to recognize a cat in a picture, but it has no concept of what a cat actually is beyond the data it has been trained on.
Another crucial consideration is the quality of the data used in training. Machine learning models are only as good as the data they are given. If the data is biased, incomplete, or inaccurate, the model will likely produce flawed results. This makes data collection and preparation an essential part of the process.
Ethical Concerns of AI
AI is changing the way we live, learn, and work, but there are many things we must consider with this new technology. While these tools can be helpful and even fun, they also raise serious questions about fairness, safety, and responsibility. Some AI systems exhibit hidden biases, treating people unfairly based on how they speak or where they’re from. Others are used in ways that increase policing in certain communities, often making long-standing problems worse instead of better.
But the impact of AI goes beyond people; it affects our planet too. There are real concerns about how much electricity and water AI uses, how it can spread false information online, and how hidden workers behind the scenes make these systems function. Understanding all of this helps us use AI more wisely and push for a future that’s better for everyone.
Most AI systems, especially the large ones called “generative AI,” are trained using supercomputers that process huge amounts of data. These computers run day and night for weeks—or even months—to help an AI model learn patterns in language, images, or speech. Training one large AI model can use as much electricity as hundreds of U.S. homes use in an entire year. And that’s just for training. Every time someone uses a chatbot or asks AI to generate an image, it requires even more electricity. Over time, these small actions can add up to a lot of energy use across the world.
The energy used by AI systems usually comes from data centers—giant buildings full of powerful computers. These data centers often require enormous amounts of cooling to prevent machines from overheating. In some cases, they use millions of gallons of water each day to stay cool. The Guardian reported in April 2025 that Elon Musk’s AI company, xAI, planned to build a large data center near Memphis, Tennessee. The project would consume vast quantities of electricity and water—resources that local communities may also need for homes, farms, and businesses. Some residents are now concerned about how these AI projects could affect their access to clean water and reliable power.
Another challenge is that AI’s environmental impact isn’t the same everywhere. Wealthier regions, where most AI companies are based, often benefit the most from AI technology. Meanwhile, the energy and water costs can fall more heavily on less wealthy or rural areas—especially in places where data centers are built. This means that the environmental burden of AI can be unfairly shared, with some communities carrying more of the costs while receiving fewer of the benefits.
Anyone can be fooled by AI-generated images or text. The rise of generative AI applications makes the spread of false information very easy. Often when reading information, especially on social media platforms, the user is not prompted to interrogate the source from which the information is being shared; this can lead to absorbing untrue or deliberately malicious claims.
Since 2020, there has been a large influx of fraudulent information surrounding contentious topics such as climate change, extreme weather events, COVID-19, and various political elections. Thanks to AI tools that can generate realistic images, voices, and text, fake content can now look incredibly convincing. During recent U.S. election cycles, for example, researchers at the University of Maryland observed a rise in AI-generated content, including misleading images and even fake robocalls. These tools allow people to spread misinformation faster and more widely than ever before.
One of the biggest challenges with AI-generated misinformation is that it often plays on strong emotions like fear or anger. According to researchers, posts that stir up emotions are more likely to get shared—whether they’re true or not. Some experts warn that the goal of certain types of misinformation isn’t even to make people believe a specific lie, but to make them doubt everything they see online. This kind of confusion weakens trust in news, science, and other important sources of information.
To make matters more difficult, it’s getting harder to tell the difference between real and fake content. AI-generated videos, known as deepfakes, and realistic text can fool even experienced professionals. Tools that claim to detect AI-generated content aren’t perfect either—they often make mistakes or miss subtle clues. This means we need better technology to help us identify misinformation, but we also need to be smarter about how we consume and share content.
Thankfully, schools and universities are starting to take action. Educators are helping students develop media literacy skills—like checking multiple sources, being aware of emotional reactions, and asking critical questions about what they see online. Some new AI tools are also trying to help by adding invisible “watermarks” to generated content. This could make it easier in the future to spot AI-made images or videos.
In the meantime, there are simple steps we can all take to protect ourselves and our families. If a post makes you feel very angry or upset, take a moment before sharing. Search for the same story on trusted news sites, and try a reverse image search to see where a picture really came from. These habits can help stop misinformation from spreading and keep communities better informed.
AI systems, especially large language models, reflect not only explicit but also covert biases, including racial stereotypes tied to dialects such as African American English (AAE). Researchers from Stanford HAI and the Allen Institute found that while language models have become adept at suppressing overtly racist content, they still assign negative attributes—like “lazy,” “dirty,” or “stupid”—to speakers of AAE, compared to Standard American English, even when race isn’t mentioned. Alarmingly, these covert biases translate into decisions about employment and criminality, causing AAE speakers to be matched with lower-prestige jobs and judged more harshly in hypothetical court scenarios—even facing a higher likelihood of convictions or severe sentences.
This covert racism isn’t limited to language. AI systems often mirror societal biases due to skewed data and lack of transparency, raising concerns about discrimination in many areas including education and employment. For example, AI-driven hiring tools trained on historical resumes may unconsciously reproduce gender and racial disparities embedded in past hiring trends. Without careful, inclusive design, AI can go from seemingly objective to unjust.
AI usage in the criminal justice system is particularly harrowing. Tools such as predictive policing algorithms, software developed to forecast crime, often rely on historical data heavily skewed by over-policing of Black communities. They actively reinforce long-held assumptions about minority communities by sending more law enforcement resources toward already over-surveilled neighborhoods. Institutions such as the NAACP call for transparency, regulatory oversight, and legislative safeguards to prevent these AI tools from perpetuating systemic racial bias.
Mechanically, AI has difficulty recognizing certain types of people as well. Facial recognition technology is deeply flawed and largely unregulated. Studies highlight a large disparity in accuracy: the error rate can be as low as 0.8 percent for light-skinned men but is far higher for dark-skinned women. Because datasets used to train these systems frequently overrepresent white men, women, people of color, and nonbinary individuals are much more likely to be misidentified. In law enforcement contexts, these misidentifications can have serious, even life-threatening, consequences—such as wrongful arrests and invasive surveillance disproportionately targeting marginalized communities.
Luckily these biases are being addressed. Black in AI, a global organization of Black researchers in artificial intelligence, has long emphasized the lack of diversity in AI development. Their work highlights how bias in AI doesn’t only come from the data, but also from the people who design and train these systems. When the field is dominated by voices from a narrow set of backgrounds, the tools they create may ignore—or even harm—those who are left out.
AI technologies can seem impressive, even magical, as they answer questions or respond to commands instantly. But AI is not magic. It relies on thousands of hours of human labor—some of it hidden, and some of it exploitative. Understanding how AI systems are built means recognizing the real people whose work makes these systems possible.
Behind every AI model is a massive amount of data. AI systems “learn” by studying millions of examples—conversations, images, videos, or texts. But this data doesn’t organize itself. It has to be labeled, sorted, and cleaned by humans. In many cases, this work is outsourced to workers in low-income countries who are paid very little to do difficult, repetitive, and often emotionally distressing jobs.
To build some large language models, such as ChatGPT, companies outsourced work to people in Kenya who were paid less than $2 an hour to review and label harmful and disturbing content. These workers were exposed to graphic material, including instances of violence, abuse, and hate speech, in order to help the AI model recognize and block such content later on. One worker described the job as “torture,” yet their work was essential to building a “safer” AI system for users worldwide. Reporting on these conditions made clear that while AI is marketed as cutting-edge and clean, its creation involves labor conditions that are neither modern nor fair.
This kind of labor has been called “ghost work” by experts because the people doing it are nearly invisible in public conversations about AI. Their names are rarely mentioned, and their contributions are not celebrated. Yet without this workforce, much of it made up of people on different continents, AI would not function at all.
In Empire of AI, author Karen Hao draws parallels between AI and historical labor systems built on inequality and extraction. Just as industrial empires depended on underpaid workers to fuel their machines, modern AI companies depend on vast networks of hidden labor to power their algorithms. This work is often outsourced, hidden from consumers, and stripped of dignity and visibility.
All of this raises important questions: Who is building AI, and under what conditions? Who benefits most from its success? And who bears the burden of its development?
As students and families learn more about AI, it’s important to go beyond the surface. While the technology can be helpful and exciting, its development often relies on systems of inequality.
AI-vengers Assemble
As AI systems become more powerful and more embedded in our daily lives, a growing number of researchers are asking tough questions about who these technologies serve—and who they harm. While AI is often marketed as neutral or objective, it reflects the values, assumptions, and priorities of the people and institutions who design it. That’s why it matters who gets to be part of the conversation. Around the world, scientists, technologists, and advocates are working to ensure that AI is more fair, transparent, and accountable—not just faster or more profitable. They are challenging the dominance of Big Tech, bringing attention to hidden harms, and proposing new ways to build technology that is rooted in equity, safety, and sustainability.
The following changemakers—Timnit Gebru, Dr. Joy Buolamwini, and Dr. Sasha Luccioni—are leading that charge. Each approaches the challenges of AI from a different angle: ethics, racial justice, and environmental sustainability. Together, their work reminds us that responsible AI isn’t just about better code. It’s about reimagining who builds these systems, how they’re used, and what kind of future we want to create.
Timnit Gebru
Timnit Gebru is the founder of the Distributed AI Research Institute (DAIR), an organization that advocates for diversity within AI research and for decentering Silicon Valley’s global power over artificial intelligence research. After being fired from Google in late 2020 for voicing concerns about the company’s AI ethics practices, she has worked to build spaces that center underrepresented communities within the field. DAIR’s belief is that the harms of this rapidly evolving industry are not set in stone and can be mitigated by prioritizing the health and wellbeing of researchers, expanding the range of identities among developers in the space, and decentralizing the power Big Tech holds to advance its products. She pushes us to question whether AI is the right solution for simple problems when it carries the resource and labor risks described above.
Dr. Joy Buolamwini
Dr. Buolamwini is a creative science communicator who founded the Algorithmic Justice League, an organization that blends art and research to explain AI’s impact on society at large. Building on her own experiences with facial recognition software that failed to recognize Black faces, she decided to dedicate her life to holding AI developers accountable by pointing out how human biases are coded into these systems. For instance, algorithms used in law enforcement prioritize predictive policing and risk assessment technologies that perpetuate unjust racial disparities within the incarceration system. Taken together, from a bird’s-eye view, these elements lead to loss of personal and communal freedoms, economic stagnation, and social stigmatization. Critics argue that white supremacist ideologies are being built into new technologies, with lasting effects.
Dr. Sasha Luccioni
Dr. Sasha Luccioni is tackling the environmental risks of AI. She started with a simple yet important question: “When I Google something, how much energy is that using?” From there, she began investigating computing’s environmental footprint. She created a tool that calculates how much energy a piece of code uses. This work led to the finding that the CO2 emissions of some AI applications exceed 50 metric tons over their lifetime, roughly equivalent to the emissions of flying from New York to London 80 times. She now works as the Climate Lead at Hugging Face, an international open-source AI company, where she is responsible for developing research and science communication strategies to advocate for a more sustainable AI-powered future.
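For readers who want to try this style of measurement themselves, the open-source Python library CodeCarbon estimates the energy use and emissions of a block of code. This sketch is an illustration of that kind of tool, not necessarily the exact one described above, and the workload shown is an arbitrary stand-in.

```python
# A sketch of measuring the carbon footprint of a piece of code with the
# open-source CodeCarbon library (pip install codecarbon). This illustrates
# the kind of tool described above; the workload below is an arbitrary example.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # estimates energy use from CPU/GPU/RAM
tracker.start()

# Some stand-in "work" -- imagine this is training a model instead.
total = sum(i * i for i in range(10_000_000))

emissions_kg = tracker.stop()  # returns estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Even a tiny number here is a reminder that every computation has a physical cost somewhere in the world.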
Me-Chine Learning
Quick, Draw! is a fun, fast-paced online game developed by Google Creative Lab that also serves as a fascinating demonstration of how machine learning works. At first glance, it looks like a simple doodling game—but behind the scenes, it’s powered by a sophisticated neural network designed to learn how humans draw everyday objects.
The game asks players to sketch recognizable objects within a limited time—typically 20 seconds per drawing—while an AI model attempts to guess what the player is drawing in real-time. Each drawing contributes to a global dataset used to improve the system’s ability to recognize visual patterns, helping researchers better understand how machines learn from human input.
It’s both a game and a research project, showcasing how artificial intelligence is trained using large datasets to become more accurate over time.
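The real system learns from stroke-by-stroke drawing data with a more sophisticated neural network, but a much-simplified sketch of the same idea looks like this. It assumes you have downloaded two categories of the publicly released Quick, Draw! dataset as 28x28 bitmap .npy files; the file names below are placeholders for wherever you saved them.

```python
# A simplified sketch of how a doodle recognizer could be trained, using a
# small neural network from scikit-learn. Assumes two categories of the
# public Quick, Draw! dataset saved locally as 28x28 bitmap .npy files.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

cats = np.load("cat.npy")[:5000]          # each row is one doodle, flattened
bicycles = np.load("bicycle.npy")[:5000]

X = np.vstack([cats, bicycles]) / 255.0   # scale pixel values to 0-1
y = np.array(["cat"] * len(cats) + ["bicycle"] * len(bicycles))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=30)
model.fit(X_train, y_train)               # learn from thousands of doodles
print("Accuracy on new doodles:", model.score(X_test, y_test))
```

The more (and more varied) doodles the model sees, the better it gets, which is exactly why every player’s drawings make the system smarter.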
Go to https://quickdraw.withgoogle.com using any modern web browser.
Click “Let’s Draw.”
For each round, the game gives you a word like “bicycle,” “cake,” or “toothbrush.” You have 20 seconds to draw each prompt.
Use your mouse (or a finger/stylus on a touchscreen) to sketch the object as best you can. The AI voice will try to guess what you’re drawing as you draw.
At the end of the round, you’ll see all your drawings along with whether the AI guessed them or not. You can also view similar drawings from other players and compare them to yours.
What did you observe?
Quick, Draw! is more than just a drawing game—it’s a global collaboration in building smarter AI systems. By playing, you’re not only having fun, but also contributing to the future of artificial intelligence. It’s an example of how interactive design and machine learning can come together through play.
What’s the Science?
Computer science is the study of how we use computers to solve problems. It combines logic, math, and creativity to help people design programs, build websites and apps, send messages, play games, or even explore space. Unlike just using a computer, computer science is about understanding how and why computers do what they do—and imagining what they might do next.
Computer scientists write instructions, called algorithms, that tell a computer exactly how to complete a task, like sorting a list or calculating a rocket’s path. This process is called programming. But as computers grow more powerful, scientists have found new ways for machines to go beyond step-by-step instructions. This is where machine learning comes in. Instead of telling a computer exactly what to do, we show it examples and let it find patterns—so it can learn, in its own way, how to solve the problem.
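Here is a minimal sketch of what those exact, step-by-step instructions can look like, using one simple way of sorting a list (the numbers are arbitrary):

```python
# A minimal example of an algorithm: exact, step-by-step instructions
# that sort a list of numbers from smallest to largest (insertion sort).
def insertion_sort(numbers):
    result = list(numbers)             # work on a copy
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger values one spot to the right...
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current        # ...then drop the value into place.
    return result

print(insertion_sort([5, 2, 9, 1]))    # [1, 2, 5, 9]
```

Every step is spelled out by a person. In machine learning, by contrast, the computer is shown examples and left to work out the rules for itself, as in the sketches earlier in this section.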

Play is one of the most powerful ways to learn—especially when it comes to understanding something as complex as artificial intelligence. When you draw in a game like Quick, Draw!, train a model with Teachable Machine, or build a silly chatbot just for fun, you’re actually doing what real scientists and engineers do every day: testing ideas, noticing patterns, and learning from trial and error. Making mistakes isn’t just okay—it’s part of the process! In fact, computers learn the same way. They make guesses, try things out, and improve with feedback. That’s what experimenting is all about. Whether you’re drawing, building, or playing with a new tool, you’re training your brain to think like a creator, not just a user. The more you play and explore, the more confident and curious you’ll become—and that curiosity is the first step toward making real discoveries.
You can make a difference in how technology grows. Right now, by asking questions, learning about how AI works, and thinking critically about what it can and can’t do, you’re preparing yourself to be a future leader in science and technology. The more voices and perspectives we bring into the world of AI, the better it becomes for everyone. When people from all kinds of backgrounds get involved, whether they are girls, boys, artists, or anyone else, we build tools that are more fair, more creative, and more kind. So keep wondering, keep questioning, and keep imagining. As Timnit Gebru emphasized, the future of AI isn’t written yet, and your ideas could help write it.