
Algorithms, AI, and First-Year Academic Skills

Ethical Concerns of Using AI

AI WASN'T BUILT ON ETHICAL OR EQUITABLE WORKING STANDARDS.

"Mophat Okinyi, a quality-assurance analyst on Mathenge’s team, is still dealing with the fallout. The repeated exposure to explicit text, he said, led to insomnia, anxiety, depression, and panic attacks. Okinyi’s wife saw him change, he said; not long after, she left him. “However much I feel good seeing ChatGPT become famous and being used by many people globally,” Okinyi said, “making it safe destroyed my family. It destroyed my mental health. As we speak, I’m still struggling with trauma.”

"The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks."


AI ISN'T NEUTRAL.

"“The real question of alignment is whose preferences are we aligning LLMs with and, perhaps more importantly, who are we missing in that alignment?” asks Diyi Yang, professor of computer science at Stanford and senior author of the study..."

"In Sora’s world, everyone is good-looking. Pilots, CEOs, and college professors are men, while flight attendants, receptionists, and childcare workers are women. Disabled people are wheelchair users, interracial relationships are tricky to generate, and fat."


AI ISN'T SENTIENT.

"Moreover, through a close attention to human flourishing, a virtue approach tends to foster richer reflection on specific (often intrinsic) goods that technological change can jeopardize but that are not easily named or dealt with through the other approaches. Because of its emphasis on character development through proper socialization and communal moral instruction, virtue ethics forces us to ask how AI might alter our social practices and our very conception of the good."

"Rather than checking and managing lay persons anthropomorphic tendency it tends to support it. But to the extent that the tendency to anthropomorphize shapes how people behave toward the anthropomorphized entity, such anthropomorphism has ethical consequences. First, perceiving AIs as humanlike entails considering them as moral agents, and their actions as the result of an autonomous decision-making process (Waytz et al. Citation2010), thereby having a normative impact that they should not have."

"A lot of our technology we judge according to its utility—if it’s functioning well and delivers what we want and value. My worry is that we could let those criteria contaminate our ability to recognize people’s intrinsic dignity. We must remember that dignity is not dependent on performance or utility. Noreen mentioned social media, where young people, in particular, have to be able to project an attractive image of themselves—one that’s not necessarily true, but that will command the attention of others. I think we have to rediscover that sense of the intrinsic worth and value of a person that’s not simply down to their functionality or usefulness."

"ChatGPT is good enough at doing what it’s designed to do, predicting text continuation based on statistical patterns in the huge corpus of textual data it is fed, to give readers a strong impression of having certain personal properties, and specifically, something like understanding or at least a kind of sensitivity to the meanings of terms in natural language. But this is also the reason why it is not likely to be as reliable or trustworthy as it appears: it is not actually tracking the meaning of our words, or the logic of our inferences, reasoning, and commonsense thinking.

When I answer someone in a conversation, I do so by understanding what they are asking and responding to that—I don’t consider the likelihood of that string of signs being followed by other kinds of signs. If the AI is responding to or “answering” a question, it is not the question we ask, but something like: “What is the statistically most likely next string of symbols given that one?” (Analogy: compare someone who actually understands a language with someone who has merely memorized the typical patterns of sounds or symbols in a language.) "

Source: Raspberry Pi Foundation
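
To make the quoted distinction concrete, here is a minimal, hypothetical sketch (in Python) of purely statistical text continuation: a toy model that counts which word follows which in a tiny made-up corpus and always emits the most frequent continuation. The corpus, the continue_text helper, and the prompt are illustrative assumptions only; systems like ChatGPT are vastly more sophisticated, but the quote's point carries over, since the program completes patterns of symbols without understanding any question.

# A toy "next-word predictor": it counts which word follows which in a tiny
# corpus and then always emits the most frequent continuation. Nothing in it
# represents meaning, questions, or truth; it only tracks which strings of
# symbols tend to follow which.
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, how often each possible next word follows it.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_text(prompt: str, length: int = 5) -> str:
    """Extend the prompt by repeatedly appending the statistically most
    common next word: pattern completion, not understanding."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the cat"))  # prints "the cat sat on the cat sat"

Run on the prompt "the cat", the program simply replays the most common word sequences from its corpus; nothing in it represents what a cat or a mat is, or what the user wanted to know.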

"For Weizenbaum, we cannot humanise AI because AI is irreducibly non-human. What you can do, however, is not make computers do (or mean) too much. We should never “substitute a computer system for a human function that involves interpersonal respect, understanding and love”, he wrote in Computer Power and Human Reason. Living well with computers would mean putting them in their proper place: as aides to calculation, never judgment."​​​​​

THE ENVIRONMENTAL COST OF AI IS SUBSTANTIAL.

"Tech firms have reported that the increasing energy demand from building and running these data centres is pushing up their global greenhouse gas (GHG) emissions. Microsoft, which has invested in ChatGPT maker OpenAI and has positioned generative AI tools at the heart of its product offering, announced in May 2024 that its CO2 emissions had risen nearly 30% since 2020 due to data centre expansion. Google’s 2023 GHG emissions were almost 50% higher than in 2019, largely due to the energy demand tied to data centres."

"After PJM tapped the company to build a 36-mile-long portion of the planned power lines for $392 million, FirstEnergy announced in February that the company is abandoning a 2030 goal to significantly cut greenhouse gas emissions because the two plants are crucial to maintaining grid reliability."

"While a mundane search query finds existing data from the Internet, she says, applications like AI Overviews must create entirely new information; Luccioni’s team has estimated it costs about 30 times as much energy to generate text versus simply extracting it from a source."

"While electricity demands of data centers may be getting the most attention in research literature, the amount of water consumed by these facilities has environmental impacts, as well. Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt hour of energy a data center consumes, it would need two liters of water for cooling, says Bashir."

PERSONAL PRIVACY CONCERNS

ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn’t yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there’s no doctor-patient confidentiality when your doc is an AI.

"For consumers, this means the era of “fair” pricing is over. The price you see is the price the algorithm thinks you’ll accept, not a universal rate."

"In 2019, algorithms from both companies were included in a federal study of more than 100 facial recognition systems that found they were biased, falsely identifying African American and Asian faces 10 times to 100 times more than Caucasian faces."

"For example, generative AI tools trained with data scraped from the internet may memorize personal information about people, as well as relational data about their family and friends. This data helps enable spear-phishing—the deliberate targeting of people for purposes of identity theft or fraud. Already, bad actors are using AI voice cloning to impersonate people and then extort them over good old-fashioned phones."

"The CEO of OpenAI and the chair and co-founder of Tools for Humanity was responding to concerns about World Network, a crypto project that collects irises to authenticate humans online through an open-sourced orb-shaped scanner."

NATIONAL SECURITY CONCERNS

"Russia is waging wide-reaching information warfare with the Western world. A significant part of its ongoing efforts takes place on social media, which Russia has employed to spread disinformation and to interfere with the internal politics of other countries, targeting varied audiences, including the U.S. military."

" In fact, a separate experiment run by Michael Horowitz and Erik Lin-Greenberg found just this: participants were more willing to retaliate when an adversary’s AI-enabled weapon accidentally killed Americans than when only a human operator was involved, demonstrating greater forgiveness of human error than that of machines."

"What they'll use it for is behavior change campaigns, disinformation campaigns, for really targeted messaging as to what Western audiences like, what they do," he added."

"In Saturday’s hack on the Municipal Water Authority of Aliquippa outside of Pittsburgh, authorities say Cyber Av3ngers, which researchers believe has ties to the Iranian government, breached a digital control panel made by an Israeli-owned company, Unitronics, and disabled it. The group also took over the control panel’s digital display screen — which is used to automatically adjust water pressure — to make it read: “Every equipment ‘Made in Israel’ is Cyber Av3ngers legal target.”