
Algorithms, AI, and First-Year Academic Skills

What isn't AI? AI is not human.

"ChatGPT is good enough at doing what it’s designed to do, predicting text continuation based on statistical patterns in the huge corpus of textual data it is fed, to give readers a strong impression of having certain personal properties, and specifically, something like understanding or at least a kind of sensitivity to the meanings of terms in natural language. But this is also the reason why it is not likely to be as reliable or trustworthy as it appears: it is not actually tracking the meaning of our words, or the logic of our inferences, reasoning, and commonsense thinking.

When I answer someone in a conversation, I do so by understanding what they are asking and responding to that—I don’t consider the likelihood of that string of signs being followed by other kinds of signs. If the AI is responding to or “answering” a question, it is not the question we ask, but something like: “What is the statistically most likely next string of symbols given that one?” (Analogy: compare someone who actually understands a language with someone who has merely memorized the typical patterns of sounds or symbols in a language.)"


Source: Raspberry Pi Foundation
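To make the quoted point concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that picks the "statistically most likely next" word by counting patterns in a tiny corpus. This is a teaching simplification, not how ChatGPT is actually implemented (it uses large neural networks rather than a lookup table), but the task is the same kind of statistical continuation the quote describes.

    from collections import Counter, defaultdict

    # Toy corpus: the "statistical patterns" here are just bigram counts.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent continuation of `word`, or None.

        The model has no notion of meaning; it only tracks which symbols
        tend to follow which, exactly the limitation the quote points to.
        """
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # -> 'cat', chosen by frequency, not understanding

The model will fluently continue any pattern it has seen often and fail on anything it has not, because it tracks symbol frequencies rather than meaning.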

"For Weizenbaum, we cannot humanise AI because AI is irreducibly non-human. What you can do, however, is not make computers do (or mean) too much. We should never “substitute a computer system for a human function that involves interpersonal respect, understanding and love”, he wrote in Computer Power and Human Reason. Living well with computers would mean putting them in their proper place: as aides to calculation, never judgment."

Commonwealth | AI and Personhood

"A lot of our technology we judge according to its utility—if it’s functioning well and delivers what we want and value. My worry is that we could let those criteria contaminate our ability to recognize people’s intrinsic dignity. We must remember that dignity is not dependent on performance or utility. Noreen mentioned social media, where young people, in particular, have to be able to project an attractive image of themselves—one that’s not necessarily true, but that will command the attention of others. I think we have to rediscover that sense of the intrinsic worth and value of a person that’s not simply down to their functionality or usefulness."

"Moreover, through a close attention to human flourishing, a virtue approach tends to foster richer reflection on specific (often intrinsic) goods that technological change can jeopardize but that are not easily named or dealt with through the other approaches. Because of its emphasis on character development through proper socialization and communal moral instruction, virtue ethics forces us to ask how AI might alter our social practices and our very conception of the good."

​​​​​​​"Rather than checking and managing lay persons anthropomorphic tendency it tends to support it. But to the extent that the tendency to anthropomorphize shapes how people behave toward the anthropomorphized entity, such anthropomorphism has ethical consequences. First, perceiving AIs as humanlike entails considering them as moral agents, and their actions as the result of an autonomous decision-making process (Waytz et al. Citation2010), thereby having a normative impact that they should not have."

Bias in AI

"In Sora’s world, everyone is good-looking. Pilots, CEOs, and college professors are men, while flight attendants, receptionists, and childcare workers are women. Disabled people are wheelchair users, interracial relationships are tricky to generate, and fat people don’t run."

AI Development Has Exploited the Most Vulnerable

"Mophat Okinyi, a quality-assurance analyst on Mathenge’s team, is still dealing with the fallout. The repeated exposure to explicit text, he said, led to insomnia, anxiety, depression, and panic attacks. Okinyi’s wife saw him change, he said; not long after, she left him. “However much I feel good seeing ChatGPT become famous and being used by many people globally,” Okinyi said, “making it safe destroyed my family. It destroyed my mental health. As we speak, I’m still struggling with trauma.”

​​​​​​​"The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks."