The city is quiet at night, the only sound a faint rumble from the bus’ engine, though even that is drowned out by the music blasting from James’ headphones. Looking out the window, the city’s sights slowly drift in and out of view. Suddenly, the bus stops and James looks up, his cheek red from resting on his palm. This is his stop. He stands up, his backpack tossed over one shoulder and his basketball wedged under the other arm. He lazily shuffles down the aisle, still sore from tonight’s game. Stepping off the bus, James inhales the crisp evening air, soon cut by the exhaust of the bus as it rumbles away down the road.
He returns home at a quarter after nine and throws together a quick meal for himself. His mother is working a late shift at the hospital tonight, and his father is on a work trip in Phoenix. Walking to his room, he sighs wearily. His history teacher assigned a paper about the Industrial Revolution due tomorrow, but James hasn’t been able to work on it all day – even though the class has been studying the topic for a week now, ample time to understand the subject matter.
So James opens up his laptop, types chatgpt.com into his address bar, and starts a new chat with the AI. He thinks for a moment before typing ‘Write me a 300-word essay about the industrial revolution’ and hitting enter. The chatbot churns for a moment before vomiting out an essay that James skims over and decides is satisfactory. He pastes the text into a document and uploads it, satisfied that he won’t have to stay up all night. Plus, who’d care if he used AI for a short paper?
Part I: Money Makes The World Go Round
This story about James is fabricated, but the situation is real. The students of yesteryear plagiarized Wikipedia or some obscure website that teachers surely wouldn’t look into, maybe even putting in the work to Frankenstein a paper together from various sources, bursting at the seams with mismatched writing styles and opinions. But now ChatGPT will do all of that, but better (or so it seems) – the writing is cohesive, the tone is consistent, and it takes less time and effort! However, there are many issues AI struggles with that aren’t being solved, and that carelessness hurts the consumers of AI: the students, workers, and experimenters who are genuinely interested in utilizing the technology in beneficial ways.
Section I: The Problem is the Producers
I am by no means here to argue that the concept of AI is bad; rather, I’m here to discuss how the implementation of these technologies has proven messy and a net negative for our society. This may all change soon, but, for the time being, models like OpenAI’s GPT-4o (and o1) and Google’s Gemini are released to the public unfinished and riddled with problems, all for the same reason: greed. In his book Taming Silicon Valley, author Gary Marcus recounts his personal experience with artificial intelligence and how tech giants realized the potential profits of releasing AI models as soon as possible.
“Greed has been a big factor, and the technology we have right now is premature, oversold, and problematic; it’s not the best possible AI we could envision, and yet it’s been raced out the door. Developed carelessly, AI could easily lead to disaster,” he writes. Marcus is an expert on artificial intelligence who founded multiple successful AI start-ups and even testified before the Senate Judiciary Committee on AI safety, saying, “I come as a scientist, as someone who has founded AI companies, and as someone who genuinely loves AI – but who is increasingly worried.”
And he should be worried, as should everyone. Profit is causing developers of artificial intelligence to lose sight of what AI is really about, worrying instead about driving their stock prices up. And an even bigger problem – it’s working. After markets closed on December 5th, 2023, shares of Google’s parent company Alphabet (NASDAQ: GOOG) traded at $132.39. The next day, Google’s DeepMind development team unveiled Gemini, a “chance to make AI helpful for everyone, everywhere in the world,” according to Google CEO Sundar Pichai. But the real help came from investors, who propelled the company to an all-time peak of $192.66 per share the following July. Even now, on October 17th, 2024, shares are trading at a modest $162.51, falling partly due to the court ruling that deemed Google a monopoly, but also due to other market-related factors. Below, you can see the substantial 45% price increase in Alphabet Inc’s stock price.
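For the skeptical, that 45% figure follows directly from the two share prices quoted above – a quick sanity check, nothing more:

```python
# Percent change in Alphabet's share price, using the figures cited above.
low_close = 132.39   # December 5, 2023 close, in USD
july_peak = 192.66   # all-time peak the following July, in USD

pct_change = (july_peak - low_close) / low_close * 100
print(f"{pct_change:.1f}%")  # about 45.5%
```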
Section II: Putting the Profit in Nonprofit
Before getting back to the ethics of AI, you probably want to know why I didn’t use OpenAI as the stock market example above. After all, aren’t they the biggest name in Large Language Models (LLMs) since ChatGPT was released in November 2022? LLMs are chatbots – the most famous being ChatGPT – trained on a dataset of language from across all media, which they can use (along with the web now, too) to answer queries from a user. And while OpenAI is indeed one of the most gargantuan Silicon Valley tech companies, valued at over 150 billion dollars, it is a private company (meaning its shares can’t be publicly traded like Google stock), something that has remained true since its days as a start-up way back in 2015. Then, it was a nonprofit organization dedicated to researching safe, smart, and – dare I say – open AI, but the company has since been restructured into a (possibly intentionally) confusing, sketchy mess of for-profit and nonprofit jargon.
But here’s a brief way of explaining it: in 2019, OpenAI became an LP, or limited partnership, which blends aspects of for-profit and nonprofit organizations alike. It could receive venture capital but was still overseen by its nonprofit board. Thus, the for-profit subsidiary that released products to consumers could make money off ChatGPT Plus subscriptions, enterprise licenses, and, when DALL-E 2 was still a thing, image credits. However, because the subsidiary (OpenAI Global, LLC) is a capped-profit company, investors’ returns cannot exceed 100 times their investment. To muddy the waters even further, Microsoft is also tied to OpenAI Global, LLC. While it doesn’t have a formal stake in the company, it did invest by giving OpenAI Microsoft Azure cloud credits, the official Microsoft Blog noting that “Azure will power all OpenAI workloads across research, products and API services.” How much are all of those credits worth? An eye-watering 10 billion dollars’ worth was invested into the capped-profit subsidiary – but that doesn’t mean Microsoft doesn’t want the money back. The Verge’s Tom Warren reports that Microsoft will “receive 75 percent of OpenAI’s profits until it secures its investment return and a 49 percent stake in the company [after that].”
Source: OpenAI
So, OpenAI structured the company in such a way that it could still be a research organization while also raising billions of dollars from investors and even more from subscribers to ChatGPT Plus, a 20-dollar-a-month service that nets you access to DALL-E 3 and o1. It is possible, albeit incredibly difficult, to argue that there is a shred of good intent behind OpenAI’s structuring. It is much easier to say that the prospect of making billions of dollars led to this change.
Now that you know more about how companies are using artificial intelligence as a means to drive up profits, you need to know why what Gary Marcus calls “Responsible AI” is important and how, right now, such artificial intelligence doesn’t exist. As he writes, “We can’t realistically expect those who hope to get rich from AI are going to have the interests of the rest of us at heart.”
Part II: Playing With Fire
There is no stopping artificial intelligence. While that sentence sounds dystopian, it’s the truth. OpenAI, Google, Meta, and lesser-known companies like Anthropic are pushing AI as the solution to all our problems and strife, lauding it as a miracle worker that can do no wrong. It isn’t. AI, in its current (crude) form, is known for spreading misinformation, and many of the most widespread and viral mishaps these LLMs have produced were quickly patched out. For example, asking Google “How many rocks should I eat per day” used to return an AI Overview suggesting at least one small rock be ingested, even though the article it pulled from was from the well-known satirical website The Onion.
No AI Overview in sight – but the debacle is still ingrained in the cultural zeitgeist of our digital world.
Or, when a user looked up how to get their cheese to stay on pizza better, the overview suggested adding ⅛ of a cup of glue to the sauce. That answer came from the popular forum website Reddit, where a user had sarcastically put forth the option. Humans can usually discern satire and sarcasm from reality; artificial intelligence can’t. It takes everything at face value, which can lead to potentially disastrous results. And since Google struck a deal with Reddit, it can now use all of Reddit’s data to train its models. While this could be a good opportunity to teach LLMs the difference between real and fake, between serious content and comedy, I fear the current trajectory of these models will only confuse them further, leading to an even greater outpouring of bad – even hazardous – advice and answers.
Plus, this post is over 10 years old – talk about a timely piece!
Section I: AI is Only Accelerating the Degradation of Services
Google, once seen as an all-knowing beacon of truth, is now a treacherous battlefield of mediocre results – and that’s if you can dodge the sponsored content, the material engineered to flood your search results, and the poor-quality articles and recipes that use SEO (Search Engine Optimization) to rise to the top of the page and get you to click, only to bombard you with ads and pop-ups. Combined with the purchase of Reddit’s data, this leaves a mix of untrustworthy, low-effort content presented to you as if it were the crème de la crème, carefully selected by an algorithm.
And that’s the kicker – there is no careful algorithm sifting through a mountain of meaningless garbage to find the answers hidden within. The AI is just there to condense all the slop into an inescapable overview at the top of the page. Another point against the overview is how slow it is. Google prides itself on how fast it searches millions of results, going so far as to put that metric at the top of the search page. But the AI Overview is different, making you wait a solid second or more (while blocking the rest of the content), a delay not found when searching without it. When I looked up “what is the Sumerian writing system called,” the AI Overview took an average of 1.66 seconds to produce the answer, cuneiform, while the search engine itself took an average of 0.22 seconds and found nearly half a million results. So not only does the overview push content optimized for the search engine, it takes over seven times longer to do so.
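The “over seven times” claim follows directly from the two averages I measured:

```python
# Ratio of AI Overview latency to plain-search latency (my measured averages).
overview_s = 1.66  # average seconds for the AI Overview to answer
search_s = 0.22    # average seconds for plain search results

print(round(overview_s / search_s, 1))  # roughly 7.5x slower
```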
Section II: An Untrustworthy Source
Many teachers have banned students from using these overviews to answer questions and do research, as many do with Wikipedia – there are sources, but you don’t know what they are. If you want a legitimate answer to your question, look at the primary sources themselves, and be wary of what artificial intelligence does and knows. Large Language Models don’t understand what they’re saying. Anything they read or write is split into “tokens,” meaning that instead of digesting or creating a whole piece of information, the model handles many smaller fragments, which makes it harder to pick up context clues. For example, many LLMs, including GPT-4o (which, mind you, came out in May 2024), can’t count how many times the letter ‘r’ appears in the word ‘strawberry,’ which an article by TechCrunch attributes to “the AI [knowing] that the tokens ‘straw’ and ‘berry’ make up [the word] ‘strawberry,’” but not the individual letters.
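A toy sketch makes the gap concrete. The ‘straw’/‘berry’ split mirrors the TechCrunch example, but the token IDs and mini-vocabulary below are made up purely for illustration – real tokenizers work differently:

```python
# Ordinary code counts characters directly -- trivial.
def count_letter(word: str, letter: str) -> int:
    return word.count(letter)

# An LLM never sees the string "strawberry"; it sees opaque token IDs.
toy_vocab = {"straw": 1012, "berry": 2047}  # hypothetical IDs
tokens = [toy_vocab[t] for t in ("straw", "berry")]

print(count_letter("strawberry", "r"))  # character view: 3
print(tokens)  # the model's view -- the individual letters are gone
```

Since the model only manipulates those opaque IDs, a question about individual letters asks for information the input no longer carries directly.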
Another source that is banned, and even more widely distrusted than Google’s AI Overview? ChatGPT. Ever since its public release in November 2022, a rift has emerged between two sides: the optimists who believe that AI will revolutionize the tech industry, and those who believe the chatbot is nothing more than a glitzy cornucopia of plagiarism and misinformation. While many people fall on either side of this debate, there are also plenty, such as myself, who fall somewhere in between. While it is true that LLMs like ChatGPT are wont to ‘hallucinate,’ a term for the fabrication of information, I see a future where AI can streamline menial tasks like data entry so humans can focus on what they’re best at: creative and innovative work. However, companies that create LLMs, like OpenAI and Google, aren’t following that path – they’re trying to make as much money as possible by forcing AI to be creative, attempting to enamor the public with subpar performances by machines that only know what has already been made, not what is to come.
Additionally, what chatbots do know is often a hodgepodge of random words thrown together – a total lie the model convinces itself (and hopes to convince you) is true. As I said before, hallucinations are when LLMs fabricate information or give blatantly false answers to questions. As the New York Times reports, “a new start-up called Vectara, founded by former Google employees” found that “chatbots invent information at least 3 percent of the time — and as high as 27 percent.” The lower rates usually occur when the AI is asked to summarize information it already has in front of it. When asked to create information, the AI must pull from across the data set it was trained on, which for many LLMs includes a portion of the internet – and the internet has plenty of misinformation circulating. AI doesn’t know that. It’ll spout misinformation verbatim, proudly proclaiming it as the truth. That’s why we need to be extremely careful when treating these systems as a reputable source of information. If you do use AI, make sure it lists sources, and check them out for yourself, lest you further the spread of misinformation.
Part III: AI Isn’t Responsible, But You Can Be
Companies have made it very difficult for us, as consumers, to use AI ethically. LLMs plagiarize and hallucinate, amounting to little more than unreliable word spouters. Image generators like DALL-E plagiarize even more (but that’s a topic for another day). So what can we do about it? As someone who frequently tinkers with AI models of all varieties, here’s what I’d say:
Step One: Don’t Pretend You’re The AI (You’re Better Than That)
Don’t be like James in the opening story; what I wrote at the beginning is a good example of how not to use AI. Human beings, as I said earlier, are creative and innovative, while AI is not. You have the power to craft an epic narrative, the knowledge to write a compelling essay, and the life experience to make others feel emotions. Some people may be okay with being disingenuous and putting their name on something they didn’t make, but you shouldn’t be.
You, whether you think so or not, are hugely gifted as a human and can do what no other species can: you have reasoning skills, speak a complex language, and are inherently introspective. AI is capable of none of those. It doesn’t speak in words; it speaks in tokens. And as a machine, it needs no introspection or reasoning. You have real intelligence. Telling AI to write something and passing it off as your own isn’t just dishonest – it’s an insult to yourself. James wouldn’t just risk a cheating penalty; he’d also be inadvertently telling himself that he’s no better than an algorithm that lies and copies with blatant disregard for what it’s doing.
Step Two: Supplement Using AI, Don’t Complement
Now that we’ve established our first rule – stay original – what should we use AI for? I genuinely believe that AI is a powerful tool that can supplement your work: an additional resource for select cases. It shouldn’t be used to complement work, which is to say, be ingrained into your work as a necessity. Artificial intelligence can crunch analytics at blazing speeds and could operate assembly-line machinery in factories, for example, freeing humans to work in more critical positions that are more fulfilling and safer.
A good example of supplementing a workflow with AI – whether for school, work, or personal use – actually made headlines a short while ago. On October 15, 2024, ABC News reported that the parents of a student at Hingham High School in Hingham, Massachusetts sued the school after he was punished for allegedly using AI to cheat on a history paper. According to his parents, he merely “used AI to assist with research for a history paper, but not to write the paper itself.” If true, this highlights two key points.
Firstly, the student only used AI as a research tool. ChatGPT, while not great at generating text or answering questions, is surprisingly good at surfing the web. I’ve often experimented with asking it to find things from across the internet, and it does so much quicker and cleaner than Google’s aforementioned mess of a search engine. OpenAI is even experimenting with a feature called “SearchGPT,” which seeks to compete directly with the likes of Google. But using AI this way, as the parents said, is not writing the paper. I see no harm in utilizing AI to help find sources or inspiration.
Secondly, there need to be better rules surrounding AI usage. His mother said, “I’d also like them to put in place an AI policy that makes sense,” a sentiment shared widely throughout the world – not just in schools. Gary Marcus argues in his book that the tech world is currently governed by unelected leaders who have their own companies’ best interests at heart. While government systems can (in)famously take a long time to adapt to the current world climate, steps are being taken to figure out how technology should work in our lives. This lawsuit is one example, as is the Department of Justice’s antitrust case against Google, which culminated in an August 2024 ruling that the company illegally monopolized internet search – the monopoly that let it degrade its search engine, as I mentioned in Part II, all to maximize profits. If adequate rules are enforced everywhere, from schools to countries, we can make sure AI works for everybody the way it’s supposed to.
Step Three: This is a Tool, Use It
The first step, I’ll admit, was very negative and painted AI in a harsh light. I think what I said was true, though, and it’s important to speak the truth. Then, in the second step, I encouraged the use of AI while outlining the need for better rules around it. Now, finally, I want to be an optimist. While artificial intelligence today is made for profit, it can still be a genuinely useful tool if you treat it as one. The ways it’s being showcased today – writing a social media post, generating an image of something that isn’t real – just aren’t what we want or need it for.
Many people might enjoy tinkering with those features, but as a plaything, not a legitimate resource. What we need is AI that can streamline our workflows, whether for personal use, work, or school – and that only happens if a model is trained to do those things correctly and quickly. Many just aren’t optimized for that. Plenty, however, can handle the aforementioned analytics and math-based processes much quicker than humans, or find sources for what you’re trying to write. All you have to do is look for what suits your needs, then try it. If it doesn’t work, that’s too bad. But if it does, you do yourself a service by offloading menial tasks. Do what you do best, and let AI do the rest.
Why?
So, after all is said, what should be done? I want AI to work, and I want AI to be ethical. But it’s hard to be an optimist when greed is such an overwhelming factor in why AI exists as it does. Don’t forget the three steps I outlined: they maintain integrity, encourage healthy skepticism about the AI of the present, and promote responsible use of – and optimism about – the AI of the future. Being curious also means being responsible. But you can only do so much alone. As a collective, we have the power to push lawmakers to hold corporations accountable for the AI they’ve created and to make sure that the digital landscape of the future is governed by the people, not profit-motivated corporations. So please, find out how AI could help you, and always be responsible about it. I want to see AI as a helpful tool, because it can be. But we must make that happen.
Important!
This article was written by Sophomore Luke Fann as a part of his Personal Project, which every 10th-grade student at an International Baccalaureate school participates in. To help Luke with his project, please fill out this form. Thank you.
Works Cited
Marcus, Gary F. Taming Silicon Valley: How We Can Ensure That AI Works for Us. MIT Press, 2024.
Metz, Cade. “Chatbots May ‘Hallucinate’ More Often Than Many Realize.” The New York Times, 6 November 2023, https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html. Accessed 17 October 2024.
Microsoft Corporate Blogs. “Microsoft and OpenAI extend partnership – The Official Microsoft Blog.” Microsoft Blog, 23 January 2023, https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/. Accessed 17 October 2024.
Moeller, Karla, and Moshe Blank. “Human Animal Differences.” Ask A Biologist, 12 May 2017, https://askabiologist.asu.edu/questions/human-animal-differences. Accessed 17 October 2024.
Oakley, Shawn. “Senate Remarks Gary Marcus 16 May 2023.” Senate Judiciary Committee, 16 May 2023, https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf. Accessed 17 October 2024.
Pichai, Sundar. “Introducing Gemini: Google’s most capable AI model yet.” The Keyword, 6 December 2023, https://blog.google/technology/ai/google-gemini-ai/#sundar-note. Accessed 17 October 2024.
Primack, Dan. “OpenAI’s next step: Consider going public via IPO.” Axios, 3 October 2024, https://www.axios.com/2024/10/03/openai-funding-ipo-public-future. Accessed 17 October 2024.
Reinstein, Julia. “Parents sue school in Massachusetts after son punished for using AI on paper.” ABC News, 15 October 2024, https://abcnews.go.com/US/parents-sue-school-massachusetts-after-son-punished-ai/story?id=114819025. Accessed 17 October 2024.
Silberling, Amanda. “Why AI can’t spell ‘strawberry.’” TechCrunch, 27 August 2024, https://techcrunch.com/2024/08/27/why-ai-cant-spell-strawberry/. Accessed 17 October 2024.
Warren, Tom. “Microsoft extends OpenAI partnership in a ‘multibillion dollar investment.’” The Verge, 23 January 2023, https://www.theverge.com/2023/1/23/23567448/microsoft-openai-partnership-extension-ai. Accessed 17 October 2024.
Willing, Nicole, et al. “How Does OpenAI Make Money? Revenue Model Explained.” Techopedia, 11 April 2024, https://www.techopedia.com/how-does-openai-make-money. Accessed 17 October 2024.

LUKE FANN
Editor-in-Chief Luke Fann is a junior at City and freelances for Rapid Growth Media's Voices of Youth program. He also attended Michigan State University's MIPA Summer Journalism Workshop, where he received the Sparty Award in Journalistic Storytelling and the Art of Storytelling. Additionally, he received an Award of Excellence in the Level Up: Leadership for Media program in 2025 and earned an honorable mention for his piece on AI and LLMs at the 2024 MIPA Spring Awards.
Luke began writing in 7th grade and became an editor by the following year. By his sophomore year, he was Managing Editor and then Editor-in-Chief. As for writing, he focuses on business and technology news, taking a deeper dive into topics rather than focusing solely on breaking news. He also covers personal interests, and his weekly editorials offer unique takes on timely issues.
If you're interested in writing for The City Voice, especially as a middle schooler or underclassman, reach out to Luke or attend a meeting. Journalism is a great way to express your passions. No matter your background, The City Voice wants to hear your voice.