November 22, 2024 | Vol. 53, Issue 22

The only bilingual Chinese-English Newspaper in New England

The Future of ChatGPT Regulation

If you’ve been on the internet recently, you have most likely heard of ChatGPT – a new AI natural language processing tool that has gained significant public traction over the past several months. The chatbot was developed by OpenAI, a startup co-founded in 2015 by Elon Musk and Sam Altman. Released on November 30, 2022, ChatGPT is capable of everything from holding human-like conversations to generating a resume to writing software on its own. ChatGPT can do this because it is trained on large amounts of human-written text, such as books, websites, and user-submitted content.

With the release of its newest iteration, GPT-4, the chatbot has hit significant knowledge benchmarks. According to OpenAI, GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, while its previous iteration scored around the bottom 10%. GPT-4 can also accept inputs in the form of images along with text. ChatGPT is currently free for the public to use on OpenAI’s website.

Given the chatbot’s versatility at generating human-like content, questions are being raised about its potential to influence politics, test the limits of academic honesty, and disrupt the human labor market. With estimates that over 100 million people used ChatGPT in the two months after its launch, it is important to understand how ChatGPT could shape our world.

Earlier this month, Jared Mumm, a professor at Texas A&M University–Commerce, went viral after accusing his students of submitting papers written by ChatGPT. According to a May 18th report by Pranshu Verma at the Washington Post, the professor told his students in an email that he had asked ChatGPT to check whether the program itself had written each of the submitted essays. For a significant portion of his class, the program responded that it had indeed written the essays, and those students received no credit. Several students came forward to defend themselves, insisting their essays were human-written. In fact, ChatGPT is not capable of accurately checking whether text is AI-generated, and even OpenAI’s own AI detection tool incorrectly flags human-written text as AI-generated 9% of the time. Because AI-generated essays are nearly impossible to distinguish from human-written ones, ChatGPT raises significant concerns both about academic dishonesty and about false accusations of AI use in schools.

Another concern about the development of AI technologies such as ChatGPT lies in which jobs could be replaced by AI in the future. In April, reporters Aaron Mok and Jacob Zinkula from Insider compiled a list of jobs that experts expect ChatGPT to eventually replace in coming years. In some of these industries, such as customer service, automation is already expanding and could eventually overtake the human-run part of the industry. Other jobs, such as legal assistants, software developers, and media roles, may also soon be on the chopping block as AI continues to improve in its ability to recall information and generate original, creative content. While such technologies may reduce overall labor costs for various industries, it is important that they be regulated to perform these jobs both competently and ethically. For many workers, concerns about the future of their jobs abound.

The potential of ChatGPT to impact democratic systems also cannot be overlooked. In January, data and security experts Nathan Sanders and Bruce Schneier wrote an opinion article in the New York Times expressing concern that ChatGPT may have the potential to exploit points of power within the American democratic system. ChatGPT may eventually be able to identify people in governmental positions of power and target them with humanlike messaging in order to lobby support for specific political goals. Because the cost of such an endeavor would be extraordinarily high, this type of lobbying technique would chiefly benefit the rich and powerful at the expense of democracy. While social media sites have long dealt with filtering out automated comments, the humanlike responses of an AI chatbot pose new challenges, as this text may be indistinguishable from human-written messages.

That said, ChatGPT is not built without safeguards. The program is trained on a carefully curated dataset and is designed to refuse to respond on controversial or unethical topics. This curation is not without cost, however. On January 18th, Billy Perrigo of Time magazine reported on the underpaid labor behind ChatGPT’s ethical safeguards. Kenyan workers, paid less than two dollars an hour, tirelessly combed through ChatGPT’s training data to keep the program from learning from hate speech. Despite OpenAI’s claims that ChatGPT was created with ethics in mind, the means of achieving those goals appear to rest on exploited, underpaid laborers.

At a May 16th Congressional hearing, OpenAI CEO Sam Altman spoke about the issues facing ChatGPT in the future. Altman stressed the need for governmental regulation of the AI industry, pushing for “a new agency that licenses any effort above a certain scale of capabilities, and can take that license away.” With regard to potential job loss as a result of this technology, Altman made clear it was his “greatest nightmare,” urging people to view ChatGPT as simply a tool to assist with human jobs. Altman also mentioned his concerns about how ChatGPT may disseminate misinformation, saying he feared “the more general ability of these models to manipulate, to persuade, to provide… one-on-one interactive disinformation.” While Altman took a strong stance toward regulation at the Congressional hearing, days later he warned that OpenAI may pull ChatGPT out of Europe in response to the AI Act, which would impose strict regulations on AI technologies.

In the face of the potential risks of AI technology like ChatGPT, it is more important than ever for the public to stay informed. As the technology evolves, so must the public policy that regulates it. These policies will dictate our future relationship with AI and how it shapes our lives.
