When my 19-year-old son returned home recently after completing his freshman college finals, he told me that he was relieved he only had one exam. His other classes required him to submit a final paper rather than take a test.
“Easy: ChatGPT,” he said.
My hand met my forehead and I sighed, speechless, as he continued playing FIFA 2020. I began to reflect on artificial intelligence (AI), AI ethics, and their potential to shape our education system and society in ways we’ve only begun to imagine.
I have always been a huge advocate of technology, especially EdTech. Yet in that moment, I began to realize the importance of weighing the implications of AI ethics.
For the first time in my life, I feel frightened by technology and the potential unknowns that lie ahead. My son created an entire final essay within a matter of seconds using ChatGPT. He later explained to me that he didn’t plagiarize. He used ChatGPT to write a template that he massaged into his own words.
Still, I question how much he learned. And if AI did the bulk of the work for my son in this instance, how many other students used AI to complete the assignment? What could the future of AI and education look like? Will elementary, middle school, and high school students eventually depend on AI? What about medical students? Law students? Engineering majors? If technology can produce the necessary research within seconds and generate a formatted research paper, where will it end?
Over the past few years, AI has made significant strides, contributing in big ways to innovations such as self-driving cars, medical diagnoses, and chatbots for customer service. Yet as we continue to push the boundaries of AI development, we need to consider AI ethics and the implications of these technologies. AI for education is one of many potential use cases worth considering.
Some Ethical Considerations For AI Developers
First and foremost, the use of AI raises concerns about the displacement of human labor. As machines become increasingly capable of performing tasks that were once the sole domain of human workers, we risk exacerbating existing inequalities in the labor market, and potentially creating new ones. Although AI might bring economic benefits, we must ask ourselves: At what cost will we pursue this technology, in light of potential effects on human workers and society at large?
Another AI ethics concern is the use of AI in decision-making. Already, we are seeing AI-for-education applications that decide which words and research end up in student papers. AI can analyze vast amounts of data and make predictions with great accuracy. But it is important to remember that algorithms are only as unbiased as the data on which they are trained. If we rely too heavily on AI to make decisions that have significant consequences for individuals and society, we risk perpetuating and amplifying existing biases and injustices.
AI can also be used for surveillance, which raises serious concerns about privacy and civil liberties. As we collect and analyze more data about individuals and communities, we might create a world in which our every move is monitored and recorded. This has the potential to erode trust and create a chilling effect on free speech and dissent.
Further, the act of creating machines that can think and act autonomously raises ethical questions about the nature of consciousness and the value of life. As AI becomes more advanced, we might one day face the question of whether machines are alive and whether we have specific moral responsibilities toward them.
Finally, and most importantly, in my mind: How will AI affect trust in anything we hear, read, or see moving forward? AI can already create human-like text, lifelike photos, and uncannily real videos. Every news story could potentially be construed as a conspiracy theory.
For example, if the president were to announce an attack by a foreign nation, could we believe it? Or could a bogus AI program generate a fake yet convincing video making such claims? Moving forward, what percentage of news stories will be falsified and passed off as authentic? Can we trust that all sources of news we see are independently verifiable and honest? What’s to keep AI from developing news platforms and delivering false reports? Sound AI ethics could play a significant role in maintaining trust across societies.
We are at a point where telling the difference between fact and fiction depends solely on whether the source delivering the information can be trusted. This is as true for AI in education as for any other industry.
As a society, we might soon need to decide whether to accept and embrace AI without guardrails, or whether instead to draw a line in the sand that prohibits the development of AI beyond a defined ethical boundary. The topic of the morality of AI raises more questions than answers.
Although AI has the potential to bring significant benefits to society, we must be mindful of the ethical implications of its development and use. As we continue to push the boundaries of AI, including AI for education, we need to ask ourselves what kind of world we want to create. We must decide how to ensure that our technological advancements align with our values and respect the dignity and autonomy of all individuals.
Note to Reader: This blog was written in part by Matthew Koop, M.Ed., University of Toledo (in plain font) and ChatGPT technology (in bold font – with light editing for clarity and flow). Comments, questions, and feedback are welcomed.