AI and ethics: a delicate and continual dance
The dilemmas we face today with AI will only grow larger in the future
Quick summary:
There has been a lot of discussion about AI and ethics recently, and these conversations are critical
While a few ethical lines may be clear-cut, many answers remain unclear
Part of the challenge we face in debating AI ethics is seeing the full picture clearly
Novelist Kazuo Ishiguro raises the issue of the ethical treatment of AI by humans
It is not just Big Tech that must wrestle with ethics in AI; all organizations must
Drawing lines on how we use AI in our daily lives
This month, I have been writing about big-picture issues related to algorithms and artificial intelligence (AI). (If you missed any of these, you can catch them by visiting the Forestview archive.) At this point, in full transparency, I should share that I use AI to help write each article in Forestview. The topics and thoughts are mine, based on my personal experiences, conversations with experts, and reading books and other materials. However, I use Grammarly when writing each piece. Grammarly goes well beyond the traditional spell checking and grammar rules commonly found in applications like Microsoft Word or Google Docs. On top of these basic features, Grammarly uses AI to learn my writing style and offers real-time suggestions as I write to make my message clear and concise. I have worked with a number of amazing editors in the past on various publications. Grammarly is like having those individuals assist me with everything I write: e-mails, articles, and more. It is a powerful technology that relies on sophisticated algorithms trained on millions of writing samples: more writing than a single human could ever hope to read in a lifetime. Grammarly is a “cheat code” for my writing: I put words on the screen, and it helps create a slightly better version of my work.
I shared in an earlier edition of Forestview that my son Andrew is a high school senior currently applying to colleges. As I mentioned in that article, a lot has changed about the process in the 30 years since I went off to study economics at university. One part that has stood out to me is the importance of an applicant’s personal statement: their exposition of who they are, what they hope to achieve, and how they believe a particular college will help them attain their goals. This essay is the only opportunity in the entire college application for admissions professionals to hear directly from prospective students. The rest of the application consists mostly of quantitative factors such as GPA, test scores, and the number of AP classes taken.
Parents want to help their children get into the best colleges possible. There may be a temptation to help out on the personal statement by offering advice and support - and perhaps to edit and rewrite parts for their children. The strong guidance from admissions experts is to resist this urge: it is important that the personal statement reflects the student's true voice. Counselors can tell the difference between the writing of a 17-year-old high school student and that of a 48-year-old government attorney. If there is doubt, admissions staff can ask the student for writing samples, such as school papers, and compare them. Colleges also have access to software that checks for plagiarism. Both the comparison of writing styles and the plagiarism check rely on AI to assist admissions counselors. But what if Andrew used my Grammarly account to help with school assignments? What about his personal statement for his college application? Is this an acceptable use of AI, or is it crossing an ethical line?
Should we worry about ethics in how we treat AI?
The question of whether it is ethical for students to use Grammarly to assist with their personal essays for college applications is a small example of the many ethical dilemmas related to AI. These debates will only grow in importance as AI becomes a more integral part of our lives. However, Nobel laureate Kazuo Ishiguro raises the inverse question, the ethical treatment of AI by humans, as one of several issues explored in his most recent novel, Klara and the Sun. The book is set at an indeterminate time in the future, and the story is told from the perspective of Klara, an artificial friend, or AF for short.
We start in the store where Klara and other AFs wait for customers to come in and browse. Klara is solar-powered and has a mystical belief about the healing powers of the sun because it is the source of her energy. Eventually, Klara is bought by a mother for her daughter Josie, who has a severe illness. Klara will become Josie’s loyal companion at home.
SPOILER ALERT: Skip over the rest of this section if you plan to read the novel
Klara works hard to learn as much as possible about Josie to best serve her needs. In Ishiguro’s tale, AFs commonly provide human companionship and emotional support. However, attitudes toward AFs are mixed because AI has displaced most workers in the economy, creating huge inequalities between those who are “lifted,” or privileged, and those who are not. Over the course of the story, readers learn that Klara is not intended merely as a companion for Josie: she is being asked to learn Josie so thoroughly that she could become a replacement for her should Josie die from her illness. The idea is that Klara would serve as a substitute for Josie in the lives of her mother and friends and help them better cope with her death, but Josie’s father (who is separated from her mother) raises concerns about this plan. Ultimately, Josie is miraculously healed (possibly with the help of Klara and the sun’s healing power), and Klara becomes expendable.
In Klara and the Sun, AI is even more deeply ingrained in the daily lives of humans, to the degree that traditional roles in work and family life are quite different. This future world leads to heightened social tensions, as some groups organize and fight against the replacement of people by technology. Toward the end of the novel, when it is clear that Josie will survive and thrive into adulthood, her mother, Chrissie, and Henry Capaldi, the scientist behind the plan to have Klara continue Josie, debate what to do with Klara. Capaldi tries to convince Klara to give up her artificial body for research so that her “brain” can be reverse-engineered. He says to Klara:
"…there’s growing and widespread concern about AFs right now. People saying how you’ve become too clever. They’re afraid because they can’t follow what’s going on inside any more. They can see what you do. They accept that your decisions, your recommendations, are sound and dependable, almost always correct. But they don’t like not knowing how you arrive at them. That’s where it comes from, this backlash, this prejudice.”
Chrissie, who has lived with Klara daily for years, disagrees strongly and argues that Klara deserves a dignified ending in the form of a “slow fade”. Chrissie says to Capaldi, “Find some other black boxes to pry open. Leave our Klara be.” In the end, Klara is moved out to the Yard, where her store manager locates her and asks whether she went to a good family. Klara replies that she is grateful to have gone to a home that appreciated her as an AF, unlike the many homes that mistreated AFs by throwing them around or deliberately walking them across uneven ground to make them fall. The ultimate question of whether Klara was treated ethically is left open-ended: what obligations do humans have toward AFs?
All firms need to have an ethical approach to AI
It is likely premature to spend time and energy debating the ethical obligations that humans owe to AI, but thinking about them helps us reconsider the reverse: what ethical obligations should AI have toward humans? The technology powering AI is not inherently good or bad; whether its outcomes are good or bad depends on how society chooses to apply it. While some clearly problematic examples exist, such as Microsoft’s doomed chatbot Tay, I would argue that most ethical questions related to AI are not clear-cut. The question of whether Andrew should rely on Grammarly to help with his personal statement for college is but one of many scenarios that will arise in business and in life. Moreover, societal attitudes will evolve over time as we struggle to keep up with the dizzying pace of change.
While questions about the capabilities, use cases, and deployment of AI are mostly technical ones that require IT and data science experts, questions about AI ethics should be considered by a diverse group of people within and outside an organization. Since ethical questions involve complex issues that do not have a clear answer, it is essential to explore all aspects of these decisions and hear from a range of voices. By committing to a formal ethical approach when adopting AI, firms can avoid the harm and reputational risk that come with failing to fully consider how their use of AI affects customers, employees, and the general public. Many organizations have an ethics committee or department that considers the ethics of certain business practices, often related to HR issues such as an apparent conflict of interest. Should they also consider similar questions when AI is involved? Legislators, regulators, and industry groups also have a responsibility to develop and refine best practices for how AI is designed, deployed, and governed. However, firms should not wait for external best practices to emerge; they should act today.
What are your thoughts about questions of ethics in AI? Is there a particular example that stood out to you as worth contemplating? What steps do you take to address ethical questions related to AI in your workplace? In your personal life? Do you think that ethics committees should consider AI-related issues as they do HR and other ethical matters? Why or why not? If not, how should an alternative group be formed with the diversity of perspectives and interests required to thoroughly consider these types of questions?