College of Health and Human Performance

Investigating the Ethics of AI

by Manny Rea

The rapid emergence of artificial intelligence has confronted all of UF’s colleges with a new era of learning. But how can students and faculty stay on the same page about the ways the technology will affect their fields of study?

Delores James, Ph.D., is an associate professor in the Department of Health Education & Behavior and a member of the UF Teaching Innovations Committee. An offshoot of the UF Center for Teaching Excellence, the committee is dedicated to finding new approaches to higher education on campus. Its efforts draw on input from faculty, instructional designers and administrators across the university. The group is now diving into AI concepts, knowledge James believes all incoming students will arrive with in the next few years.

“The freshman class in four years will not look the same as the freshman class of today,” she predicted. “They will come in with this AI knowledge from high school, middle school and even elementary school.”

James hopes to keep her fellow faculty up to speed on an often daunting subject. She faced the same feelings and challenges as a UF doctoral student conducting Ethical, Legal and Social Implications (ELSI) research under a Human Genome Project grant sponsored by the U.S. Department of Energy.

Before the DNA mapping and sequencing of the Human Genome Project began, researchers from the UF College of Medicine and Morehouse School of Medicine examined the project’s potential societal impact. The famous project aimed to lay out the base pairs that make up human DNA and, in turn, the exact genetic sequences that inform how the body functions. ELSI research examined how knowledge of the genetic roots of disease and health issues could lead to discrimination against individuals carrying that information in their genes.

“I was initially intimidated by the coding, mapping, and the nitty gritty biological data,” James recalled about the Human Genome Project. But her work in ELSI research showed her that science can mean more than just information. The social implications of its use are something anyone can recognize and learn about.

“Technology and science are not necessarily value-neutral because people make the decision of what questions to ask, how to use the data, and what policies or laws should be instituted,” James said. She offered some example questions raised by the project: Who funds this research? What is their interest in it? Which groups will be impacted? And will it be weaponized against certain groups?

The ability to determine one’s genetic information could allow groups in positions of power to deepen the health disparities of disenfranchised people. The burgeoning of AI in all fields raises the same social concerns. AI systems trained on internet data can discriminate when minority groups are underrepresented in that data.

Whether learning racist and coded language or targeting girls with eating disorders, the technology must be properly guided to avoid the pitfalls that have made recent news headlines. For example, the U.S. Department of Justice found that Pattern, its algorithm for predicting recidivism among prisoners, often incorrectly estimated the risks of Black, Hispanic and Asian inmates.

Last year, tech companies including Microsoft and Amazon barred law enforcement from using their facial recognition services because the technology could misclassify the gender and darker skin tones of individuals, creating room for abuse of minority groups by police. This is why the Teaching Innovations Committee is working to bridge the AI knowledge gap for faculty and students regardless of their technological literacy.

“You [students] are digital natives,” James explained. “Us faculty members, we are digital immigrants. There is a multigenerational function to tech and AI, and we’re going to need each other.” The goal is to make faculty comfortable discussing AI topics and how the technology fits into their areas of study.

Recently, the committee sent a university-wide survey to faculty asking about their familiarity with AI. It is now creating an online AI pressbook to help faculty, graduate students and postdocs better understand AI and how to incorporate its building blocks into their curricula.

Politics and policies will soon be steeped in AI, and the first step is keeping educators informed about how it will affect their fields of study.