Artificial intelligence (AI) has dominated the news and is rapidly becoming a part of the lives of many college students.

Rivka Levitan ’09 is an associate professor of computer and information science and an expert in spoken language processing, discourse, and human-computer interaction. Her research on entrainment, the phenomenon by which conversational partners start to talk more like each other, earned her an NSF CAREER Award.

With AI becoming more and more prevalent both inside and outside of classrooms, we asked her about these new technological trends and their benefits and potential problems.

AI is currently being used in Amazon’s Alexa, Apple’s Siri, text-to-speech, and other similar technologies. How are these evolving and expanding to include even more human characteristics, and what are the implications?

Everyone who uses these technologies can see that they’ve improved unbelievably even over the past year. The accuracy of automatic speech recognition keeps going up, and text-to-speech sounds more fluent and natural. As these technologies get better, they’re replacing human workers such as call center employees, personal assistants, and voice actors. A stranger implication is the concern, recently expressed by OpenAI, that users may become emotionally attached to their chatbot when interacting with it in voice mode. New technologies have always rendered some jobs obsolete, but no one ever fell in love with their loom.

Another worrying implication is that these advances are uneven, leaving people who aren’t native speakers of Standard American English behind. As automatic speech recognition becomes more accurate, it’s being adopted in more and more customer service contexts and workplaces. But it isn’t becoming more accurate for everyone. For example, voice-to-text on my Google phone doesn’t recognize “Rivka” or “Levitan.”

Students are using technology more and more for their classes. How do you see it being useful, and what advice would you give students who use AI tools such as ChatGPT?

ChatGPT can be very useful as a timesaving device, and I use it a lot with prompts like “suggest five ways to phrase this” or “generate a polite negative response to this e-mail.” I also sometimes use GitHub Copilot, which generates code.

Like everyone who uses these technologies, students should be aware that they’re very often wrong. ChatGPT will invent quotes, dates, citations, and statistics. Everyone has someone in their life who will say absolutely anything as long as it sounds good. They might be fun to talk to, but you wouldn’t hand in an essay they wrote for you, or repeat anything you heard from them, without verifying it with a more reliable source.

Even more importantly, when you write an essay, the point isn’t the five paragraphs but the ability to understand a topic, take a stance, and convey it in an organized, coherent way. Even as technology gets better and better, I don’t think anyone will be able to reach the top of their field without the ability to do that. On the other hand, I think that once someone has that understanding, stance, and organized argument, most of the time it won’t matter if they generate the actual five paragraphs using AI. But until you have that ability, don’t cheat yourself of the opportunity to acquire it during the years you’re fortunate enough to be getting graded feedback on your work. This all applies to programming as well.

What courses are being offered at Brooklyn College for students to expand their knowledge about AI?

In addition to Data Analysis (CISC 3225), the Computer and Information Science Department offers classes on Artificial Intelligence (CISC 3410) and Machine Learning (CISC 3440). These are advanced electives, and to quote a student comment that I like to use to set expectations: “AI is not about cool stuff like many people think. Especially with this professor it’s a nightmare. You better know Calc 2, Advanced Statistics, Discrete Structures, and Algorithms like your favorite song you sing in the shower.”