As artificial intelligence continues to disrupt industries, the demand for systems that can interact more naturally with people has greatly increased. The problem is, creating such a system is no easy feat. During a keynote presentation at the 2022 Annual Conference on Neural Information Processing Systems, Juho Kim, an associate professor at the Korea Advanced Institute of Science & Technology (KAIST), laid out the four main challenges facing the development of new A.I.-powered tools.

In doing so, he explained how incorporating robust opportunities for feedback will allow developers to create systems that are even friendlier to us humans.

1. Bridging the accuracy gap.

According to Kim, one of the most difficult aspects of creating artificial intelligence is aligning its functionality with how users expect it to work, and bridging the gap between the user’s intention and the system’s output.

Kim says that most A.I. developers are focused on expanding their systems' datasets and models, but this can widen the accuracy gap: the system functions very well, but only for people who already know how to use it.

One solution? Incorporate human testers. By analyzing feedback from real people, developers can identify errors and unintended biases.
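Kim didn't present code, but as a hypothetical illustration of that kind of feedback analysis, a developer might tally tester outcomes by group to spot where the system quietly fails some users (all names and numbers here are invented for the sketch):

```python
from collections import defaultdict

def error_rates_by_group(feedback):
    """Aggregate tester feedback into per-group error rates.

    `feedback` is a list of (group, was_correct) pairs, e.g.
    ("novice", False) for a novice tester the system failed.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_correct in feedback:
        totals[group] += 1
        if not was_correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

feedback = [
    ("expert", True), ("expert", True), ("expert", False),
    ("novice", False), ("novice", False), ("novice", True),
]
rates = error_rates_by_group(feedback)
# A large gap between groups is exactly the kind of hidden bias
# that testing with real people can surface.
```

A system that scores well overall but much worse for novices exhibits the accuracy gap Kim describes.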

2. Incentivizing users to work with A.I.

Kim says, “Unfortunately, one of the most common patterns we see in Human-A.I. interaction is that humans quickly abandon A.I. systems because they aren’t getting any tangible value out of them.” One way developers can solve this issue, Kim says, is by prioritizing “co-learning” when creating their systems.

By giving ample opportunities for humans to provide feedback on A.I. systems, people can learn how to better use the system, and the A.I. can learn more about what its users actually want from it. Kim says this kind of two-way learning incentivizes people to stick with these systems.
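To make the two-way loop concrete, here is a minimal, hypothetical sketch (none of this comes from Kim's talk): the system explains each suggestion so the user learns how it works, while each user rating teaches the system what the user wants.

```python
class CoLearningAssistant:
    """Hypothetical sketch of two-way "co-learning" via simple ratings."""

    def __init__(self, options):
        # Start with no preference among suggestion styles.
        self.scores = {opt: 0.0 for opt in options}

    def suggest(self):
        # The system's side: pick the best-rated style and explain the
        # choice, so the user learns how the system actually works.
        best = max(self.scores, key=self.scores.get)
        why = f"chosen because its feedback score is {self.scores[best]:+.1f}"
        return best, why

    def feedback(self, option, liked):
        # The user's side: each rating teaches the system their preferences.
        self.scores[option] += 1.0 if liked else -1.0

assistant = CoLearningAssistant(["concise", "detailed"])
assistant.feedback("concise", liked=False)
assistant.feedback("detailed", liked=True)
suggestion, why = assistant.suggest()  # now favors "detailed"
```

Even this toy version shows the incentive Kim points to: the user's effort visibly improves the next suggestion, which gives them a reason to keep engaging.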

3. Considering social dynamics.

A major issue for today’s A.I. systems is understanding context. As an example, Kim described a system used by his students: Students enter a chatroom to have conversations with each other, and an A.I.-powered moderator watches their chat and can recommend questions to keep the conversation going.

While Kim says it may be tempting to have the A.I. moderator participate in the conversation directly, asking students questions itself rather than just recommending topics, doing so wouldn’t help the A.I. learn about the social dynamics of the conversation. Instead, by giving users the ability to accept or reject the A.I.’s recommendations, the system can learn which kinds of questions should be asked, and when.
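One plausible way to implement that accept/reject signal (this is a speculative sketch, not Kim's actual system) is to weight question categories and shift the weights with each decision:

```python
import random

class ModeratorRecommender:
    """Hypothetical sketch of a chatroom moderator that recommends a
    question category; accept/reject decisions reshape future picks."""

    def __init__(self, categories):
        self.weights = {c: 1.0 for c in categories}

    def recommend(self):
        # Sample a category in proportion to its learned weight.
        cats = list(self.weights)
        return random.choices(cats, weights=[self.weights[c] for c in cats])[0]

    def record(self, category, accepted):
        # Accepted recommendations become more likely; rejected ones less so.
        self.weights[category] *= 1.5 if accepted else 0.5
        self.weights[category] = max(self.weights[category], 0.05)

rec = ModeratorRecommender(["icebreaker", "follow-up", "topic-change"])
rec.record("follow-up", accepted=True)
rec.record("topic-change", accepted=False)
# "follow-up" is now three times as likely to be recommended
# as "topic-change".
```

The key design point matches Kim's: the human stays in the loop, and every accept or reject is a piece of social context the system could not have inferred on its own.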

4. Supporting sustainable engagement.

Kim says that A.I. tools are often used only once or twice by a given user, so it’s key to design experiences that can adapt over time to stay relevant to users’ needs. Kim described an experiment he ran with his class in which they developed an A.I. system that could edit the appearance of websites to make them more easily readable.

According to Kim, the class could’ve made a system in which the A.I. just automatically creates what it thinks is the perfect website design, but by giving users the ability to edit and tinker with the layout, the A.I. can learn more about what that specific user is looking for and provide a more personalized experience going forward.
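As a hypothetical take on that idea (the class's actual implementation wasn't described), the system could blend its stored defaults toward each user edit, a simple exponential moving average, so later suggestions drift toward that user's taste:

```python
def personalize(defaults, user_edit, rate=0.3):
    """Nudge stored layout defaults toward what the user actually chose.

    `rate` controls how strongly one edit pulls the defaults; keys the
    user didn't touch are left unchanged.
    """
    return {
        key: (1 - rate) * defaults[key] + rate * user_edit.get(key, defaults[key])
        for key in defaults
    }

defaults = {"font_size": 14.0, "line_width": 80.0}
# The user bumps the font size up; the system remembers the preference.
defaults = personalize(defaults, {"font_size": 18.0})
# font_size moves from 14.0 toward 18.0 (here, to 15.2);
# line_width stays at 80.0.
```

Letting users tinker, rather than imposing a single "perfect" design, is what turns a one-shot tool into the kind of adaptive, sustained experience Kim advocates.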

By Ben Sherry