NSF-Funded Project at Brown University Trains AI to See More Like Humans
In a groundbreaking project supported by the U.S. National Science Foundation (NSF), Brown University researchers are reimagining how artificial intelligence models learn to perceive the world — by teaching them to see like humans.
At the center of this innovative effort is Click Me, an interactive online game that crowdsources human visual intuition. Players strategically click on parts of images they find most informative, helping train AI models to focus on the same features people rely on for image recognition.
The goal: bridge the widening gap between AI perception and human vision, especially as image recognition systems grow more powerful but remain prone to mistakes that humans rarely make — such as misidentifying a partially obscured stop sign or mislabeling an image of a dog in sunglasses.
The project integrates insights from neuroscience, psychology, and machine learning. Researchers have developed a “neural harmonization” method to align AI decision-making with the human clicks gathered in Click Me. By doing so, AI systems are guided to categorize images based on features people instinctively prioritize.
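The article does not publish the details of the harmonization method, but the idea of nudging a model's attention toward human click maps can be illustrated with a toy sketch. Here, `harmonization_loss`, `normalize`, and the choice of a mean-squared-error alignment term are all hypothetical illustrations, not the researchers' actual formulation: the usual task loss is augmented with a penalty that grows when the model's saliency map diverges from the crowdsourced click map.

```python
import numpy as np

def normalize(m):
    # Scale a saliency or click map into [0, 1] so the two maps are comparable.
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def harmonization_loss(task_loss, model_saliency, human_clicks, weight=1.0):
    """Hypothetical harmonization-style objective: the ordinary task loss
    plus a penalty for saliency maps that disagree with human click maps."""
    alignment = np.mean((normalize(model_saliency) - normalize(human_clicks)) ** 2)
    return task_loss + weight * alignment

# Toy 4x4 maps: the model attends to the top-left corner,
# while human players clicked the bottom-right corner.
model_map = np.zeros((4, 4)); model_map[0, 0] = 1.0
click_map = np.zeros((4, 4)); click_map[3, 3] = 1.0

loss = harmonization_loss(task_loss=0.5, model_saliency=model_map, human_clicks=click_map)
```

When the two maps agree, the penalty vanishes and the loss reduces to the task loss alone; disagreement raises the loss, pushing training toward the features people clicked.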
Public participation has been a key strength of the project, with thousands of players contributing clicks and the game generating millions of interactions across social media. NSF funding has also supported a new computational framework that trains AI not just to match human decisions, but also to mirror human response times — enabling more interpretable, human-like reasoning.
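The article likewise leaves the response-time framework unspecified, but one common proxy idea can be sketched: treat the model's output uncertainty as a stand-in for how long a decision "takes," and penalize mismatch with measured human response times. Everything here is an assumption for illustration — `predicted_rt`, the entropy proxy, and the constants `a` and `b` are hypothetical, not the project's actual model.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def predicted_rt(logits, a=1.0, b=0.2):
    """Hypothetical proxy: higher output entropy (more uncertainty)
    maps to a slower predicted response time."""
    p = softmax(logits)
    entropy = -np.sum(p * np.log(p + 1e-12))
    return a * entropy + b

def rt_loss(logits, human_rt):
    # Penalty encouraging the model's uncertainty to track human response times.
    return (predicted_rt(logits) - human_rt) ** 2

confident = np.array([5.0, 0.0, 0.0])   # easy image: humans respond quickly
ambiguous = np.array([1.0, 1.0, 1.0])   # hard image: humans respond slowly
```

Under this proxy, an ambiguous image yields a longer predicted response time than a confident one, so minimizing `rt_loss` ties the model's internal uncertainty to human timing data.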
Applications for this work are far-reaching, with potential benefits in healthcare, autonomous vehicles, education, and beyond. By aligning AI with human cognitive patterns, the research could lead to safer, more trustworthy technologies.