
A team of researchers at Penn State has developed a novel approach to training artificial intelligence (AI) systems inspired by the way children learn to identify objects and navigate their surroundings. By using information about spatial position, the researchers were able to train AI visual systems more efficiently, resulting in models that outperformed base models by up to 14.99%. This new machine learning approach, detailed in the journal Patterns, is based on developmental psychology and aims to improve AI systems’ ability to explore extreme environments or distant worlds.

Current AI training methods often rely on large sets of randomly shuffled photographs scraped from the internet, but the Penn State team took a different approach grounded in developmental psychology. They developed a new contrastive learning algorithm that teaches an AI system to detect visual patterns by deciding which pairs of images count as "positive pairs," that is, views that should be treated as depicting the same thing. By incorporating environmental data such as the camera's location, the system can match views of the same scene even when changes in camera position, lighting conditions, or other factors alter how the images look.
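To make the idea concrete, the following is a minimal sketch, not the authors' released code, of how logged camera positions might be used to choose positive pairs for a contrastive objective. The names `positions`, `distance_threshold`, and the InfoNCE-style loss are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def location_positive_pairs(positions, distance_threshold=0.5):
    """Pair frames whose recorded camera positions are close in space.

    positions: (N, 3) array of camera xyz coordinates logged per frame.
    Returns a list of (i, j) index pairs treated as positives.
    """
    pairs = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < distance_threshold:
                pairs.append((i, j))
    return pairs

def info_nce_style_loss(anchor, positive, negatives, temperature=0.1):
    """A standard contrastive (InfoNCE-style) loss on unit-normalized embeddings.

    anchor, positive: (D,) embeddings of a positive pair.
    negatives: (M, D) embeddings of frames taken far from the anchor.
    """
    pos_sim = np.exp(anchor @ positive / temperature)
    neg_sim = np.exp(negatives @ anchor / temperature).sum()
    return -np.log(pos_sim / (pos_sim + neg_sim))
```

The key design choice this illustrates is that positives come from spatial proximity in the environment rather than from artificial augmentations of a single internet photo.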

The researchers hypothesized that infants’ visual learning depends on their perception of location, leading them to create digital simulations of different environments in the ThreeDWorld platform. By manipulating and measuring the location of viewing cameras as if a child were exploring a house, the researchers were able to generate egocentric datasets with spatiotemporal information. These datasets, referred to as House14K, House100K, and Apartment14K, were used to train and test the new contrastive learning algorithm alongside base models.
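The datasets pair each egocentric frame with spatiotemporal metadata. The sketch below shows one hypothetical way such records might be organized; the field names and the neighbor rule are chosen for illustration and are not the published House14K, House100K, or Apartment14K schema, and no ThreeDWorld API calls are shown.

```python
import math
from dataclasses import dataclass

@dataclass
class EgocentricFrame:
    image_path: str    # rendered egocentric view from the simulated walkthrough
    timestamp: float   # seconds since the start of the walkthrough
    position: tuple    # (x, y, z) camera location in the simulated house
    rotation: tuple    # (pitch, yaw, roll) camera orientation
    room_label: str    # e.g. "kitchen", useful for downstream room recognition

def spatiotemporal_neighbors(frames, frame, radius=1.0, window=2.0):
    """Frames captured near the same place and time become candidate positives."""
    neighbors = []
    for other in frames:
        if other is frame:
            continue
        close_in_space = math.dist(frame.position, other.position) <= radius
        close_in_time = abs(frame.timestamp - other.timestamp) <= window
        if close_in_space and close_in_time:
            neighbors.append(other)
    return neighbors
```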

When the models were evaluated in these simulations, the research team found that models trained with their algorithm consistently outperformed the base models across a range of tasks. For example, on a task involving recognizing rooms in a virtual apartment, the augmented model achieved an average accuracy of 99.35%, a 14.99% improvement over the base model. The datasets created by the researchers are now available for other scientists to use in their own AI training projects through the website www.child-view.com.
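For context, a room-recognition check of this kind is ordinarily scored as plain classification accuracy: predict a room label for each held-out frame and report the fraction of correct predictions. The sketch below illustrates only that bookkeeping, with `encoder` and `classifier` as placeholder callables; it is not the published evaluation code.

```python
def room_recognition_accuracy(encoder, classifier, frames, true_labels):
    """Fraction of held-out frames whose predicted room matches the label."""
    correct = 0
    for frame, label in zip(frames, true_labels):
        features = encoder(frame)          # frozen learned representation
        predicted = classifier(features)   # e.g. a simple probe on top of it
        correct += int(predicted == label)
    return correct / len(frames)
```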

Overall, the research has implications for the future development of advanced AI systems capable of navigating and learning from new environments. The researchers suggest that their approach could be particularly useful for teams of autonomous robots with limited resources that need to adapt to unfamiliar surroundings. Moving forward, the team plans to refine the model to better leverage spatial information and to incorporate a wider range of environments. This interdisciplinary research was supported by the U.S. National Science Foundation and the Institute for Computational and Data Sciences at Penn State, with contributions from the departments of Psychology and Computer Science and Engineering.
