Bio-inspired Machine Vision
I received an M.S. from the Graz University of Technology and a Ph.D. in Applied Mathematics from the Technical University of Vienna, Austria. I am a Research Scientist at the Center for Automation Research at the Institute for Advanced Computer Studies at UMD. I co-founded the Autonomy Robotics Cognition (ARC) Lab and co-lead the Perception and Robotics Group at UMD. I am the PI of an NSF-sponsored Science of Learning Center Network for Neuromorphic Engineering and co-organize the Neuromorphic Engineering and Cognition Workshop.
My research is in the areas of Computer Vision, Robotics, and Human Vision, focusing on biologically inspired solutions for active vision systems. I have modeled perception problems using tools from geometry, statistics, and signal processing, and developed software in the areas of multiple view geometry, motion, navigation, shape, texture, and action recognition. I have also combined computational modeling with psychophysical experiments to gain insights into human motion and low-level feature perception.
My current work is on robot vision in the following two areas:
1) Integrating perception, action, and high-level reasoning to interpret human manipulation actions with the ultimate goal of advancing collaborative robots and creating robots that visually learn from humans.
2) Motion processing for fast active robots (such as drones) using as input bio-inspired event-based sensors.
By integrating cognitive processes with perception and action execution, we investigate ways of structuring representations of events at multiple timescales that allow generalization of actions. The main application of this work is to create robots that visually learn from humans.
Dynamic vision sensors, because of their high temporal resolution, low latency, high dynamic range, and high compression, hold promise for autonomous robotics. We study the advantages of this data for the fundamental navigation processes of egomotion estimation, segmentation, and image motion.
Between low-level image processing and high-level reasoning are grouping mechanisms that implement Gestalt principles, such as closure or symmetry. We have developed new 2D image-processing operators and 3D operators that implement these principles for attention, segmentation, and recognition.
Active vision systems compute from video essential information about their environment's spatio-temporal geometry for navigation. In a series of studies, I investigated the recovery of 3D motion and scene structure from image motion and its implementation in efficient algorithms.
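As a purely illustrative sketch of this kind of computation (not the published algorithms): for a camera undergoing pure translation, every image flow vector points away from the focus of expansion (FOE), which encodes the direction of 3D translation. Collinearity of each flow vector with the ray from the FOE gives one linear equation per measurement, so the FOE can be recovered by least squares. The function name `focus_of_expansion` is mine, chosen for this example.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Estimate the focus of expansion from a purely translational flow field.

    points: (N, 2) image positions (x, y)
    flows:  (N, 2) flow vectors (ux, uy) measured at those positions
    """
    # Each flow vector (ux, uy) at (x, y) must be collinear with (x - fx, y - fy):
    #   uy * (x - fx) - ux * (y - fy) = 0
    # Rearranged into a linear system in the unknowns (fx, fy):
    #   uy * fx - ux * fy = uy * x - ux * y
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

With noisy flow or an additional rotational component the problem becomes harder; this sketch only shows the geometric core of the translational case.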
Taking advantage of the mathematical properties of fractal geometry, we designed texture descriptors that are invariant to changes in viewpoint, geometric deformation, and illumination, and which encode the essential structure of textures in a very low-dimensional representation.
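A basic building block behind fractal texture descriptors is the fractal (box-counting) dimension of a point set extracted from the image. The sketch below is a minimal, generic implementation of box counting, not the descriptors described above; the function name `box_counting_dimension` and the choice of box sizes are my own for illustration.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2D boolean point set.

    mask: square 2D boolean array (e.g., a thresholded texture measurement)
    """
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s  # crop so the image tiles evenly into s-by-s boxes
        # Count boxes of side s that contain at least one "on" pixel.
        boxes = mask[:m, :m].reshape(m // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # The dimension is the slope of log(count) versus log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

For a filled region the estimate approaches 2, for a thin curve it approaches 1; natural textures typically fall in between, which is what makes the dimension a useful, deformation-tolerant statistic.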
Optical illusions can provide insight into the mechanisms of vision. Using geometry and statistics, I have uncovered a number of principles explaining limitations in what we can recover from images; these principles explain various optical illusions and have been used to create new ones.
June/July 2023: I co-organize the 2023 Telluride Neuromorphic Cognition Engineering Workshop.
June 19, 2023: I co-organize the CVPR 2023 Workshop on Event-based Vision.
February 24, 2023: Our project "VAIolin - Music Education for All" was featured on the UMD homepage.
February 2023: UMD Grand Challenge Team award with Irina Muresanu for our project on an AI platform for violin pedagogy.
September 2022: Co-Investigator on an NIH award with Stephen Restaino (Sonosa): "Wearable ultrasound systems and software for assessment of obstructive sleep apnea."
June/July 2022: I co-organize the 2022 Telluride Neuromorphic Cognition Engineering Workshop and a project on "Cross-modality signals: auditory, visual and motor"
July 2, 2022: I co-organize a Forum on Future Directions of Neuromorphic Cognition Engineering.
July 2022: PI on a MIPS award: "Tissue identification for wearable ultrasound."
May 2022: Co-PI on a National Multiple Sclerosis Society grant with Daniel Harrison: "Development of a Convolutional Neural Network for MRI Prediction of Progression and Treatment Response in Progressive Forms of Multiple Sclerosis."
August 2021: Maryland Innovation Initiative award with Irina Muresanu: "Artificial Intelligence software for assessment of posture and form in violin instruction."
August 2020: NSF grant on "Accelerating Research on Neuromorphic Perception, Action, and Cognition." It will fund network activities, including fellowships, the Telluride Neuromorphic Cognition Workshop, and the NeuroPAC web portal.
February 2023: Our paper "Mid-Vision Feedback for Convolutional Neural Networks" was accepted at ICLR 2023.
February 2023: Two papers accepted at ICRA 2023:
"Efficient Fusion of Image Attributes: A New Approach to Visual Recognition" and
"NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction."
January 2023: Two papers accepted at AAAI Workshops 2023: "WorldGen: A Large Scale Generative Simulator" and
"TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact."
December 2022: I co-edited a special issue on "Brain-Inspired Hyperdimensional Computing: Algorithms, Models, and Architectures" in Frontiers in Neuroscience.
November 2022: Invited talk at the NUS Workshop "Machine Learning and Its Applications"
in Singapore.
September 2022: Invited talk at
NeuroAI in Seattle.
September 2022: Invited talk at the Online
MFI Event Sensor Fusion Workshop.
June 2022: Our paper "DiffPoseNet: Direct differentiable camera pose estimation"
was presented at CVPR 2022.
May 2022: Our dataset EVIMO2 was launched.
April 2022: I gave a colloquium at the GRASP Lab, UPenn, on "Bio-inspired Motion Analysis."
January 2022: Our paper "Gluing Neural Networks Symbolically Through Hyperdimensional Computing" was presented at IJCNN 2022.
January 2022: I contributed a chapter with M. Maynord on "Learning for action-based scene understanding" to the book "Advanced Methods and Deep Learning in Computer Vision."
Our work on Vision and Robotics has been sponsored by the following NSF grants in the Cyber-Physical Systems program.
Currently I serve the following journals and conferences:
email — fer@cfar.umd.edu
phone — (301) 405-1768
office — Iribe Building 4216
ADDRESS (for shipping)
Cornelia Fermüller
Computer Vision Lab, UMIACS
5109 Brendan Iribe Center for Computer Science and Engineering
8125 Paint Branch Dr, College Park, MD 20742