Combining common sense rules and machine learning to understand object manipulation
| Field | Value |
|---|---|
| Authors | |
| Corporate author | |
| Document type | Article |
| Published | University of Szeged, Institute of Informatics, Szeged, 2019 |
| Series | Acta Cybernetica, Vol. 24, No. 1 |
| Keywords | Computer science |
| Subject headings | |
| DOI | 10.14232/actacyb.24.1.2019.11 |
| Online access | http://acta.bibl.u-szeged.hu/59233 |
| Abstract | Automatic situation understanding in videos has improved remarkably in recent years. However, state-of-the-art image processing methods still have considerable shortcomings: they usually require training data for each object class present and may have high false positive or false negative rates, making them impractical for general applications. We study a case with a limited goal in a narrow context and argue about the complexity of the general problem. We propose to solve this problem by including common sense rules and by exploiting various state-of-the-art deep neural networks (DNNs) as the detectors of the conditions of those rules. We deal with the manipulation of unknown objects at a remote table. Two action types are to be detected, ‘picking up an object from the table’ and ‘putting an object onto the table’, and due to remote monitoring we consider monocular observation. We quantitatively evaluate the performance of the system on manually annotated video segments and present precision and recall scores. We also discuss issues of machine reasoning. We conclude that the proposed neural-symbolic approach a) diminishes the required size of training data and b) enables new applications where labeled data are difficult or expensive to obtain. |
| Pages | 157-172 |
| ISSN | 0324-721X |
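
To make the rule-plus-detector idea in the abstract concrete, the sketch below shows how per-frame DNN detector outputs could act as the conditions of common sense rules for the two action types. Everything in it (the `FrameFacts` structure, the particular detector outputs, and the two rules) is a hypothetical illustration assumed for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of the neural-symbolic pattern described in the abstract:
# per-frame DNN detectors supply truth values for the conditions of
# hand-written common sense rules. All names here are illustrative
# assumptions, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class FrameFacts:
    """Symbolic facts extracted from one frame by DNN detectors
    (e.g. a hand detector and an object detector over the table region)."""
    hand_over_table: bool   # hand detector fires inside the table region
    objects_on_table: int   # number of object detections on the table

def detect_events(frames: list[FrameFacts]) -> list[tuple[int, str]]:
    """Apply common sense rules to consecutive frame facts.
    Rule 1: hand was over the table and an object disappeared -> pick up.
    Rule 2: hand was over the table and an object appeared    -> put down."""
    events = []
    for t in range(1, len(frames)):
        prev, curr = frames[t - 1], frames[t]
        if prev.hand_over_table:
            if curr.objects_on_table < prev.objects_on_table:
                events.append((t, "picking up an object from the table"))
            elif curr.objects_on_table > prev.objects_on_table:
                events.append((t, "putting an object onto the table"))
    return events

# Example: the object count drops while a hand is over the table,
# so the pick-up rule fires at frame 2.
facts = [FrameFacts(False, 2), FrameFacts(True, 2), FrameFacts(True, 1)]
print(detect_events(facts))  # [(2, 'picking up an object from the table')]
```

Because the rules operate on detector outputs rather than raw pixels, no training data specific to the manipulated objects is needed, which is the point the abstract makes about reduced training-data requirements.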