Semantic place understanding for human–robot coexistence—toward intelligent workplaces
Main Authors: Rosa, S; Patanè, A; Lu, C; Trigoni, N
Format: Journal article
Published: IEEE, 2018
author | Rosa, S; Patanè, A; Lu, C; Trigoni, N |
collection | OXFORD |
description | Recent introductions of robots to everyday scenarios have revealed unprecedented opportunities for collaboration and social interaction between robots and people. To date, however, such interactions are hampered by a significant challenge: robots lack a semantic understanding of their environment. Even simple requirements, such as “a robot should always be in the kitchen when a person is there,” are difficult to implement without prior training. In this paper, we advocate that robot–people coexistence can be leveraged to enhance the semantic understanding of the shared environment and improve situation awareness. We propose a probabilistic framework that combines human activity sensor data generated by smart wearables with low-level localization data generated by robots. Based on this low-level information, and leveraging colocation events between a user and a robot, the framework can reason about two types of semantic information: first, semantic maps, i.e., the utility of each room, and second, space-usage semantics, i.e., tracking humans and robots through rooms of different utilities. The proposed system relies on two-way sharing of information between the robot and the user. In the first phase, user activities indicative of room utility are inferred from wearable devices and shared with the robot, enabling it to gradually build a semantic map of the environment. In the second phase, via colocation events, the robot teaches the user device to recognize the type of room where they are colocated. Over time, robot and user become increasingly independent and capable of semantic scene understanding. |
format | Journal article |
id | oxford-uuid:28e7145e-bd9c-4fb3-8103-b1f12ebc036c |
institution | University of Oxford |
publishDate | 2018 |
publisher | IEEE |
record_format | dspace |
title | Semantic place understanding for human–robot coexistence—toward intelligent workplaces |