Summary: | Food preparation is one of the essential tasks of daily life and involves a large number of physical interactions between hands, utensils, ingredients, etc. The fundamental unit of food preparation activity is the recipe, which describes the cooking process as a sequence of steps for making a dish. Following these steps can be a highly complicated process, requiring the coordination, monitoring, and execution of multiple tasks simultaneously. This work introduces a cooking assistance system powered by Computer Vision techniques that guides the user through the execution of a recipe and verifies its correct completion. Guidance is delivered via appropriate messages displayed on a panel specifically designed for the user. Throughout the process, the system validates the correctness of each step by (a) detecting and estimating the motion of the ingredients and utensils in the scene and (b) analyzing their spatial arrangement, i.e., where each one is located relative to the others. The system was evaluated both on its individual algorithmic steps and on the end-to-end execution of two recipes, with promising results.