Food Computation

From Knoesis wiki

Motivation and Background

In recent years, people have become more aware of their food choices because of their impact on health and chronic disease. Consequently, the use of dietary assessment systems, most of which predict calorie information from food images, has increased. Several such systems have shown promising results in nudging users toward healthy eating habits, prompting a wide range of research in food computation. Currently, the following food computation projects are being carried out at AIISC.

Projects

1. Explainable Recommendation: A neuro-symbolic food recommendation system with deep learning models and knowledge graphs

In this work, we propose a food recommendation system (Figure-1) that employs an analyser to decide whether a food is advisable for the user and a reasoner that provides an explanation for the analyser's decisions. The recommendation system harnesses the generalization power of deep learning models and the abstraction power of knowledge graphs to analyze recipes. The knowledge graphs involved in this work are:

  1. Personalized health knowledge graph
  2. Disease specific knowledge graph
  3. Nutrition retention knowledge graph
  4. Cooking effects knowledge graph

Given a food image, the system will retrieve cooking instructions and extract cooking actions [2]. The ingredients and cooking actions will be analyzed with the knowledge graphs, and inferences will be drawn with respect to the individual's health condition and food preferences. We plan to extend the recommendation system to suggest alternate ingredients and cooking actions.
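The analyser/reasoner split can be sketched as follows. This is a minimal, hypothetical illustration: the knowledge-graph facts, ingredient names, and advisability rule are placeholders, not entries from the actual AIISC knowledge graphs.

```python
# Toy stand-in for a disease-specific knowledge graph: maps a health
# condition to ingredients that should be limited.
DISEASE_KG = {
    "type-1 diabetes": {"sugar", "honey"},
    "hypertension": {"salt"},
}

def analyse(ingredients, condition):
    """Analyser: decide whether the recipe is advisable for the user."""
    flagged = set(ingredients) & DISEASE_KG.get(condition, set())
    return len(flagged) == 0, flagged

def reason(advisable, flagged, condition):
    """Reasoner: produce a human-readable explanation for the decision."""
    if advisable:
        return f"Advisable: no ingredients conflict with {condition}."
    return f"Not advisable: {', '.join(sorted(flagged))} conflict(s) with {condition}."

advisable, flagged = analyse({"flour", "sugar", "butter"}, "type-1 diabetes")
print(reason(advisable, flagged, "type-1 diabetes"))
# → Not advisable: sugar conflict(s) with type-1 diabetes.
```

In the full system, the ingredient set would come from the retrieved cooking instructions and the rule base from the four knowledge graphs listed above.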

2. Ingredient Substitution: Identifying suitable ingredients for health conditions and food preferences

Food is a fundamental part of life, and personalizing dietary choices is crucial given the varying health conditions and food preferences of individuals. A significant aspect of this personalization involves adapting recipes to specific user needs, primarily through ingredient substitution. However, a challenge arises because one ingredient may have multiple substitutes depending on the context, and existing works have not adequately captured this variability. To address this, we have proposed and developed a comprehensive Knowledge Graph (KG) that extensively captures the contexts relevant to ingredient substitution. Our KG includes detailed information on 27,532 ingredients and features 40,110 substitution pairs, offering a rich dataset that greatly enhances the potential for precise, context-aware recommendations. The graph supports not only text-based searches but also image- and constraint-based searches, which are discussed in the results section of this document. The ability to query based on visual input and specific dietary constraints, such as gluten-free or vegan options, makes this KG an innovative tool for digital gastronomy and a significant advance in food personalization, demonstrating how sophisticated data integration and querying mechanisms can address complex dietary needs.
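A constraint-based substitution query can be sketched as below. The substitution pairs, dietary tags, and lookup logic are invented for illustration; the real KG stores far richer context than a flat dictionary.

```python
# Toy substitution pairs: ingredient → candidate substitutes.
SUBSTITUTIONS = {
    "wheat flour": ["almond flour", "rye flour", "rice flour"],
    "butter": ["margarine", "coconut oil"],
}

# Toy dietary tags for each candidate substitute.
TAGS = {
    "almond flour": {"gluten-free", "vegan"},
    "rye flour": {"vegan"},
    "rice flour": {"gluten-free", "vegan"},
    "margarine": {"vegan"},
    "coconut oil": {"gluten-free", "vegan"},
}

def substitutes(ingredient, constraints=frozenset()):
    """Return substitutes for `ingredient` satisfying every dietary constraint."""
    return [s for s in SUBSTITUTIONS.get(ingredient, [])
            if constraints <= TAGS.get(s, set())]

print(substitutes("wheat flour", {"gluten-free"}))
# → ['almond flour', 'rice flour']
```

An image-based search would first map the photo to an ingredient name before running the same constrained lookup.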






3. mDiabetes: An mHealth application to monitor and track carbohydrate intake

In this work, we developed a mobile health application to track the carbohydrate intake of type-1 diabetes patients. The user enters the food item name and its quantity in convenient units. The app queries the nutrition database and performs the computation needed to convert the user-entered volume into an estimate of the carbohydrates.
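The volume-to-carbohydrate computation amounts to two lookups and a unit conversion. The sketch below assumes the nutrition database stores grams of carbohydrate per 100 g of food plus a per-food cup-to-gram conversion; all values are illustrative, not entries from the app's actual database.

```python
# Toy nutrition database: grams of carbohydrate per 100 g of food.
CARBS_PER_100G = {"cooked rice": 28.0, "milk": 5.0}

# Toy density table: grams per one US cup of the food.
GRAMS_PER_CUP = {"cooked rice": 158.0, "milk": 244.0}

def estimate_carbs(food, quantity, unit="cup"):
    """Convert a user-entered volume to grams, then estimate carbohydrates (g)."""
    if unit != "cup":
        raise ValueError("only 'cup' is handled in this sketch")
    grams = quantity * GRAMS_PER_CUP[food]
    return grams * CARBS_PER_100G[food] / 100.0

print(round(estimate_carbs("cooked rice", 1.5), 1))  # carbohydrate estimate in grams
```

The real app supports multiple units, so the conversion table would be keyed by (food, unit) rather than assuming cups.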

Resources

  1. Wiki page
  2. App manual

The app is under development to include food-image-based carbohydrate estimation, which requires food-image-based volume estimation.





4. Ingredient Segmentation: Using generative models for dataset creation and ingredient segmentation

In order to perform food-image-based volume estimation, the food image needs to be segmented and the ingredients identified. We propose to train an ensemble model: (i) an object detection model to segment visible ingredients, and (ii) a visual transformer model to detect invisible ingredients. However, the lack of food segmentation datasets poses a challenge. To overcome this, we propose a novel pipeline that utilizes generative models to produce a labeled food segmentation dataset, which we use to train the ingredient segmentation model. A sample of the dataset generated using Diffusion Attentive Attribution Maps (DAAM) models is presented in Figure-2.
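How the ensemble's two outputs could be combined is sketched below. This is an assumption about the merge step, not the project's actual implementation: the detection model is taken to yield labeled masks for visible ingredients, while the transformer is taken to yield label-confidence scores for invisible ones; names and the 0.5 threshold are illustrative.

```python
def merge_predictions(detected, invisible, threshold=0.5):
    """Union visible detections with confidently predicted invisible ingredients."""
    ingredients = {d["label"] for d in detected}
    ingredients |= {label for label, score in invisible.items() if score >= threshold}
    return sorted(ingredients)

# Visible ingredients come with segmentation masks (toy 2x2 masks here).
visible = [{"label": "tomato", "mask": [[0, 1], [1, 1]]},
           {"label": "basil", "mask": [[1, 0], [0, 0]]}]

# Invisible ingredients (e.g. oil, dissolved sugar) come as confidence scores.
hidden = {"olive oil": 0.8, "sugar": 0.2}

print(merge_predictions(visible, hidden))
# → ['basil', 'olive oil', 'tomato']
```

Only the visible ingredients carry masks, so only they feed the downstream volume estimation; the invisible ones still matter for the carbohydrate estimate.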





5. Representation Learning: Cross-modal representation learning to retrieve cooking procedure from food images

To support the explainable recommendation system (Project-1) in retrieving cooking instructions from food images, we proposed a cross-modal retrieval system described in Figure 4. In this work, we leverage knowledge-infused clustering approaches to cluster similar recipes in the latent space [1]. Clustering similar recipes enables the retrieval of more accurate cooking procedures for a given food image. Currently, this network architecture is being enhanced and tested with transformer architectures.
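Once image and recipe embeddings live in a shared latent space, the retrieval step itself reduces to nearest-neighbour search. The sketch below shows that step only, with toy vectors standing in for learned embeddings; the clustering and the encoders are the contribution of the actual work and are not reproduced here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(image_embedding, recipe_embeddings):
    """Return the recipe id whose embedding is closest to the image embedding."""
    return max(recipe_embeddings,
               key=lambda rid: cosine(image_embedding, recipe_embeddings[rid]))

# Toy shared-space embeddings for two recipes and one query image.
recipes = {"pasta": [0.9, 0.1, 0.0], "curry": [0.1, 0.8, 0.3]}
print(retrieve([0.85, 0.2, 0.05], recipes))
# → pasta
```

Clustering similar recipes before retrieval narrows this search to the query's cluster, which is what improves the accuracy of the returned cooking procedure.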




Architecture of the Ki-Cook model that utilizes the procedural attribute of cooking to learn the representation


Team Members

Coordinated by - Revathy Venkatramanan
Advised By - Amit Sheth

AIISC Collaborators:

  1. Kaushik Roy
  2. Yuxin Zi
  3. Vedant Khandelwal
  4. Renjith Prasad
  5. Hynwood Kim
  6. Jinendra Malekar

External Collaborators
Students/Interns

  1. Kanak Raj (BIT Mesra)
  2. Ishan Rai (Amazon)
  3. Jayati Srivastava (Google)
  4. Dhruv Makwana (Ignitarium)
  5. Deeptansh (IIIT Hyderabad)
  6. Akshit (IIIT Hyderabad)

Professors/Leads

  1. Dr.Lisa Knight (Prisma Health - Endocrinologist)
  2. Dr. James Herbert (USC - Epidemiologist and Nutritionist)
  3. Dr. Ponnurangam Kumaraguru (Professor - IIIT Hyderabad)
  4. Victor Penev (Edamam - Industry collaborator)

Publications

  1. Venkataramanan, Revathy, Swati Padhee, Saini Rohan Rao, Ronak Kaoshik, Anirudh Sundara Rajan, and Amit Sheth. "Ki-Cook: Clustering Multimodal Cooking Representations through Knowledge-infused Learning." Frontiers in Big Data 6 (2023): 1200840.
  2. Venkataramanan, Revathy, Kaushik Roy, Kanak Raj, Renjith Prasad, Yuxin Zi, Vignesh Narayanan, and Amit Sheth. "Cook-Gen: Robust Generative Modeling of Cooking Actions from Recipes." arXiv preprint arXiv:2306.01805 (2023).
  3. Sheth, Amit, Manas Gaur, Kaushik Roy, Revathy Venkataraman, and Vedant Khandelwal. "Process knowledge-infused ai: Toward user-level explainability, interpretability, and safety." IEEE Internet Computing 26, no. 5 (2022): 76-84.