Food Detector App

This is a Swift app that demonstrates the use of Apple's Vision framework together with Core ML.

Using Turi Create, an image classification model is trained on food images of different types (transfer learning by fine-tuning SqueezeNet). The trained .mlmodel is then used in the Swift iOS project to recognise pictures of food taken with an iPhone.
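A minimal sketch of that training step with Turi Create is shown below; the SFrame paths, column name and parameter values are illustrative assumptions, not the actual contents of train.py.

```python
import turicreate as tc

# Load the prepared SFrames of food images (assumed to contain an image column
# and a 'label' column with the 11 class names).
train_data = tc.SFrame('food11.sframe')
valid_data = tc.SFrame('food11_valid.sframe')

# Transfer learning: fine-tune the pre-trained SqueezeNet v1.1 on the food images.
model = tc.image_classifier.create(
    train_data,
    target='label',
    model='squeezenet_v1.1',
    validation_set=valid_data,
    max_iterations=100,
)

# Export the trained classifier as a Core ML model for use in the iOS app.
model.export_coreml('Food11.mlmodel')
```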

Swift topics covered

Note: For simplicity, this app does not resize or otherwise transform images before classification, although doing so is generally good practice in ML projects.

Data

  • Food 11 dataset from EPFL
    (For simplicity, images for this project were downloaded from https://www.kaggle.com/trolukovich/food11-image-dataset, where the original Food 11 data has been organized by category)

  • 16,643 food images across 11 classes:

    • Bread
    • Dairy product
    • Dessert
    • Egg
    • Fried food
    • Meat
    • Noodles-Pasta
    • Rice
    • Seafood
    • Soup
    • Vegetable-Fruit
  • 3 splits in the dataset: Training, Validation & Test.
    (The training and validation splits have been combined in this project for training the model, with 5% of the combined data then held out for validation)

Due to size limits, the data is not included in this repository. The two Python files create_sframe.py and train.py contain the Turi Create code used to generate the Food11.mlmodel used in the iOS app.
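As a rough sketch of that data-preparation step (the directory layout and 95/5 split follow the description above, but the exact paths and names are assumptions rather than the actual contents of create_sframe.py):

```python
import turicreate as tc

# Load images from the folder-organised Kaggle download; with_path=True keeps
# each file's path so the class label can be taken from its parent folder name.
train = tc.image_analysis.load_images('food11/training', with_path=True)
valid = tc.image_analysis.load_images('food11/validation', with_path=True)

# Combine the original training and validation splits into a single SFrame.
data = train.append(valid)
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# Hold out 5% of the combined data for validation during training.
train_data, valid_data = data.random_split(0.95, seed=42)

# Save the SFrames for the training step sketched earlier.
train_data.save('food11.sframe')
valid_data.save('food11_valid.sframe')
```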

Demo

(demo GIF)

Tutorials

  • Introductory guide for Turi Create
  • Transfer learning tutorials here and here
  • Core ML with Swift tutorials here and here