Implementing Machine Learning on an iOS Device Using Core ML, Swift and Neural Engine

Hello, Habr! In anticipation of the start of the advanced course "iOS Developer", we have traditionally prepared a translation of useful material for you.










Introduction



Core ML is a machine learning framework released by Apple at WWDC 2017.



It allows iOS developers to add real-time, personalized experiences to their applications with advanced on-device machine learning models that can run on the Neural Engine.



A11 Bionic Chip Overview





Inside the A11 Bionic chip:

Transistors: 4.3 billion

CPU: 6 ARM cores (64-bit): 2 high-performance (2.4 GHz) and 4 power-efficient

GPU: Apple-designed, 3 cores

Neural Engine: up to 600 billion operations per second


On September 12, 2017 Apple introduced the world to the A11 Bionic chip with the Neural Engine. This neural network hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. Developers can access the Neural Engine through the Core ML API.



Core ML optimizes on-device performance by utilizing CPU, GPU, and Neural Engine resources, minimizing memory and power consumption.



Running the model locally on the user's device eliminates the need for a network connection, which helps maintain the privacy of user data and improves the responsiveness of your application.


Core ML is the foundation for Apple's domain-specific frameworks and functionality. Core ML supports Vision for image analysis, Natural Language for text processing, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio.





Core ML API

We can easily automate the task of building a machine learning model: train and test it in a Playground, then integrate the resulting model file into our iOS project.



Newbie tip: Use a separate label for each class you want the model to distinguish in a classification task.






Core ML general block diagram



Okay. What are we going to create?



In this tutorial, I will show you how to build an image classifier model using Core ML, which can classify Orange and Strawberry images, and add this model to our iOS application.





Image classifier model.



Newbie tip: Image classification is a supervised learning problem in which we train on labeled data (in our case, the label comes from the folder that contains the image).








Minimum requirements:



  • Knowledge of the Swift language
  • iOS development basics
  • Understanding of object-oriented programming concepts




Software requirements:



  • Xcode 10 or later
  • iOS SDK 11.0+
  • macOS 10.13+




Data collection







When collecting data for image classification, follow Apple's guidelines.



  • Use at least 10 images per label; the more, the better.
  • Balance the number of images across labels; don't use 10 images for one class and hundreds for another.
  • If your dataset is small, you can use Create ML UI's Augmentation options: Crop, Rotate, Blur, Expose, Noise, Flip.
  • Images can be of any size and common format (JPEG, PNG), but a resolution of at least 299x299 pixels is recommended.
  • If possible, collect the images in the same way your app will capture them for prediction.
  • Provide variety: different angles, lighting conditions, and backgrounds.




After you have collected your Data Set, divide it into Train and Test sets and place them in the appropriate folders.







IMPORTANT NOTE: Make sure you distribute the images into the appropriate class folders inside both the train and test folders, because the folder name serves as the label for our images.






In our case, we have two folders (Orange and Strawberry), each containing the corresponding images.
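For illustration, the folder layout could look like the sketch below. The Dataset, Train, and Test names are only an example; what matters is that each class has its own subfolder, since the subfolder name becomes the label.

Dataset
├── Train
│   ├── Orange        (training images of oranges)
│   └── Strawberry    (training images of strawberries)
└── Test
    ├── Orange        (test images of oranges)
    └── Strawberry    (test images of strawberries)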



Model creation







Do not panic! Apple has made this much easier by automating the key steps.



With Core ML, you can use an already trained model to classify input data, or create your own. The Vision framework works together with Core ML to apply classification models to images and to preprocess them, making machine learning tasks simpler and more robust.



Just follow these steps.



STEP 1: Open Xcode.

STEP 2: Create a blank Swift Playground (choose the macOS platform, since CreateMLUI is only available there).

STEP 3: Remove the default generated code, add the following program, and run the Playground.



import CreateMLUI  // Create ML's interactive training UI (macOS Playgrounds only)

let builder = MLImageClassifierBuilder()  // create an image classifier builder
builder.showInLiveView()                  // show the model builder in Xcode's live view




Description:

Here we open the interface of the default model builder provided by Xcode.



STEP 4 : Drag the training sample folder to the training area.



Place the training sample folder in the training area indicated by the dotted lines.



Newbie tip: We can also give our model an arbitrary name by clicking the down arrow in the training area.




STEP 5: Xcode will automatically process the images and start the training process. By default, the model is trained for 10 iterations; how long that takes depends on your Mac's specs and the size of the dataset. You can watch the training progress in the Playground console.





Waiting while the model trains.



STEP 6: After training finishes, you can test your model by dragging and dropping the Test folder into the testing area. Xcode will automatically test your model and display the result.





As you can see, our model has accurately classified the images.



STEP 7 : Save your model.
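If you prefer to script these steps instead of using the drag-and-drop UI, the CreateML framework (macOS 10.14+) exposes the same workflow programmatically. The sketch below is a minimal, illustrative Playground variant; the /Users/you/Dataset paths and the FruitClassifier.mlmodel name are placeholders, not part of the original tutorial.

import CreateML
import Foundation

do {
    // Point CreateML at the labeled Train and Test folders (paths are placeholders).
    let trainURL = URL(fileURLWithPath: "/Users/you/Dataset/Train")
    let testURL  = URL(fileURLWithPath: "/Users/you/Dataset/Test")

    // Train an image classifier on the labeled directories (folder names are the labels).
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainURL))

    // Evaluate the model on the held-out Test set and print the error rate.
    let evaluation = classifier.evaluation(on: .labeledDirectories(at: testURL))
    print("Test classification error: \(evaluation.classificationError)")

    // Save the trained model so it can be dragged into the iOS project.
    try classifier.write(to: URL(fileURLWithPath: "/Users/you/FruitClassifier.mlmodel"))
} catch {
    print("Training failed: \(error)")
}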









Integration into the iOS application:



STEP 1: Open Xcode.

STEP 2: Create a Single View App iOS project.

STEP 3 : Open the project navigator.

STEP 4 : Drag the trained model to the project navigator.





Place your model in the project navigator.



STEP 5: Open Main.storyboard and create a simple interface as shown below; add IBOutlets and IBActions for the respective views.





Add UIImageView, UIButtons and UILabels.



STEP 6: Open the file ViewController.swift and add the following code as an extension.



extension ViewController: UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    func getimage() {
        let imagePicker = UIImagePickerController()   // create a UIImagePickerController
        imagePicker.delegate = self                   // set the delegate context
        imagePicker.sourceType = .photoLibrary        // pick images from the photo library
        imagePicker.allowsEditing = true              // let the user crop the picked image
        present(imagePicker, animated: true)          // present the picker
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let fimage = info[.editedImage] as! UIImage   // grab the edited image from the info dictionary
        fruitImageView.image = fimage                 // show it in the UIImageView
        dismiss(animated: true, completion: nil)      // dismiss the picker once the image is chosen
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)      // dismiss the picker if the user cancels
    }
}




Description: Here we create an extension of our ViewController class and implement UINavigationControllerDelegate and UIImagePickerControllerDelegate to display the UIImagePickerController when the user taps the Pick Image UIButton. Make sure you set the picker's delegate.







Steps related to accessing the Core ML model in an iOS application







STEP 1 : Make sure you have imported the following libraries.



import CoreML
import Vision




STEP 2: Create an instance of our Core ML model class.



let modelobj = ImageClassifier()
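Core ML chooses the compute hardware (CPU, GPU, Neural Engine) automatically. If you want to influence that choice, for example to keep everything on the CPU while debugging, you can pass an MLModelConfiguration (iOS 12+). This is an optional, hedged variant of the line above; it assumes your Xcode version generates a throwing init(configuration:) initializer for ImageClassifier, which older versions do not.

// Optional alternative: constrain where Core ML runs the model.
// Assumes Xcode generated init(configuration:) for ImageClassifier (newer Xcode versions).
let config = MLModelConfiguration()
config.computeUnits = .all                       // .cpuOnly and .cpuAndGPU are also available
let modelobj = try? ImageClassifier(configuration: config)   // note: modelobj is optional here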




STEP 3: To get Core ML to perform the classification, we must first create a request of type VNCoreMLRequest (VN stands for Vision).



var myrequest: VNCoreMLRequest?  // the VNCoreMLRequest we are about to create

// the completion handler is called once Core ML finishes the classification
myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
    self.handleResult(request: request, error: error)  // process the result
})




STEP 4: Make sure you crop the image so that it is compatible with the Core ML model.



myrequest!.imageCropAndScaleOption = .centerCrop



STEP 5: Place the above code in a custom function that returns a request object.







func mlrequest() -> VNCoreMLRequest {
    var myrequest: VNCoreMLRequest?
    let modelobj = ImageClassifier()
    do {
        // wrap the generated Core ML model in a Vision model
        let fruitmodel = try VNCoreMLModel(for: modelobj.model)
        myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
            self.handleResult(request: request, error: error)
        })
    } catch {
        print("Unable to create a request")
    }
    myrequest!.imageCropAndScaleOption = .centerCrop
    return myrequest!
}




STEP 6: We now need to convert our UIImage into a CIImage (CI stands for Core Image) so that it can be used as input to our Core ML model. This is easily done by passing the UIImage to the CIImage initializer.



guard let ciImage = CIImage(image: image) else {
    return
}




STEP 7: We can now process our VNCoreMLRequest by creating a request handler and passing it the ciImage.



let handler = VNImageRequestHandler(ciImage: ciImage)




STEP 8: The request is executed by calling the handler's perform() method and passing it an array containing the VNCoreMLRequest.



DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(ciImage: ciImage)
    do {
        try handler.perform([self.mlrequest()])
    } catch {
        print("Failed to get the description")
    }
}




Description: DispatchQueue is an object that manages the execution of tasks serially or concurrently on your app's main or background threads.



STEP 9: Place the above code in a custom function as shown below.



func excecuteRequest(image: UIImage) {
    // convert the UIImage into a CIImage; bail out if the conversion fails
    guard let ciImage = CIImage(image: image) else {
        return
    }
    // run the classification on a background queue
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(ciImage: ciImage)
        do {
            try handler.perform([self.mlrequest()])
        } catch {
            print("Failed to get the description")
        }
    }
}




STEP 10: Create a custom function named handleResult() that takes a VNRequest object and an error object as parameters. This function will be called when the VNCoreMLRequest completes.



func handleResult(request: VNRequest, error: Error?) {
    // the results come back as an array of VNClassificationObservation
    if let classificationresult = request.results as? [VNClassificationObservation] {
        DispatchQueue.main.async {
            // update the UILabel with the top classification on the main thread
            self.fruitnamelbl.text = classificationresult.first!.identifier
            print(classificationresult.first!.identifier)
        }
    } else {
        print("Unable to get the results")
    }
}




Note: DispatchQueue.main.async is used here to update UIKit objects (in our case, the UILabel) on the main (UI) thread, since all the classification work runs on a background thread.
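Each VNClassificationObservation also carries a confidence score, so if you want the label to reflect how sure the model is, you could extend handleResult() along these lines. The 0.8 threshold and the "Not sure" text are illustrative choices, not part of the original tutorial.

if let top = (request.results as? [VNClassificationObservation])?.first {
    DispatchQueue.main.async {
        // show the label only when the model is reasonably confident
        if top.confidence > 0.8 {
            self.fruitnamelbl.text = "\(top.identifier) (\(Int(top.confidence * 100))%)"
        } else {
            self.fruitnamelbl.text = "Not sure"
        }
    }
}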








Listing: ViewController.swift





import UIKit
import CoreML
import Vision

class ViewController: UIViewController {

    var name: String = ""

    @IBOutlet weak var fruitnamelbl: UILabel!
    @IBOutlet weak var fruitImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func classifybtnclicked(_ sender: Any) {
        excecuteRequest(image: fruitImageView.image!)
    }

    @IBAction func piclimage(_ sender: Any) {
        getimage()
    }

    func mlrequest() -> VNCoreMLRequest {
        var myrequest: VNCoreMLRequest?
        let modelobj = ImageClassifier()
        do {
            let fruitmodel = try VNCoreMLModel(for: modelobj.model)
            myrequest = VNCoreMLRequest(model: fruitmodel, completionHandler: { (request, error) in
                self.handleResult(request: request, error: error)
            })
        } catch {
            print("Unable to create a request")
        }
        myrequest!.imageCropAndScaleOption = .centerCrop
        return myrequest!
    }

    func excecuteRequest(image: UIImage) {
        guard let ciImage = CIImage(image: image) else {
            return
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage)
            do {
                try handler.perform([self.mlrequest()])
            } catch {
                print("Failed to get the description")
            }
        }
    }

    func handleResult(request: VNRequest, error: Error?) {
        if let classificationresult = request.results as? [VNClassificationObservation] {
            DispatchQueue.main.async {
                self.fruitnamelbl.text = classificationresult.first!.identifier
                print(classificationresult.first!.identifier)
            }
        } else {
            print("Unable to get the results")
        }
    }
}

extension ViewController: UINavigationControllerDelegate, UIImagePickerControllerDelegate {

    func getimage() {
        let imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .photoLibrary
        imagePicker.allowsEditing = true
        present(imagePicker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let fimage = info[.editedImage] as! UIImage
        fruitImageView.image = fimage
        dismiss(animated: true, completion: nil)
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }
}




Everything is ready!







Now start your Simulator and test the application.



Note: Make sure you have photos of oranges and strawberries in your Simulator's photo library.






Click the Pick Image button





Select any image





Click the Classify button





Select another image and click Classify



Hooray!



You have created your first iOS application using Core ML.


