Introduction
Core ML is a framework that Apple announced at WWDC 2017. It ships with iOS 11 and gives our applications the ability to learn without being explicitly programmed.
Apple provides some predefined Core ML models, so we can import a model into our Xcode project and use it easily in our applications. Core ML requires models in the .mlmodel format (e.g. modelName.mlmodel). Apple also provides tooling for converting models from other formats into Core ML models.
How to add a Core ML model to our Application
~ First, download an Apple Core ML model from the following Apple link: “https://developer.apple.com/machine-learning/”
~ Core ML models work with Xcode 9, so we created our project in Xcode 9, added the model to it, and made sure it is included in the app target, as shown in the figure below.
Example: Let’s look at the example below.
In our project, we recognize an image using a Core ML model. We placed an image view, a label, and a button in the storyboard, then created outlets for the image view and label and an action for the button in the view controller class. We need an image picker for showing and changing images in the project, so the view controller must adopt two protocols:
1. UIImagePickerControllerDelegate
2. UINavigationControllerDelegate
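The setup above can be sketched as follows. This is a minimal outline, not the original project’s code: the outlet names (imageView, resultLabel) are illustrative, and the delegate callback uses the Swift 4 / Xcode 9-era UIImagePickerController API.

```swift
import UIKit

class ViewController: UIViewController,
                      UIImagePickerControllerDelegate,
                      UINavigationControllerDelegate {

    // Outlets wired up from the storyboard (names are assumptions).
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var resultLabel: UILabel!

    // Delegate callback: show the picked image and dismiss the picker.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
            imageView.image = pickedImage
        }
        dismiss(animated: true, completion: nil)
    }
}
```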
First, import the CoreML and Vision frameworks in ViewController.
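At the top of ViewController.swift, the imports look like this:

```swift
import UIKit
import CoreML   // the machine learning runtime
import Vision   // image-analysis layer on top of Core ML
```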
There are various uses of Core ML through the Vision framework in our application. Vision gives us easy access to Apple’s models for image detection, face detection, text detection, barcode detection, natural language processing, etc.
The Vision framework provides many classes that connect a Core ML model to the application, for example VNCoreMLModel.
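Wrapping the generated model class in a VNCoreMLModel might look like this. Inceptionv3 is used here only as an example of a model downloadable from Apple’s page; substitute the class Xcode generated for whichever model you added.

```swift
// Wrap the Core ML model so Vision can drive it.
// Inceptionv3 is an assumed example model, not necessarily the one used here.
guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
    fatalError("Could not load the Core ML model")
}
```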
Inside the button action, write the following code:
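A plausible version of that button action simply presents the image picker; the action name is hypothetical, and the view controller is assumed to adopt the two delegate protocols listed earlier.

```swift
// Hypothetical button action: let the user pick a photo to classify.
@IBAction func chooseImageTapped(_ sender: UIButton) {
    let picker = UIImagePickerController()
    picker.delegate = self            // requires both delegate protocols
    picker.sourceType = .photoLibrary // pick from the photo library
    present(picker, animated: true, completion: nil)
}
```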
We also created a detectImageDetail() function to recognize the image using the Core ML model.
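A sketch of detectImageDetail() is shown below. It assumes the Inceptionv3 model class and a resultLabel outlet (both illustrative names): the function runs a VNCoreMLRequest over the image and writes the top classification to the label.

```swift
// Classify a CIImage with the Core ML model via Vision.
// Inceptionv3 and resultLabel are assumed names for this sketch.
func detectImageDetail(image: CIImage) {
    guard let model = try? VNCoreMLModel(for: Inceptionv3().model) else {
        fatalError("Could not load the Core ML model")
    }

    let request = VNCoreMLRequest(model: model) { [weak self] request, error in
        guard let results = request.results as? [VNClassificationObservation],
              let topResult = results.first else {
            fatalError("Unexpected result type from VNCoreMLRequest")
        }
        // Vision calls back on a background queue; hop to main for UI work.
        DispatchQueue.main.async {
            let percent = Int(topResult.confidence * 100)
            self?.resultLabel.text = "\(topResult.identifier) (\(percent)%)"
        }
    }

    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}
```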
Now, let’s run the application; it will give a result like the one shown in the figure below.
You can see that it gives the possibilities (with confidence values) for the object detected in the given image. In the images above, an ocean and a shower are recognized in the two images using the Core ML model, with approximate results.
Advantages
~ Easy to add to your app.
~ Many types of images are recognized easily using a Core ML model.
Disadvantages
~ It supports iOS 11 and later only.
~ Core ML supports a limited number of model types.
Conclusion
With a Core ML model, we can easily recognize images. If you want better results, you can also create your own Core ML model and get more accurate predictions.
We hope this basic but interesting tutorial helps you explore the use of Core ML and serves as a starting point for using it in your own apps.
Feel free to share your own experience, usage, and examples, and move on to the more advanced and newly introduced frameworks/APIs. You can also contact our team if you are looking for machine learning development services.