So this is Day 56 for me, but I am tackling Day 52 of the 100 Days of Swift learning path. It seems I cannot find the time to catch up, or perhaps the topics are hard enough for me that I need extra time to finish the challenges before moving on.
The app we are going to build in this project is called Instafilter.
We are going to learn about `UISlider` and real-time Core Image effects.
Setting up
In project 10 we learned how to select and import a picture from a user’s photo library; in this project we are going to do the opposite: write images back to the photo library.
This project lets users choose a picture from the library and edit it with a series of Core Image filters before saving it back to the library.
Designing the interface
I will divide this in steps so that it is easier to follow:
- In Main.storyboard embed the view controller inside a navigation controller
- Drag a UIView inside the view controller with size 375 x 470, positioned with a slight inset (about 20 pts) from the top left corner (here the article is misleading so just follow the video tutorial). In the Attributes Inspector, change the background color to “Dark Grey Color”.
- Drag an Image View inside the view with size 355 x 450, x: 10, y: 10. The image view’s mode should already be “Aspect Fit” if you use Xcode 10.2 or later.
- Drag a label just below the View. Here too the article gives different information compared to the video: in the video the views are just dragged around, while in the article their size and position are hardcoded. Change the label’s text to “Intensity” and, if you want to follow the text, make it right aligned (I did so).
- Drop a slider next to the label, dragging it all the way to the other side of the screen.
- Place two buttons: the first 120 x 44, attached to the left edge of the screen just below the label, with a title of “Change Filter”, the second 60 x 44 on the other edge, with a title of “Save”.
- Select the View Controller, then Editor > Resolve Auto Layout Issues > Reset To Suggested Constraints. If you followed the video you should change one of the buttons’ constraints to 20 and Update Frames; otherwise everything should look fine.
- Switch to the Assistant Editor and create outlets for the image view and the slider, actions for the two buttons and for the slider’s intensity.
Importing a picture
Let’s continue from where we left off:
- In ViewController.swift add a property to store the current image, of type `UIImage!` (implicitly unwrapped).
- In `viewDidLoad` assign “Instafilter” to the view controller’s `title` property and add a right bar button item with system item `.add`, target `self` and action `#selector(importPicture)`.
- Write the `importPicture` method: declare an image picker controller, allow editing via its `allowsEditing` property, set the view controller to be its delegate and present it with a standard animation. Be sure to conform to `UIImagePickerControllerDelegate` and `UINavigationControllerDelegate` before moving on.
- In Info.plist add the “Privacy – Photo Library Additions Usage Description” key, giving it a value of “We need to work with your photos” (or “We need to import photos” if you are following the text instead of the videos).
- Implement the method called when the user has finished selecting a picture with the image picker, that is `didFinishPickingMediaWithInfo`. Make sure (`guard let`) that there is an image with the `info[.editedImage] as? UIImage` call, then dismiss the controller and set `currentImage` to be the found image.
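Put together, the steps above look roughly like this (a minimal sketch; the property and method names follow the tutorial’s conventions, so adjust them if your own version differs):

```swift
import UIKit

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    @IBOutlet var imageView: UIImageView!
    @IBOutlet var intensity: UISlider!

    var currentImage: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()
        title = "Instafilter"
        navigationItem.rightBarButtonItem = UIBarButtonItem(barButtonSystemItem: .add, target: self, action: #selector(importPicture))
    }

    @objc func importPicture() {
        let picker = UIImagePickerController()
        picker.allowsEditing = true
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        // Make sure an edited image was actually returned before using it
        guard let image = info[.editedImage] as? UIImage else { return }
        dismiss(animated: true)
        currentImage = image
    }
}
```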
Quote of Day 53
A picture is worth a thousand words; an interface is worth a thousand pictures!
Ben Shneiderman, CS Professor at the University of Maryland
Applying filters: `CIContext` and `CIFilter`
- Import `CoreImage` (it is such a vast topic that I will study it before doing the next challenges and write a separate article about it).
- Add two new properties, one for a Core Image context and one for a Core Image filter, both implicitly unwrapped. A `CIContext` is a subclass of `NSObject` that provides an evaluation context for rendering image-processing results and performing image analysis, while a `CIFilter` is an image processor that produces an image by manipulating one or more input images or by generating new image data (also a subclass of `NSObject`). Before moving on, let’s instantiate both of them in `viewDidLoad()`, with a filter named “CISepiaTone”.
- Inside `didFinishPickingMediaWithInfo` set the `currentImage` property to be the input for the `currentFilter`. To do that we need to convert it to a `CIImage` object. I found what the Documentation has to say about this very interesting:
[A `CIImage` object is] a representation of an image to be processed or produced by Core Image filters. […] Although a `CIImage` object has image data associated with it, it is not an image. You can think of a `CIImage` object as an image “recipe”. A `CIImage` object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This lazy evaluation allows Core Image to operate as efficiently as possible.
After that we set `currentFilter.setValue(beginImage, forKey: kCIInputImageKey)`. But what is this last key? It is defined as “A key for the `CIImage` object to use as an input image”. This doesn’t really solve my doubts, even though Paul says it is self-explanatory, but let’s move on. Call the not yet created `applyProcessing()` method just below that (and also inside the `intensityChanged` method). This will make sure that the new method is called as soon as the image is imported, and then whenever the slider is moved.
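In code, the context and filter setup plus the filter input assignment might look like this (a sketch assuming the `context`, `currentFilter` and `currentImage` names used in this walkthrough):

```swift
import CoreImage

// Properties on the view controller, both implicitly unwrapped:
var context: CIContext!
var currentFilter: CIFilter!

// In viewDidLoad(), after the existing setup:
context = CIContext()
currentFilter = CIFilter(name: "CISepiaTone")

// At the end of didFinishPickingMediaWithInfo, after storing currentImage:
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
applyProcessing()
```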
- Write the first version of the `applyProcessing` method. I will use the video version once more because its syntax is slightly different from the one used in the text. First make sure that there is an output image attached to our filter with a `guard let` statement (that is, safely read the output image from the current filter); use the intensity slider’s value to set the `kCIInputIntensityKey` value of the current Core Image filter; then check that it is possible to create a `CGImage` from our `context` (that is, a Quartz 2D image from a region of a Core Image image object). This renders a region of the image (in this case all of it, which is the meaning of `image.extent`) into a temporary buffer using the context, then creates and returns a Quartz 2D image with the result. If this succeeds (an `if let` is needed because the `createCGImage` method returns an optional), store this image into a `UIImage` wrapper and set it as the `imageView.image`.
- Fill in the `changeFilter` method with an alert controller’s action sheet that displays all (or the most interesting) filters. Please be extra careful (not like me) when writing this, because Core Image and Xcode will not warn you if you make a mistake in a filter’s name. It will not tell you “filter id not recognised”. No… it will just crash your app at a line that, sincerely, at least to me, doesn’t make that much sense. Each alert action will have the filter’s name as its title, the default style and the yet unwritten `setFilter` method as its handler. An extra action will contain the cancel button, to avoid changing the selected filter.
- Write the `setFilter` method, which should update the `currentFilter` property with the filter that was chosen, set the `kCIInputImageKey` and call `applyProcessing`. As this is an “action method” it needs to have a `UIAlertAction` as its only parameter. So, make sure there is a valid image before continuing (`guard let`), safely read the alert action’s title (another `guard let`), set `currentFilter = CIFilter(name: actionTitle)`, then fill in the same three lines we had at the end of the `didFinishPickingMediaWithInfo` method. I wonder if we could refactor this…
- As not every filter has an intensity setting, the app will crash if we try to modify the intensity of a filter that doesn’t have one. For a full description of the filters’ keys go to this web page in the Apple Documentation. Knowing that each filter has a property that returns an array of all the keys it supports (`inputKeys`), store its return value in a constant and use it in conjunction with the `contains()` method to see if a given key is supported and, if so, set it. This adds the following checks to the `applyProcessing` method:
```swift
let inputKeys = currentFilter.inputKeys

if inputKeys.contains(kCIInputIntensityKey) {
    currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey)
}

if inputKeys.contains(kCIInputRadiusKey) {
    currentFilter.setValue(intensity.value * 200, forKey: kCIInputRadiusKey)
}

if inputKeys.contains(kCIInputScaleKey) {
    currentFilter.setValue(intensity.value * 10, forKey: kCIInputScaleKey)
}

if inputKeys.contains(kCIInputCenterKey) {
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
```
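The filter-switching pieces might look like this (a sketch; the exact filter list is up to you, but every name must be a valid Core Image filter name or the app will crash, as warned above):

```swift
@IBAction func changeFilter(_ sender: Any) {
    let ac = UIAlertController(title: "Choose filter", message: nil, preferredStyle: .actionSheet)
    // Each action passes its own title to setFilter as the filter name
    ac.addAction(UIAlertAction(title: "CIBumpDistortion", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIGaussianBlur", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIPixellate", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CISepiaTone", style: .default, handler: setFilter))
    // Cancel leaves the currently selected filter untouched
    ac.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    present(ac, animated: true)
}

func setFilter(action: UIAlertAction) {
    // Make sure a picture was imported and the action has a title
    guard currentImage != nil else { return }
    guard let actionTitle = action.title else { return }

    currentFilter = CIFilter(name: actionTitle)

    // Same three lines as at the end of didFinishPickingMediaWithInfo
    let beginImage = CIImage(image: currentImage)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    applyProcessing()
}
```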
Saving to the iOS photo library
- Familiarise yourself with the new `UIImageWriteToSavedPhotosAlbum()` function. The first parameter, `image`, is the image to write to the Camera Roll album; the second, `completionTarget`, optionally contains the object whose selector should be called after the image has been written to the Camera Roll album (in this case `self`, the view controller); the third, `completionSelector`, contains the method selector of the `completionTarget` object to call. This method is optional but must conform to a very specific format:
```objectivec
- (void)image:(UIImage *)image
    didFinishSavingWithError:(NSError *)error
    contextInfo:(void *)contextInfo;
```
This is much clearer than I thought Objective-C could be: it is a method that returns `void` (that is, nothing), is called `image`, and takes a `UIImage` as its first parameter, an optional `Error` as its second and an `UnsafeRawPointer` as its third. This last one allows us to access and manage raw bytes in memory, whether or not that memory has been bound to a specific type. Finally, the fourth parameter of `UIImageWriteToSavedPhotosAlbum()` contains an optional pointer to any context-specific data that one wants passed to the `completionSelector`.
- So, inside the `save` method, make sure that there is an image inside the image view and then call `UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)`. After the preceding explanation this should not be too scary.
- Finally, complete the `image(_:didFinishSavingWithError:contextInfo:)` method with two different alert controllers, one if there is an error and another if there is not.
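The two saving steps above can be sketched as follows (a minimal version; the alert titles and messages are my own placeholders, so word them however you like):

```swift
@IBAction func save(_ sender: Any) {
    // Only proceed if there is actually a processed image to save
    guard let image = imageView.image else { return }
    UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}

@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {
    if let error = error {
        // Saving failed, most likely because photo library permission was denied
        let ac = UIAlertController(title: "Save error", message: error.localizedDescription, preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
    } else {
        let ac = UIAlertController(title: "Saved!", message: "Your altered image has been saved to your photos.", preferredStyle: .alert)
        ac.addAction(UIAlertAction(title: "OK", style: .default))
        present(ac, animated: true)
    }
}
```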
Voilà! The project is finished!
Please don’t forget to drop a hello and a thank you to Paul for all his great work (you can find him on Twitter) and don’t forget to visit the 100 Days Of Swift initiative page.
You can find the repo for this project here.
Thank you!
If you like what I’m doing here please consider liking this article and sharing it with some of your peers. If you are feeling like being really awesome, please consider making a small donation to support my studies and my writing (please appreciate that I am not using advertisement on my articles).
If you are interested in my music engraving and my publications don’t forget to visit my Facebook page and the pages where I publish my scores (Gumroad, SheetMusicPlus, ScoreExchange and Apple Books).
You can also support me by buying Paul Hudson’s books from this Affiliate Link.
Anyways, thank you so much for reading!
Till the next one!