Figure

A blog about Swift and iOS development.

Hairlines

In graphic design, a “hairline” is a line drawn with the thinnest stroke possible on a given output device. If we're drawing with Sharpies and a ruler, our hairlines will be comparatively thick. If we're printing on LaserJets, they'll be pretty darn thin.

On iOS devices, hairlines should be exactly one pixel wide. But iOS doesn't let us draw in pixels; it asks us to draw in points. And points don't convert to pixels uniformly across all devices. And even if they did, Core Graphics's coordinate system lines up on the boundaries between pixels, not the pixels themselves. So getting hairlines right can be a little tricky.

Going Off the Grid

Check out the following 5x5 point grid. Let's say we want to draw a 1-point thick line between the two blue dots — that is, between (0,2) and (5,2).

We might expect something like:

But what we'll actually get (assuming, for the moment, that pixels and points are equivalent) is:

What happened?

It turns out our coordinate system (the black lines in our image) traces the space between points, not the points themselves. So by drawing right along the path at Y = 2, we're asking for a line to be drawn between points.

That works just fine, mathematically. But in the physical world, our line will be represented by square pixels that are either on or off. Anything “in between” gets averaged out with anti-aliasing. So instead of a 1-point blue line, we get a 2-point half-blue line.

If we want our perfect thin blue line, we have to draw it down the middle of a point, not in-between. For example, where Y = 1.5 or Y = 2.5:

More generally, to have a perfect line, we have to either round up or round down to the nearest mid-point between grid lines.

Or, graphically, for any Y in the shaded area, we need to either round it up to the top dashed line or round it down to the bottom dashed line to draw a perfect non-antialiased line.

How do we choose if we want to draw above or below our given coordinate? It depends on the situation. If we're trying to draw a line at the very top of a clipping view, we'll want to draw slightly below the Y we give it. Otherwise it will clip and not be visible. The same goes in reverse for drawing at the bottom of a view.

If we call this preference to draw above or below our point the line's bias, we can create a function that draws perfect 1-point-wide lines like so:

func singlePointLine(at y: CGFloat, 
                     in rect: CGRect, 
                     topBias: Bool = true) {
  let adjustedY = round(y) + (topBias ? -0.5 : 0.5)
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: 1.0)
} 

func makeLine(at y: CGFloat, 
              in rect: CGRect) -> UIBezierPath {
  precondition((rect.minY...rect.maxY).contains(y))
  let line = UIBezierPath()
  line.move(to: CGPoint(x: rect.minX, y: y))
  line.addLine(to: CGPoint(x: rect.maxX, y: y))
  return line
}

func strokePath(_ path: UIBezierPath, 
                width: CGFloat) {
  path.lineWidth = width
  UIColor.blue.setStroke()
  path.stroke()
}

My God, It’s Full of Pixels

Sadly, we don't want to draw perfect single point lines. We want to draw perfect single pixel lines. On non-retina devices, those are the same thing. But on everything else, each point is made up of several pixels.

“Plus” model iPhones, for example, actually have three pixels per point, represented by the yellow lines here.

So rather than drawing between the black point-lines of our previous illustrations, we want to draw between the yellow pixel-lines:

Which will give us a line at Y = 1.8333 or Y = 2.1667, depending on the bias.

That means we can't use our simple round() function anymore (as it only rounds to whole numbers). We have to write our own function that rounds up or down to the nearest fraction depending on the bias we give it:

func round(from: CGFloat, 
           fraction: CGFloat, 
           down: Bool = true) -> CGFloat {
  let expanded = from / fraction
  let rounded = (down ? floor : ceil)(expanded)
  return rounded * fraction
}
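To see how that behaves, here's the same rounding logic as a standalone sketch — Double stands in for CGFloat so it runs without UIKit, and the sample values are purely illustrative:

```swift
// Rounds «from» up or down to the nearest multiple of «fraction».
// (Same logic as above, with Double in place of CGFloat.)
func round(from: Double, fraction: Double, down: Bool = true) -> Double {
  let expanded = from / fraction
  let rounded = down ? expanded.rounded(.down) : expanded.rounded(.up)
  return rounded * fraction
}

// With 3 pixels per point, fractions are thirds:
let roundedDown = round(from: 2.2, fraction: 1.0/3.0)              // ≈ 2.0
let roundedUp = round(from: 2.2, fraction: 1.0/3.0, down: false)   // ≈ 2.3333
```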

Then we just need to know what fraction of a point represents a pixel. We do this using the scale of our main screen:

var pixelUnit: CGFloat  {
  return 1.0 / UIScreen.main.scale
}

Then we can draw a pixel-thick line rounded to the nearest pixel instead of the nearest point:

//So close...
func almostSinglePixelLine(at y: CGFloat, 
                           in rect: CGRect, 
                           topBias: Bool = true) {
  let adjustedY = round(from: y, 
                        fraction: pixelUnit, 
                        down: topBias)
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: pixelUnit)
}

Which is really close to what we want. But this rounds us to the nearest fraction of a point corresponding to a pixel. Or, to put it another way, it snaps us to the yellow lines in our illustration, which actually run between pixels. If we want to avoid the anti-aliasing shown here, we need to snap to the exact middle of the pixel. That is, halfway between the yellow lines.

One way to do this would be to add ½ of our pixelUnit to our rounded value:

let offset = pixelUnit/2.0
let adjustedY = round(from: y, 
                      fraction: pixelUnit, 
                      down: topBias) + offset

which puts us right in the middle of our pixels, like we want. But it shifts both our lines down below Y = 2.

We really want our top-biased line to be just above 2 and our bottom-biased line to be just below it.

To compensate, we subtract offset from our y before rounding:

func singlePixelLine(at y: CGFloat, 
                     in rect: CGRect, 
                     topBias: Bool = true) {
  let offset = pixelUnit/2.0
  let adjustedY = round(from: y - offset, 
                        fraction: pixelUnit, 
                        down: topBias) + offset
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: pixelUnit)
}

And there we have it. Pixel-perfect hairlines on either side of Y = 2 (depending on the topBias param).
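We can verify the arithmetic with a standalone sketch of the same snapping math — no UIKit required, Double stands in for CGFloat, and scale = 3 models a hypothetical 3x "Plus" screen:

```swift
// Snaps «y» to the center of the pixel above or below it — the same
// math as singlePixelLine, minus the drawing.
func snapToPixelCenter(_ y: Double, scale: Double, topBias: Bool = true) -> Double {
  let pixelUnit = 1.0 / scale
  let offset = pixelUnit / 2.0
  let expanded = (y - offset) / pixelUnit
  let rounded = topBias ? expanded.rounded(.down) : expanded.rounded(.up)
  return rounded * pixelUnit + offset
}

let top = snapToPixelCenter(2.0, scale: 3.0)                    // ≈ 1.8333 (11/6)
let bottom = snapToPixelCenter(2.0, scale: 3.0, topBias: false) // ≈ 2.1667 (13/6)
```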

Here's a gist of all of this put together. Keep in mind the structure has been chosen for maximum readability, not because having a bunch of free functions hanging around for drawing lines is a good idea :)


Hit me up on twitter (@jemmons) to continue the conversation.

First Class Functions in Swift

A higher-order function is simply a function that takes another function as an argument (or returns one).1 Swift uses HOFs all over the place. In fact, it considers the practice important enough to warrant language-level feature support in the form of trailing closure syntax:

// Takes a function as «completion» argument
func foo(with: String, completion: ()->Void) {...}
foo("bar") {
  // trailing closure syntax lets us create that
  // function in-line with a closure, package it up,
  // and pass it in as «completion».
}

But the pervasive use of trailing closure syntax in Swift can lead us to believe that, when a parameter has a function type, we have to pass it a closure. Not true! In Swift, functions are first-class citizens. We can use a function, itself, anywhere we might otherwise use a closure or variable.

For example, to sum an array of numbers, we might be tempted to give the implementation in a trailing closure like so:

[1,2,3,4,5].reduce(0) { $0 + $1 } //> 15

This is just a fancy way of writing:

[1,2,3,4,5].reduce(0, combine: { a, b in
  return a + b
})

Looking at it this way, it's clear all we're really doing is passing the combine parameter an ad-hoc closure that takes two arguments and sums them together. But do we need to create an inline closure for this? There's already an existing function that takes two arguments and sums them together. It's called +. We should just use that:

[1,2,3,4,5].reduce(0, combine: +) //> 15

Treating functions as first-class citizens extends well beyond just passing them as parameters. We can assign them to variables:

floor(5.12345) //> 5
let myFunc: (Double)->Double = floor
myFunc(5.12345) //> 5

We can conditionally assign functions:

let roundUp = true
let myFunc: (Double)->Double = roundUp ? ceil : floor
myFunc(5.12345) //> 6

And, in fact, we don't even need to assign functions to a variable to use them:

let roundUp = true
(roundUp ? ceil : floor)(5.12345) //> 6

Which is pretty cool. Anytime we use functions directly instead of wrapping them in closures or other forms of indirection, we increase the declarativeness of our code.
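Putting those pieces together, here's a small self-contained sketch — the add, double, and identity functions are made up for illustration, and reduce is written with its current label-free signature:

```swift
// A named function with the same shape as reduce's combine argument:
func add(_ a: Int, _ b: Int) -> Int {
  return a + b
}

// Passed directly wherever an (Int, Int) -> Int is expected:
let sum = [1, 2, 3, 4, 5].reduce(0, add)  // 15

// Functions can be conditionally selected and called in one expression:
func double(_ x: Int) -> Int { return x * 2 }
func identity(_ x: Int) -> Int { return x }
let shouldDouble = true
let result = (shouldDouble ? double : identity)(21)  // 42
```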


1: AKA a procedural parameter, which I am motivated to mention only owing to my love of alliteration.↩︎



Lenses in Swift

Brandon Williams gives a talk at the Functional Swift Conference about Lenses and Prisms in Swift. I'm not 100% sold on their particular usefulness in Swift (and, to be fair, Brandon isn't either). But I love Brandon's playground presentations, and this one demonstrates an awesome blueprint for adding expressive, functional constructs to Swift: define a function, genericize it, compose it, and define an operator for the composition.



Matching with Swift's Optional Pattern

For Example…

Swift enumerations with raw values can be very handy when dealing with "stringly-typed" data of the sort JSON is notorious for:

{
  "characters":[
    {"name":"Hank",
     "class":"ranger"},
    {"name":"Sheila",
     "class":"thief"},
    {"name":"Diana",
     "class":"acrobat"},
    {"name":"Eric",
     "class":"cavalier"}
  ]
}

A string can hold a nearly infinite number of possible values, and we need that sort of flexibility when representing names (the variety of which is also quite large).

But what about a character's class? There's only a handful of valid values for that. Representing them as strings (with their infinite variability) would not just be overkill, it'd expose us to potentially subtle bugs such as misspellings, capitalization errors, and non-exhaustive switch statements:

switch stringFromJSON{
case "ranger":
  //Good...
case "Thief":
  //Oh no! Wrong capitalization.
case "cavaleer":
  //Oops! Wrong spelling
}
//Yikes! We forgot completely about acrobats!

If we represent a character's class with a strict enum instead of a flexible String, Swift will catch all these bugs for us! And if we give raw values to our enum types, translating back and forth between JSON's strings is pretty easy too:

enum DNDClass:String{
  case Ranger = "ranger"
  case Thief = "thief"
  case Acrobat = "acrobat"
  case Cavalier = "cavalier"
}

let myDNDClass = DNDClass(rawValue:stringFromJSON)
let jsonString = myDNDClass?.rawValue

And this works great… until we try to use it in a switch statement:

switch DNDClass(rawValue:stringFromJSON){
case .Ranger:
//ERROR:
//Enum case 'Ranger' not found in type 'DNDClass?'
}

See that tricky "?" at the end of DNDClass?? We're getting an optional back from our constructor because init(rawValue:) is actually a failable initializer.

And it makes sense that it would be. Like we said before, strings can hold a nearly infinite number of possible values. Our enum's constructor has to be ready for that:

DNDClass(rawValue:"ranger")
//> "Optional(DNDClass.Ranger)"

DNDClass(rawValue:"foobar")
//> "nil"

We can deal with this by guarding against nils:

guard let c = DNDClass(rawValue:jsonString) else{
  fatalError("Unexpected class: \(jsonString)")
}

switch c{
case .Ranger:
  //Do ranger stuff...
}

And that's okay… But now we have to parse a chunk of logic above our switch before getting down to business.

We could, instead, take advantage of the fact that Optionals are enums:

switch DNDClass(rawValue:jsonString){
case .Some(.Ranger):
  //Do ranger stuff...
case .None:
  fatalError("Unexpected class: \(jsonString)")
}

But now, instead of having a clean abstraction around the very concept of an Optional, we have a leaky abstraction that requires anyone who reads it to understand the .Somes and .Nones being used under the covers. There's a reason, after all, that the Swift Programming Language makes no mention of Optional's implementation.1

The Optional Pattern

This is why the optional pattern exists. Just add a "?" to the end of an identifier, and it "matches values wrapped in a Some(Wrapped) case of an Optional<Wrapped> enumeration."

In other words, these cases are equivalent:

switch DNDClass(rawValue:jsonString){
case .Some(.Ranger):
  //...
case .Ranger?:
  //...
}

This goes for more complex value-binding patterns as well:

enum Response{
 case Error(String)
}

switch myResponse{
case .Some(.Error(let s)):
  //Do something with «s»...
case .Error(let s)?:
  //Also can do something with «s»...
}

In essence, any time you want to match a pattern against an unwrapped optional value, just put a ? at the end of it.
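As footnote 4 hints, the optional pattern also works outside of switch. Here's a standalone sketch using if case (the enum is repeated so the snippet stands on its own, and the strings are illustrative):

```swift
// The same enum from above, repeated so this snippet runs standalone:
enum DNDClass: String {
  case Ranger = "ranger"
  case Thief = "thief"
  case Acrobat = "acrobat"
  case Cavalier = "cavalier"
}

// «if case» accepts the optional pattern too:
var isRanger = false
if case .Ranger? = DNDClass(rawValue: "ranger") {
  isRanger = true  // Matches: init succeeded and produced .Ranger.
}

var matchedGarbage = false
if case .Ranger? = DNDClass(rawValue: "foobar") {
  matchedGarbage = true
}
// matchedGarbage stays false: init failed, so the pattern can't match.
```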

Don't Default

There's just one gotcha left to tackle. We might expect the following to be exhaustive, but Swift is going to tell us it's not:

switch DNDClass(rawValue:jsonString){
case .Ranger?:
  //Do ranger stuff...
case .Thief?:
  //Hide in shadows...
case .Acrobat?:
  //Backflips...
case .Cavalier?:
  //Whatever it is a cavalier does...
}

//> ERROR: Switch must be exhaustive

That's because, even though we've covered every character class in our enum, there's one important case we've forgotten: what if DNDClass's init fails and returns nil?

We could easily catch this with a default:

switch DNDClass(rawValue:jsonString){
case .Ranger?: break
case .Thief?: break
case .Acrobat?: break
case .Cavalier?: break
default:
  //None of the above...
}

But we only want to catch the nil case, and default catches everything. With default in there, we could add new character classes to our enum,2 and Swift wouldn't warn us if we forgot to cover them in the switch.

A common mistake is to try to use the optional pattern again, maybe with something like nil?. But remember, the optional pattern only matches when an optional is expressly not nil. So that will never work.

The easiest, most readable thing is to use the literal nil, itself:3

switch DNDClass(rawValue:jsonString){
case .Ranger?: break
case .Thief?: break
case .Acrobat?: break
case .Cavalier?: break
case nil:
  //Failed to init.
}

Patterns in Swift (or, at least, the documentation for patterns in Swift) use a lot of very precise terminology that can take a bit of work to wrap one's head around. But they're incredibly useful4 and punch well above their weight in their contributions to Swift's expressiveness.

As the language continues to evolve, I'm most excited to see what changes and additions happen in this space.


1: Sort of. It actually is mentioned in two places: an example that "Reimplements the Swift standard library's optional type", and the example that shows code equivalent to the very optional pattern we're about to discuss.↩︎

2: Bobby and Presto feel left out.↩︎

3: This isn't a special pattern or identifier — it's simply an expression of the fact Optional.None == nil is true.↩︎

4: Especially in Swift 2 since it gives if, while, guard, and for-in their own case matchers. Add those to the existing switch, and there are all sorts of places for us to play with patterns!↩︎



Asynchronicity and the Main Thread: Part 2

Last week we talked about how we could run bursty asynchronous tasks on the main thread without blocking it. This is super easy if we have a single task that exists in isolation — let's say, some JSON we need to download. For example:1

if let url = NSURL(string: "http://example.com"){
  let req = NSURLRequest(URL:url)
  let main = NSOperationQueue.mainQueue()
  NSURLConnection
    .sendAsynchronousRequest(req, queue: main){
    //Profit!
  }
}

That's pretty straightforward. The problem is, in applications, nothing happens in isolation. Raw JSON bytes don't do us any good. We need to parse them. And we probably want to update our interface to reflect the downloaded data. This would be simple enough if everything were synchronous:

//Timing is easy in the synchronous world
//(but blocks the main thread) 
let data = downloadJSON()
let json = parseJSON(data)
myController.updateUI(json)

To prevent blocking the main thread, though, we have to make these operations asynchronous. In an asynchronous world, the data isn't downloaded by the time downloadJSON() returns. The JSON isn't parsed by the time parseJSON() gets back to us. We have to rely on completion blocks (or the delegate pattern) to let us know when our work is completed:

NSURLConnection
.sendAsynchronousRequest(req, queue: main){
(res, data, error) in
  parseJSON(data){ json in
    myController.updateUI(json)
  }
}

Chaining one operation on the completion of another like this not only leads to a lot of confusing indentation, but now our controller code is all mixed up with our parsing code which is all up in our networking code. It's a complected mess that only gets worse the more dependencies we add.

What we need is an abstraction around the life-cycle of our tasks. One that lets them run asynchronously, but also manages their completions such that we can queue them to run in a specific order.

Thankfully, NSOperation (and the associated NSOperationQueue machinery) lets us do just that.

NSOperation, Queues, and Dependencies

An NSOperation encapsulates the execution of isolated chunks of work. We can subclass2 NSOperation to do pretty much any kind of work we want, then add instances of our subclass to an NSOperationQueue to start them processing.

Of course, encapsulating work and processing it isn't exactly rocket science. We've been doing this forever with simple functions. The magic of NSOperation/Queue is that it tracks the status of our operations, only starting them when they're ready, and taking note of when they finish.

That lets us set up chains of dependencies with addDependency, like so:

let myDownloadOp = DownloadOperation()
let myParseOp = ParseOperation()
let myUpdateOp = UpdateOperation()
let queue = NSOperationQueue.mainQueue()
myUpdateOp.addDependency(myParseOp)
myParseOp.addDependency(myDownloadOp)
queue.addOperations(
  [myDownloadOp, myParseOp, myUpdateOp],
  waitUntilFinished:false)

Because of our dependencies, mainQueue will only execute our parse operation after the download operation has completed. Likewise, it will only start the update operation after the parse operation has completed. Note that everything is self-contained in its own operation and nothing is nested.
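To see the ordering guarantee in action, here's a self-contained sketch using block operations in place of the custom subclasses above — written with the current Foundation names (BlockOperation, OperationQueue), and using a private queue rather than the main queue so the snippet can safely block until everything finishes:

```swift
import Foundation

var log = [String]()

// BlockOperation stands in for our custom operation subclasses:
let downloadOp = BlockOperation { log.append("download") }
let parseOp = BlockOperation { log.append("parse") }
let updateOp = BlockOperation { log.append("update") }

// Dependencies run «update» only after «parse», and «parse» only
// after «download» — regardless of the order they're added in.
updateOp.addDependency(parseOp)
parseOp.addDependency(downloadOp)

let queue = OperationQueue()
queue.addOperations([updateOp, parseOp, downloadOp],
                    waitUntilFinished: true)
// log is now ["download", "parse", "update"].
```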

More important in the context of our current conversation, this is true even if all these operations are asynchronous. And, as long as we use mainQueue() to process these operations, everything happens on the main thread, too.

In other words, NSOperation/Queue lets us run asynchronous operations on the main thread while maintaining complete control over their timing and order of execution.3

Which means we should be ready to go. And we would be… if the documentation around NSOperation weren't a confusing and self-contradictory hodgepodge.

Making Sense of Asynchronous Operations

Here I'm going to try to synthesize, as best I can, what I've learned about implementing asynchronous NSOperation subclasses from a maze of disparate documentation. If you're more interested in whats than hows, you should feel free to skip ahead to the next section.

By default, NSOperation assumes that when an operation hits the end of its start() method,4 it is complete.5 Making its concurrent property return true is supposed to indicate an operation's task lives beyond the scope of start() — in other words, that it's asynchronous — and thus shouldn't be considered complete just because start() has returned.

Because such an operation would be responsible for manually marking itself as completed, operation queues used to assume concurrent operations managed their own internal thread. It would be redundant for a queue to create its own thread to run a concurrent operation like this, so "concurrent" used to also mean "Tell the queue not to create a new thread for this operation."

Then operation queues got rewritten to use Grand Central Dispatch under the covers. As a result, the documentation says, "Operations are always executed on a separate thread, regardless of whether they are designated as asynchronous or synchronous operations."6

Because the concurrent property was being ignored when it came to threading, its only remaining job was to indicate whether an operation was asynchronous or not. "Concurrent" and "asynchronous" technically mean different things, though. So in iOS 7, the more semantically precise asynchronous got added to the API, to be used in place of concurrent.7

The only problem being, neither the asynchronous nor concurrent properties seem to do anything.8 Operations with either of these set still report themselves as completed whenever start() returns (whether added to a queue or launched manually, contrary to the docs). The only way to make sure an operation doesn't mark itself as finished when start() completes is to override start() itself.9

Making a New Start

And so, the most important thing we have to do when implementing an asynchronous subclass of NSOperation is to override its start() method. But start() is actually responsible for a few things.

  1. Calling the main implementation in main()
  2. Updating the operation's state to executing when it starts.
  3. Changing the operation's state to finished when it's done.
  4. Sending KVO notifications for each of the above.

Calling main() is easy. Our initial start() method could look like this:

override func start() {
  main()
}

To model state, we're going to create an enumeration, a property to hold it, and override the computed properties executing and finished to point to our state:

enum State{
  case Waiting, Executing, Finished
}

var state = State.Waiting

override var executing:Bool{
  return state == .Executing
}

override var finished:Bool{
  return state == .Finished
}

And update our start() to shift us into "executing" mode before calling main:

override func start() {
  state = .Executing
  main()
}

That's great for setting up "executing". But how do we mark our operation as "finished"? Remember, this is going to be doing asynchronous work, so we don't technically know when the operation is going to end. The best we can do is create a method that subclasses will have to call when their asynchronous tasks are complete:

func finish(){
  state = .Finished
}

This mostly works. But NSOperationQueue (and anything else using our operation) expects to be notified about changes to our state through KVO. And KVO has no way to get automatically triggered when the value of a computed property changes. So we have to send those notifications ourselves:

var state = State.Waiting{
  willSet{
    switch(state, newValue){
    case (.Waiting, .Executing):
      willChangeValueForKey("isExecuting")
    case (.Waiting, .Finished):
      willChangeValueForKey("isFinished")
    case (.Executing, .Finished):
      willChangeValueForKey("isExecuting")
      willChangeValueForKey("isFinished")
    default:
      fatalError( ... )
    }
  }
  didSet{
    switch(oldValue, state){
    case (.Waiting, .Executing):
      didChangeValueForKey("isExecuting")
    case (.Waiting, .Finished):
      didChangeValueForKey("isFinished")
    case (.Executing, .Finished):
      didChangeValueForKey("isExecuting")
      didChangeValueForKey("isFinished")
    default:
      fatalError( ... )
    }
  }
}

This simply sets up two observers on our state property: one for just before it changes, the other for just after. Depending on the transition, we send the appropriate KVO notifications (or bail with an error).

There are a few things we're playing fast and loose with here that wouldn't fly in a multi-threaded environment. There's no locking around our state property for one thing. And we've given no consideration to what happens if we're initialized by one thread, while start() is called by another.

That's okay! The whole point of this exercise is how much simpler and less crash-prone everything is when we avoid threading altogether. But we should make our "no thread" policy explicit by guarding against it in start(). Also, as a best practice, we should check that our operation hasn't been cancelled before we even begin:

override func start() {
  guard NSThread.isMainThread() else{
    fatalError( ... )
  }

  guard !cancelled else{
    return
  }

  state = .Executing
  main()
}

And that's more or less it! From here, subclasses can override main() to spin up whatever asynchronous task they want, and as long as it calls finish() when it completes, everything will just work.

Exactly what these asynchronous subclasses will look like is a topic for another week. But for now, here's a gist of the base AsyncOperation class we've created together.


1: We should all be using NSURLSession-based networking in the real world. I'm using NSURLConnection in my snippets because it happens to have a more example-friendly interface, but don't take that as an endorsement of best practice. ↩︎

2: NSBlockOperation is a great way to quickly experiment with NSOperation and NSOperationQueue without all the hassle of subclassing. But a block operation marks itself as finished as soon as its block returns, so it's not well suited for asynchronous tasks. ↩︎

3: Q.E.D. ↩︎

4: Including main() which is, by default, called by start() ↩︎

5: "Complete" being defined as a state where isFinished is true and KVO notifications have been dispatched to that effect. ↩︎

6: Note the main queue is an exception. Operations executed on the main queue always run on the main thread. Which, incidentally, is what allows us to ignore most of this nonsense. ↩︎

7: Technically, the two are synonyms as far as NSOperation is concerned. Overriding concurrent to return true does the same for asynchronous and vice-versa. ↩︎

8: As a matter of style, we still override asynchronous to return true in our example. But it's just as a nod toward semantic correctness. There's no functional benefit to doing so that I can find. ↩︎

9: While being careful not to call super, as that would trigger the superclass's behavior of marking itself as finished as soon as super.start() returns. ↩︎



Asynchronicity and the Main Thread: Part 1

Here's a bit of advice we should all follow if we want to reduce bugs and increase the stability of our apps:

"Do everything on the main thread, always."

This might raise a few eyebrows, as it seems to run contrary to another common bit of advice often given to iOS developers:

"Never block the main thread, ever."

Enlightenment comes when we realize the two are not mutually exclusive. We're free to do all the work we please on the main thread — just as long as we don't block it.

Fair enough. But how can we run tasks on main and keep it responsive? Asynchronicity is the answer.

Let's say we want to write an app that downloads this recreation of the Final Fantasy IV map because it's awesome. If we download it synchronously, the thread we run it on has to wait until the download completes before it can move on. Essentially, we block the thread from the moment the download starts until the last byte is transferred.

But that map contains a full 30mb worth of pixels! If we download it on the main thread, we can expect it to freeze our interface for a few seconds at least… and that's over wifi. On a dodgy cell connection, it'd probably block main long enough to crash the app.

That certainly sounds like a deal breaker. Guess it's time to spin up something on a background thread, right?

Not necessarily. The trick is, our download is sitting idle for most of the 3-5 seconds it's blocking the thread. Network connections are high-latency and therefore inherently bursty. For every millisecond of requests sent there's around 200-600ms of just waiting around for a response to come back.

We can reclaim those idle milliseconds by making our download asynchronous. Unlike its synchronous counterpart, an asynchronous download returns control to the thread as soon as it's called. Then, as time goes on, it only asks for the thread's attention when something needs to happen, like a request needs to be sent or some chunk of the download needs to be written to disk. The rest of the time, the thread is free to do other work, keeping it responsive.
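That shape looks something like this in code — a hypothetical sketch using the modern URLSession API, with a made-up URL:

```swift
import Foundation

// dataTask(with:) returns immediately; the handler runs later, when
// the transfer finishes (on a background queue by default).
let url = URL(string: "https://example.com/ffiv-map.png")!  // hypothetical
let task = URLSession.shared.dataTask(with: url) { data, response, error in
  // Only now — seconds later — do we spend a few milliseconds
  // handling the result. The thread was free the whole time.
}
task.resume()
```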

In other words, as long as a task spends a lot of its time doing nothing, we can call it asynchronously on the main thread without blocking it. Thankfully, it turns out "nothing" is exactly what most tasks do for the majority of their time!

Think of displaying a dialog, for example. There's a few milliseconds of work at the beginning to show it on the screen. And it'll take a few milliseconds at the end to process a user's tap. But for the many seconds in between? It's just waiting, doing nothing.

Or animations. Even if we're animating a scene at a smooth 60 frames per second and taking a full 10ms to prepare each frame, our animation is still spending over a third of its time twiddling its thumbs, waiting.

With so many every-day tasks benefitting from asynchronicity, it is perhaps no surprise that the Cocoa frameworks are chock-full of wonderfully asynchronous APIs. Networking with NSURLConnection/Session; displaying stuff with UITableView, UIAlertView/Controller, etc.; processing input via UITextField/View… if it has a delegate or takes a completion handler, chances are it's operating asynchronously.

In fact, most of the Cocoa code we write is either already asynchronous or can be made so by looking up its documentation and following the "important" instructions in the discussion section. So if step 1 towards making our apps safer and less crashy is to prefer asynchronous APIs over their synchronous counterparts, we're already 90% of the way there.

Once everything is asynchronous, step 2 is pretty easy, too. We could try to stop thinking in terms of the total synchronous time it takes a task to complete, and instead reason about the aggregate of work it actually performs, interleaved between other work on a thread — but I usually just throw everything on the main thread and see how it performs. I've yet to be disappointed.

Step 3 is more complicated. Now that we have a bunch of asynchronous tasks on a single thread, it turns out to be surprisingly hard to answer questions like "What order will this happen in?" or "When would it be safe to load this?"

Managing that complexity will be the topic of next week's post.



Custom Menu Items for Table View Cells

Adding our own items to a table view cell's pop-up menu is actually pretty easy. But the documentation can be tricky to track down, and there's one weird gotcha, so let's break it down.

TL;DR

Check out the documentation for UIMenuController

Still TL; Still DR

There's a sample project for you to peruse on GitHub.

The Scenic Route

Adding a standard pop-up menu to our table view cells is a simple matter of implementing three methods in our table view delegate:

override func tableView(tableView: UITableView,
  shouldShowMenuForRowAtIndexPath indexPath: NSIndexPath) -> Bool { ... }

override func tableView(tableView: UITableView,
  canPerformAction action: Selector,
  forRowAtIndexPath indexPath: NSIndexPath,
  withSender sender: AnyObject?) -> Bool { ... }

override func tableView(tableView: UITableView,
  performAction action: Selector,
  forRowAtIndexPath indexPath: NSIndexPath,
  withSender sender: AnyObject?) { ... }

This will give us access to cut, copy, and paste (with selectors of cut:, copy:, and paste:, natch) right out of the box. But what if we want to add our own? UITableViewDelegate's documentation for these methods seems to indicate copy and paste are the only actions available to us.

That's where UIMenuController comes in. We can use it to add our own UIMenuItems to the list of possibilities sent to tableView(_: canPerformAction:...) and tableView(_: performAction:...).

Creating a menu item is straightforward:

let item = UIMenuItem(title: "My Item",
  action: Selector("myItem:"))

And we work with UIMenuController through its singleton instance:

let menu = UIMenuController.sharedMenuController()

This means we can add a menu item to it wherever we like, but it should be someplace that only gets called once (if we don't want our menu getting flooded with duplicate items). The app delegate's application(_: didFinishLaunchingWithOptions:) is one such place.

We could just assign our item to UIMenuController's menuItems property. But because it's a singleton, we really can't reason about what might or might not have been added to it already. So we take measures to ensure we preserve any existing items:

var newItems = menu.menuItems 
  ?? [UIMenuItem]()
newItems.append(item)
menu.menuItems = newItems
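Putting those pieces together, registration in the app delegate might look something like the following sketch (Swift 2-era syntax; the "My Item" title and myItem: selector are the hypothetical examples from above):

```swift
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?

  func application(application: UIApplication,
   didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    //This runs exactly once per launch, so our item
    //can't end up in the menu twice.
    let item = UIMenuItem(title: "My Item",
      action: Selector("myItem:"))

    let menu = UIMenuController.sharedMenuController()
    var newItems = menu.menuItems ?? [UIMenuItem]()
    newItems.append(item)
    menu.menuItems = newItems

    return true
  }
}
```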

Gotcha

At this point, we'll see our custom action selector, "myItem:", sent to tableView(_: canPerformAction:...), and we might think we're done.

But wait: our menu item still isn't showing up. What's going on?

Here's the thing: canPerformAction... gets sent the selector, but tableView(_: performAction:...) never does. As far as the menu system is concerned, nothing responds to our action's selector, so it doesn't get displayed.

The solution is to step outside of the table's delegate setup and work directly with our cell's custom class. If we implement the selector as a method there, it will be found by the menu system and called whenever our custom item is tapped:

//In our table cell's custom class
func myItem(sender:AnyObject?){
  //handle menu tap here.
}

We might think that means we can leave out tableView(_: performAction:...) altogether! Nope. It still needs to be there or the menu won't get displayed, regardless of what item actions we actually care about.

So there we go! As long as:

  • tableView(_: shouldShowMenuForRowAtIndexPath:) returns true for the cell at the given index path, and
  • tableView(_: canPerformAction:...) returns true for our custom item's selector for the cell at the given index path, and
  • tableView(_: performAction:...) exists, and
  • The cell at the given index path implements a method with our action's selector

our custom menu item will appear and do the Right Thing.
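Gathered in one place, the delegate side of that checklist might look like this sketch (Swift 2-era syntax, inside a UITableViewController subclass; the cell's class is assumed to implement myItem: as above):

```swift
override func tableView(tableView: UITableView,
 shouldShowMenuForRowAtIndexPath indexPath: NSIndexPath) -> Bool {
  //Requirement 1: allow the menu for this cell.
  return true
}

override func tableView(tableView: UITableView,
 canPerformAction action: Selector,
 forRowAtIndexPath indexPath: NSIndexPath,
 withSender sender: AnyObject?) -> Bool {
  //Requirement 2: opt in to our custom action
  //(and copy:, for good measure).
  return action == Selector("myItem:")
      || action == Selector("copy:")
}

override func tableView(tableView: UITableView,
 performAction action: Selector,
 forRowAtIndexPath indexPath: NSIndexPath,
 withSender sender: AnyObject?) {
  //Requirement 3: must exist for the menu to show at all,
  //even though our custom selector is actually handled by
  //the cell itself (requirement 4).
  if action == Selector("copy:") {
    //handle copy here.
  }
}
```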


Hit me up on twitter (@jemmons) to continue the conversation.

Swift Exceptions are Swifty: Part 2

Here's what actually handling an error looks like in Swift 2:1

func human() throws -> String{ ... }
do{
  let h = try human()
  //do stuff with «h»
} catch{
  print("Error: \(error)")
}

In Part 1, we discussed how the underpinnings of throws and throw are, at least conceptually, more Swifty than they first appear. Like a surprising number of types, conditionals, and other "language features" of Swift, throws and throw feel like they could be implemented in the standard library. There's some syntactic sugar, yes, but not a lot of magic.

What about actually handling errors, though? Is it even possible to implement try/catch in terms of Swift, or is it something that requires new alien flow control bolted on to the runtime?

There are fewer clues to go on here, but it's illuminating to look at the other enhancements added to Swift 2 alongside error handling. In doing so, we may discover that neither try nor catch is as magic as it first appears.

Eyes Up, Guardian

For example, let's look at guard. Its whole purpose is to evaluate a condition in the current scope and, if the condition doesn't hold true, return out of said scope. That is to say:

guard let x = foo() else{
  //must return!
}
// can do stuff with «x».

There are a number of ways this kind of early exit can simplify code all on its own, of course. But looking at our error handling above, it seems like the functionality behind guard might have another use.

Error handling in Swift 2, remember, is performed within an explicit new scope created by the do statement. Functions that can throw are evaluated in this scope and, if an error is found, must exit immediately. Sound familiar? try is essentially acting as a guard:

do{
  let h = try human()
  //do something with «h»
}
// conceptually equivalent to:
do{
  guard /*human doesn't error*/ else{ return }
  //do something with «h»
}

Case Study

There's one fly in the ointment. As we mentioned last week, human() is returning an Either enumeration that contains either a value or an ErrorType. We need to be able to unwrap that enumeration to figure out if we've erred or not.

Classically, the only way to unwrap enums like this was with a switch statement:

enum Either<T,U>{
  case This(T)
  case That(U)
}

switch myEither{
case .This(let str):
  print(str)
case .That(let error):
  //handle error
}

But Swift 2 has added the ability to add case clauses to a number of statements outside of switch. Statements like if, for...in, and — of most interest to our current conversation — guard:

func human() throws -> String
do{
  let h = try human()
  //do something with «h»
}
// conceptually equivalent to:
func human() -> Either<String,ErrorType>
do{
  guard case .This(let h) = human() else{
    return
  }
  //do something with «h»
}
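The new case-matching isn't limited to guard, either. Here's a tiny self-contained sketch of the same pattern with if (using a hypothetical Either with an Int error code):

```swift
enum Either<T,U>{
  case This(T)
  case That(U)
}

let result = Either<String,Int>.That(404)

//«if case» unwraps a single case without a full switch:
if case .That(let code) = result{
  print("error code: \(code)")
}
//> error code: 404
```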

Deferential Treatment

Another new addition to Swift 2 is the defer statement. It lets you declare some code that gets executed immediately before control exits the current scope:

func blogPost(){
  defer{
    print("deus ex machina!")
  }
  print("get too clever with examples")
  print("paint self into corner")
}

blogPost()
//> get too clever with examples
//> paint self into corner
//> deus ex machina!

The scope that triggers a defer could be anything. An if statement. The case of a switch… or even a do:

func blogPost(){
  do{
    defer{
      print("deus ex machina!")
    }
    print("paint self into corner")
  }
  print("still have to write blog")
}

blogPost()
//> paint self into corner
//> deus ex machina!
//> still have to write blog
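And as noted, it isn't limited to do scopes. Here's a quick sketch of the same trick inside an if:

```swift
func demo(){
  if true{
    defer{
      print("leaving the if")
    }
    print("inside the if")
  }
  print("after the if")
}

demo()
//> inside the if
//> leaving the if
//> after the if
```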

Interesting! Now let's take another look at catch:

do{
  try human()
  print("vegetable")
  print("mineral")
} catch{
  print("error")
}

Assuming human() throws an error, we'll never see "vegetable" or "mineral" printed. That's because as soon as human() fails, we exit the scope of the do we're in. So how does the catch get called?

It sounds rather like a defer, doesn't it? If we replace that try with a guard, and the catch with a defer, we get essentially the same behavior:

do{
  defer{
    print("error")
  }
  guard case .This(let h) = human() else{
    return
  }
  print("vegetable")
  print("mineral")
}

Building try/catch

Using this knowledge, can we build our own try/catch? If we assume that throwing functions return an Either<T,ErrorType>, the answer is yes! Though it's not quite as pretty as what we've seen so far. We'll need a temporary variable to hold the error state, for example. And because defer will capture it, it'll need to be optional. In the end, something as simple as:

do{
  let h = try human()
  print(h)
} catch{
  print(error)
}

might look like this monstrosity by the time we're finished coding it by hand:

do{
  var tmp:Either<String,ErrorType>?
  defer{
    if case .That(let error) = tmp!{
      print(error)
    }
  }
  tmp = human()
  guard case .This(let string) = tmp! else{
    return
  }
  print(string)
}

So I, for one, welcome our syntactic sugar overlords and pray we never need write the likes of this again.

But the point, of course, is not that we need to do any of this. It is that we can. The mechanisms for handling errors, buried deep in the primitive bellies of most languages, feel exceptionally close to the surface of Swift. Concepts that are called "magic" in C++ or "you-just-have-to-learn-it" in Java are knowable, buildable things in Swift. This extends from ARC through Int and Array2 apparently all the way up to try/catch.

More and more it's starting to feel like this is what it means to be "Swifty".


1: I'm willfully simplifying my example by only talking about functions, not methods, and ignoring pattern-matching in the catch clause. Everything discussed should apply, regardless. ↩︎

2: Somewhere, there's a 70,000 word blog post waiting to be written on the wonders of isUniquelyReferenced. UPDATE: Of course it's already written, and of course it's by Mike Ash. ↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Swift Exceptions are Swifty: Part 1

There's a lot of Sturm und Drang around the announcement of exceptions in Swift 2. I get it. I fought in the Java wars. I know the terror (or worse: apathy) try/catch blocks can inspire.

But remember that first time we ⌘-clicked on Int and realized "My god, it's written in Swift"? The same experience awaits us if we dig a little into the design of exceptions. They may look like a scary language-level feature, but the details are elegant. Swift 2's exceptions feel written in Swift, not ossified runtime magic. This makes them more dynamic than they might first appear (and holds the door open for more generalized forms of exception handling in the future).

throws and throw

There's no greater example of this than marking a function with throws:

func infallible(val:Int) -> String{
  return "Perfect Things"
}

func human(val:Int) throws -> String{
  if mistake{
    throw MyErrorType
  }
  return "Mostly Perfect"
}

At first blush, really weird things are going on here. infallible() is functional. It operates on inputs and returns its output. Simple.

But human() takes input, returns output — and has some third kind of dimension introduced by this throws keyword. It's no longer an in-and-out operation. In a way, throws appears to break the very definition of what a function is.

Fans of monadic error handling often point out that, by wrapping errors into the return via an Either type,⁠1 the purity of the function is maintained:

func human(val:Int) -> Either<String,NSError>{
  if mistake{ return Either(error) }
  return Either("Mostly Perfect")
}

And while this is true, the costs are obvious. Angle brackets, monads, and the cognitive overhead of wrapping and unwrapping values for both implementors and callers, alike.

An ideal solution might maintain functional purity under the covers, forcing functions to error out by returning Either. But then offer enough syntactic sugar on top to save the uninitiated (or initiated-but-uninclined) from having to reason about functional concepts like functors, binds, and what maps on what.

Would it blow your mind to learn this is essentially what Swift 2 does?⁠2

It's telling that exceptions have arrived in the language alongside multi-payload enums. This makes it trivial to create Either types without the awkward, inefficient boxing of values previously required. And it would seem exceptions take advantage of this. throws is just syntactic sugar. The following are, at least conceptually, identical:⁠3

func canFail<T>() throws -> T{}
//Conceptually equivalent to:
func canFail<T>() -> Either<T,ErrorType>{}

Which makes throw a simple return with an Either wrapper:

throw MyErrorType
//Conceptually equivalent to:
return Either<MyErrorType>

Note that even if we don't understand Either, or if generics make us break into a cold sweat, or if our knuckles turn white at the mere mention of "monads", throws, throw (and all the try/catch machinery we'll get to next week) nicely isolate this all away from us.

This is in keeping with Swift's founding principles. Back during the introduction of access modifiers, Chris Lattner told us one of Swift's design goals is to allow features to be introduced to new users incrementally.

He calls out Java as being particularly bad at this, requiring us to understand public static void main before we can write a single line of code.⁠4 Now imagine if, when learning Swift, we had to understand monads before we could handle exceptions. Our heads would never stop spinning!

So isolating away Either is a good thing for beginners. But, while the new user can eschew all access control while learning Swift, public, internal, and private are waiting there for the advanced practitioner. What about Either? Can we opt-out of the sugar Swift 2 bakes on top of exception handling and manually deal with Either?

Short answer: no. But patience is a virtue. Remember, access control didn't make it into Swift 1 until beta 4. Who knows what future betas of Swift 2 will bring?⁠5


1: See also: the Result monad, which is essentially an Either<T,U> with U predefined as an error type.↩︎

2: "Under the covers errors propagate a lot like Result<T>. There's no dynamic unwinding like exceptions." –Joe Groff ↩︎

3: "throws -> T is like -> T | ErrorType. If you don't catch an error, you implicitly return it." –Joe Groff ↩︎

4: And if that single line can throw, we have to add understanding checked exceptions to the list. ↩︎

5: "Didn't make it for WWDC, but we plan on having standard funcs for wrapping and releasing errors into an Either."–Joe Groff ↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Clean Optional Parameters

Let's extend NSError to return errors with a domain, code, and description specific to our app:

extension NSError{
  enum MyErrorCodes:Int{
    case UnknownError = 100, 
         ParseError, 
         ParameterError
  }

  //We'd like this to be a class variable, but
  //those aren't supported in Swift (yet). So we 
  //use a computed property instead:
  class var myDomain:String{ return "MyDomain" }

  class func 
   myParseError(description:String)->NSError{
    let info = 
       [NSLocalizedDescriptionKey:description]
    return self(
       domain:myDomain, 
       code:MyErrorCodes.ParseError.rawValue, 
       userInfo:info)
  }
}

This makes getting an error pretty convenient:

func parse(inout error:NSError?){
  //...
  error = NSError.myParseError("Parse failed.")
}

But if we use it many places, it might get tedious to continually type a description for the error. Especially if that description is usually "Parse failed."

We could give up on passing it as a parameter and hard-code the description into myParseError. But what if we have a very specific failure and we want to pass the reason for it up to our UI?

What we need is a way to optionally pass a custom param to our method when we have to, but fall back on a sane default when we don't. Optionally being the key word there.

So we make our param optional. And thanks to nil coalescing operators, we don't even have to (un)wrap the entire thing with if let:

class func 
 myParseError(description:String?)->NSError{
  let info = 
     [NSLocalizedDescriptionKey:
     description ?? "Parse failed."]
  return self(
     domain:myDomain, 
     code:MyErrorCodes.ParseError.rawValue, 
     userInfo:info)
}

Now, if we want a custom description, we pass it in. If we don't, we just pass nil:

NSError.myParseError("Big bada boom!") 
//> "Big bada boom!"
NSError.myParseError(nil) 
//> "Parse failed."

But we're programmers, and therefore inherently lazy. So some of us are already thinking about ways to get rid of that nil:

extension NSError{
  //...
  class func myParseError()->NSError{
    return myParseError(nil)
  }
  class func 
   myParseError(description:String?)->NSError{...}
}

But that's oh-so-ObjC. Let's not forget the marvelous gift Swift has given us in the form of Default Parameter Values. Instead of cluttering up our interface with shell methods, we can add a simple = nil to our original method's signature to give its param a default value:

extension NSError{
  //...
  class func 
   myParseError(description:String?=nil)->NSError{
    let info = 
       [NSLocalizedDescriptionKey:
       description ?? "Parse failed."]
    return self(
       domain:myDomain, 
       code:MyErrorCodes.ParseError.rawValue, 
       userInfo:info)
  }
}

Now, whenever we don't include a parameter in our call, the method defaults to a param value of nil. And everything just works!

NSError.myParseError("Big bada boom!") 
//> "Big bada boom!"
NSError.myParseError() 
//> "Parse failed."

 

Update 3/27/2015:

A few have asked on reddit and twitter if we couldn't specify our default error message directly in the method signature and if that might not be the cleaner way to implement this.

The answers are "Kinda," and "In my humble opinion, no," respectively. Certainly something like the following is tempting:

class func 
 parseError(desc:String?="Parse failed.")->NSError{
  let info = 
     [NSLocalizedDescriptionKey:desc]
  //...
}

But this is not the same as the example we lay out above. Our original example uses a default description if our argument is nil. This new example uses a default argument to set our description. It's a subtle distinction. To see the difference, consider what would happen if we passed nil to both:

//Original definition:
myParseError(nil) //> "Parse failed."
//New with default:
parseError(nil) //> nil

Is this a big deal? It depends on what you expect the method to do when given a nil value (or something that may or may not be nil). But I'd argue our original definition is more robust.

Also, setting values in our method declarations can be a slippery slope. "Parse failed" seems innocent enough, but what if our error description is a little more realistic?

class func 
 parseError(desc:String?="There was a problem parsing your file. Please check the file name and try again.")->NSError{
  let info = 
     [NSLocalizedDescriptionKey:desc]
  //...
}

Or what if our default isn't a literal? Or what if it isn't even a constant?

class func 
 myName(fullName:String=first()+last())->NSError{
  let info = 
     [NSLocalizedDescriptionKey:fullName]
  //...
}

This is definitely clever. And let's take a moment to marvel at the fact that the above is even possible in Swift! But it complects our code by mixing data (and business logic!) with our method signature. And that makes it (IMHO) less "clean".

Contrast this with our original implementation. Yes, we set a default value of nil, but if we think about it, nil is already the default value of any optional. If we simply created an "empty" string optional as the parameter's default instead, it would behave exactly the same:

class func 
 parseError(desc:String?=Optional<String>())->NSError{
  let info = 
     [NSLocalizedDescriptionKey:desc ?? "Default"]
  //...
}

We're not creating a new value by giving our parameter a default of nil. We're merely clarifying what the existing default for our parameter type is so our caller can ignore it if it wants. This keeps our parameters simple and easy to reason about.


Hit me up on twitter (@jemmons) to continue the conversation.

Stupid Disambiguation Tricks

Swift does a lot of work with inference. This prevents a lot of boilerplate and redundancy:

//Long and boring...
let url:NSURL? = NSURL(string:"http://foo.com")
self.localMethod(self.myProperty, MyEnum.SomeType)

//Better coding through inference!
let url = NSURL(string:"http://foo.com")
localMethod(myProperty, .SomeType)

But sometimes our code is ambiguous, leading Swift to infer the wrong things. For example, let's say we want an optional string variable with an initial value of nil:

//Ambiguous. What type should this be?
let optionalString = nil

Swift can't infer a type from nil; any optional type could conceivably have a value of nil. So we need to disambiguate by being more explicit or providing more context.

//Provides an explicit type:
let optionalString = Optional<String>()

//Provides context for the assignment:
let optionalString:String? = nil

Another common case involves having a property and parameter with the same name. Normally we don't have to explicitly refer to properties through self. But in this case, if we don't, Swift will assume we want to assign to the parameter instead of the property… which is illegal:

var tautology:String
init(tautology:String){
  //Bad. The parameter is read-only.
  tautology = tautology

  //Good. Explicit use of "self".
  self.tautology = tautology
}

But there are other, weirder situations where disambiguation is needed:

struct MyThing{
  func map(transform:SomeClosure){
    //Wrong number of arguments?!
    map(self, transform);
  }
}

This looks legit. We have a method called map that's using Swift's top-level map function in its implementation. Why the sad compiler?

It turns out when Swift sees map in the body of our method, it assumes we're talking about self.map and tries to call itself recursively (with the wrong parameters, hence the error).

To work around this, we need to let the compiler know we want to use standard library's map by prepending the call with Swift.:

struct MyThing{
  func map(transform:SomeClosure){
    Swift.map(self, transform);
  }
}

Which works great for the standard library functions. But what if we want to call our own top-level routine:

func foo(){ ... }
struct MyThing{
  func foo(){
    //Bad. Calls self.foo recursively
    foo() 
  }
}

No problem. There's nothing magic about that Swift. specifier. It's just the name of the module the standard library happens to be defined in. foo is defined in our own module, so we prepend its name to the call instead:1

//In module "MyProject":
func foo(){ ... }
struct MyThing{
  func foo(){
    MyProject.foo()
  }
}

Which works great. Until…

What happens when you define a type with the same name as your module that contains a method with the same name as a top-level function that you want to call from within its implementation?2

//In module "MyProject":
func foo(){ ... }
struct MyProject{
  func foo(){
    //Uh-oh! Recursive again.
    MyProject.foo()
  }
}

Kablamoo! Because MyProject is also the name of our struct, Swift is trying to call the foo method on it instead of our module, resulting in another recursive loop.3

So how do we tell Swift we're trying to specify a module name, not a type?

Um… I don't think we can. Or, at least, I haven't been able to find a way, yet. If you know of something, tell me about it!

In the meantime, I work around this problem using private functions that wrap their ambiguously-named counterparts:

//In module "MyProject":
func foo(){ ... }
private func fooAlias(){ foo() }

struct MyProject{
  func foo(){
    fooAlias()
  }
}

What can I say? Not everything in Swift is all rainbows and monads. Its implicit-and-morally-opposed-to-external-tweaking namespaces and pathologically-convinced-we-want-the-most-local-thing-always inference engine are perfect examples of tools that can sometimes get us backed into an ugly corner.

But those cases are: rare; usually addressable with additional context; and, for all the concision, consistency, and convenience we get in exchange, well worth the tradeoff.


1: Normally our module name is the same as our project name. But if we want to look it up, we can check our Build Settings for "Project Module Name" under the "Packaging" section.↩︎

2: The asymmetry of programming: it only takes 68 bytes to implement what it takes 190 bytes to explain.↩︎

3: The actual error is "Missing argument for parameter #1 in call" because methods are, in fact, curried functions in Swift.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Nil Coalescing Operator

Swift has a fun little formulation called the nil coalescing operator. Many languages have similar constructs, but in Swift it's written as ?? and you use it like this:

let result = optionalValue ?? "default value"

It essentially says, "Unwrap the optional. If it's a value, use it. If it's nil, use the supplied value, instead." It's really just a more concise way to say the following:

//Long-winded version of the above:
var result = "default value"
if let someValue = optionalValue{
  result = someValue
}

It's important to note, though, that the right-hand side of the operator isn't limited to being a simple literal or constant. In fact, it can be any sort of expression:

let result = optionalValue ?? calculateDefault()

The only limitation is that it must ultimately evaluate to a value that has the same type as that represented by the optional. This, for example, is an error:

let stringOptional:String? = nil
let result = stringOptional ?? 42
//BOOM! Optional is a String but 42 is an Int

One fun consequence of ?? taking an expression for its right-hand argument is that you can chain nil coalescing operators together:

let result = maybe ?? possibly ?? "default"

Another thing we need to know about the nil coalescing operator is that, like Swift's logical "and" and "or" operators, ?? short-circuits its evaluation. That is, if the optional isn't nil, it doesn't need to know the value of the given expression, so it doesn't evaluate it. This means we can put expensive operations there without worrying they'll be called unnecessarily:

let result = cache ?? calculateAndCacheValue()
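We can see the short-circuiting for ourselves with a quick sketch (names hypothetical):

```swift
var didCompute = false

func expensiveDefault() -> String{
  //Flag that the expensive work actually ran.
  didCompute = true
  return "default"
}

let cached:String? = "cached value"
let result = cached ?? expensiveDefault()
//> result is "cached value", and didCompute is still false —
//> expensiveDefault() was never called.
```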

The nil coalescing operator wasn't included in the first beta of Swift (it was added in beta 5), and as such wasn't in the original edition of The Swift Programming Language. As a result, it's escaped the notice of many of us.

But it's an incredibly important tool for increasing the readability of our Optional code. And while all tools focused on concision can, of course, be taken too far1, we owe it to ourselves (and those reading our subroutines) to keep ?? close at hand.


1: I recently found myself writing:

x = (value?["on"] as? NSNumber)?.boolValue ?? false

I don't yet feel guilty about it, but imagine I will.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Literal Enumerations

Let's say we want to dig a value out of pile of JSON. For example, what if we want to pull Bobbin's name out of the following:

{
  "characters":{
    "LucasArts":[
      {"name":"Guybrush Threepwood"},
      {"name":"Bobbin Threadbare"}, 
      {"name":"Manny Calavera"} 
    ]
  }
}

We might represent the path1 used to get to this data with an array like so:

["characters", "LucasArts", 1, "name"]

The problem is, JSON freely mixes dictionaries with arrays. So some of our path elements are Strings, while some are Int indexes. Swift arrays are homogeneous — that is, everything in them has to have the same type. We can't mix-and-match Strings and Ints in a type-safe Array.

We've talked before about how Swift enumerations are sum types that can represent one of a number of different types. So we can create an enum that represents both String-based keys and Int-based indexes, and use it as the type of our array:

enum JSONSubscript{
  case Key(String), Index(Int)
}

let path:[JSONSubscript] = [.Key("characters"), .Key("LucasArts"), .Index(1), .Key("name")]

Which works and has a nice sort of DSL feel to it. But if we use more than a handful of enumerations in our code, it's dangerously easy for us to be overwhelmed by a flood of .This("thing") and .That("thing"). Can we do better?

An oft-overlooked feature of Swift enumerations is that we can define initializers for them, just like classes and structs:

enum JSONSubscript{
  case Key(String), Index(Int)
  init(value:String){
    self = .Key(value)
  }
  init(value:Int){
    self = .Index(value)
  }
}

And if we can write initializers, we can conform to ...LiteralConvertible protocols.

Mattt Thompson has written a fantastic explanation of literal convertibles that we should all read. But to summarize, a type that conforms to one of the literal convertible protocols can use the given literal to initialize itself.

To put it another way, we usually think of 5 as being a literal representation of an Int. But if JSONSubscript were to implement the IntegerLiteralConvertible protocol, 5 could also be a literal representation of JSONSubscript.

All it takes is a few inits… and some typealiases in the case of StringLiteralConvertible:

enum JSONSubscript : IntegerLiteralConvertible, StringLiteralConvertible{
  case Key(String), Index(Int)

  //for IntegerLiteralConvertible:
  init(integerLiteral value: IntegerLiteralType){
    self = .Index(value)
  }

  //for StringLiteralConvertible
  init(unicodeScalarLiteral value:String){
    self = .Key(value)
  }
  init(extendedGraphemeClusterLiteral val:String){
    self = .Key(val)
  }  
  init(stringLiteral value:StringLiteralType){
    self = .Key(value)
  }
}

Now, I'll be the first to concede the implementation of StringLiteralConvertible is pretty verbose2

[UPDATE: thanks to a little help from the message boards, the above is now much less long-winded. I hereby retract my claims of verbosity!]

…But in exchange, look what our path array has become:

let path:[JSONSubscript] = ["characters", "LucasArts", 1, "name"]

Simple, beautiful data without annotation or distraction.


1: By "path" here I'm talking about something like JSONPath — or XPath in the world of XML.↩︎

2: They're aware of it.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Custom Switch Matchers

We've made a lot of hay out of Swift's incredibly flexible switch statement on this blog. As we've seen, its pattern matching is particularly powerful when dealing with tuples:

switch (xCoordinate, yCoordinate){
case (0, 0):
  println("origin")
case let (0, y):
  println("No X, \(y) high.")
case let (x, 0):
  println("\(x) far, no Y.")
case (0...5, 5...10):
  println("In first quadrant")
case let (x, _):
  println("Who cares about Y? X is \(x)")
}

But we don't have to limit ourselves to tuples or simple value types when using a switch. With binding and the addition of a where clause, we can match any sort of arbitrary construct:

class Person{
  var name:String
  //...
}

switch myPersonObject{
case let x where x.name == "Guybrush":
  println("You fight like a cow.")
case let x where x.name == "Manny":
  println("One year later...")
case let x where x.name == "Vella":
  println("I'm going to kill Mog Chothra!")
default:
  println("I don't know who that is.")
}

Which is wonderful! At the same time… well, it's hard to ignore the ton of repetition going on. In each statement we're binding a value to a name, fetching a property from it, and finally comparing the property to a string. This final comparison is all that really matters; the rest is just boilerplate.

Wouldn't it be great if we could teach switch new matchers that knew how to deal with our custom objects? Believe it or not, there's an operator for that.

It's called the pattern match operator, sometimes known by its (less googleable) syntax: ~=. And switch statements use it behind the scenes to compare values to their cases' matching patterns.

All of which is to say, these two examples are (at least conceptually) identical:

switch myint{
case 0...10:
  println("first half")
case 11...20:
  println("second half")
default:
  println("out of range")
}

if 0...10 ~= myint{
  println("first half")
} else if 11...20 ~= myint{
  println("second half")
} else{
  println("out of range")
}

We can reasonably argue all its various merits and trade-offs, but this is a pretty clear-cut case of how operator overloading can open up a whole new world of expressiveness to us. For we can simply overload ~= like so:

func ~=(pattern:String, value:Person)->Bool{
  return pattern == value.name
}

And suddenly it's possible to refactor our switch statement down to:

switch myPersonObject{
case "Guybrush":
  println("You fight like a cow.")
case "Manny":
  println("One year later...")
case "Vella":
  println("I'm going to kill Mog Chothra!")
default:
  println("I don't know who that is.")
}

This is yet another example of Swift employing polymorphism in an unexpected way to make the type system do work for us. Work that results in very clean, concise, and — dare I say it — declarative code.


Hit me up on twitter (@jemmons) to continue the conversation.