Figure

A blog about Swift and iOS development.

Mixing Constant and Literal Strings

Say we're writing an HTTP library. We're going to want a way to deal with headers.

func addHeader(_ header: String, value: String) {
  //...
}

Take a look at the signature of addHeader. On the surface, there's no problem here. The spec roughly defines headers as a list of key/value pairs, with both the key and the value being text. Seems pretty straightforward:

addHeader("Contant-Type", value: "text/html")

But it's not the wild west. HTTP headers have a number of well-known keys. And some, like "Content-Type" here, are used over and over again. And if we look above, we'll see I mistyped it.

No problem. We'll define a constant to use instead:

let kContentType = "Content-Type"
addHeader(kContentType, value: "text/html")

A great solution for the problem at hand. But we haven't dealt with the root issue: the interface is still inherently stringly typed. Nothing actually enforces the use of our constant, so…

//Me in a different file.
//After three years.
//And a bottle of Buffalo Trace.

addHeader("cantnt-tipy", value: "max/headroom")

Right. This is why we have enums.

enum HeaderKey {
  case accept
  case contentType
  case userAgent
  //...
}

func addHeader(_ header: HeaderKey, value: String) {
  //...
}

addHeader(.contentType, value: "max/headroom")

Great! Clean and very swifty. We could stop here…

Except that well-known headers aren't the whole story. Custom headers are very much a thing.

addHeader("X-Coke-Type", value: "New™ Coke®")
//🛑 cannot convert value of type 'String' 
//   to expected argument type 'HeaderKey'

How do we make room in our enum for unexpected and unknowable keys like this? We'll capture them in an associated value:

enum HeaderKey {
  case accept, contentType, userAgent
  case other(String)
}

addHeader(.contentType, value: "max/headroom")
addHeader(.other("X-Coke-Type"),
          value: "New™ Coke®")

And now we have an interesting decision to make. Do we want to enforce safe, well-known constants and provide an option to specify arbitrary strings? Or do we want to allow arbitrary strings and provide an option to specify safe, well-known constants?

Above I've chosen the former. But if the situation calls for the latter, we could easily make HeaderKey conform to ExpressibleByStringLiteral:

extension HeaderKey: ExpressibleByStringLiteral {
  public init(stringLiteral value: String) {
    self = .other(value)
  }
  //...
}

Then we could write our custom headers without .other:

addHeader("X-Coke-Type", value: "New™ Coke®")

Now, of course, there's nothing to stop us from fat-fingering "Contant-Type" as a string literal. But HeaderKey is still there in the signature and we can use .contentType if we choose.

Which of these approaches is correct? Neither and both — it's a trade off that depends on the use case. For our HTTP header example, though, it feels right to prioritize enumeration over custom strings.

Speaking of conformance to string protocols, so far we've been focusing on cleaning up the call site. But remember, ultimately, headers are text. So when we pass them to our networking libraries et al., we'll need to treat them like strings. That's what CustomStringConvertible is for:

extension HeaderKey: CustomStringConvertible {
  public var description: String {
    switch self {
    case .accept: return "Accept"
    case .contentType: return "Content-Type"
    case .userAgent: return "User-Agent"
    case .other(let s): return s
    }
  }
}

At this point, we might ask "Why not RawRepresentable?" It's true, RawRepresentable does almost exactly what we want. But it carries with it the extra overhead of initializing with a raw value, which we'll never use.1 And String(describing:) is the canonical way "to convert an instance of any type to its preferred representation as a string."

func addHeader(_ header: HeaderKey, value: String) {
  let headerText = String(describing: header)
  libraryExpectingAString(headerText)
}
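Putting the pieces together, here's the whole HeaderKey pattern from above in one self-contained sketch:

```swift
enum HeaderKey: CustomStringConvertible, ExpressibleByStringLiteral {
  case accept, contentType, userAgent
  case other(String)

  // ExpressibleByStringLiteral: unknown keys become .other.
  init(stringLiteral value: String) {
    self = .other(value)
  }

  // CustomStringConvertible: turn cases back into header text.
  var description: String {
    switch self {
    case .accept: return "Accept"
    case .contentType: return "Content-Type"
    case .userAgent: return "User-Agent"
    case .other(let s): return s
    }
  }
}

String(describing: HeaderKey.contentType)  //> "Content-Type"
let custom: HeaderKey = "X-Coke-Type"
String(describing: custom)                 //> "X-Coke-Type"
```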

Very rarely, in life or code, is any list absolute or certain. Situations come up all the time where we need an "escape hatch" from our carefully calculated set of pre-defined options.

When that happens, we don't need to throw our hands up in despair and make everything a String. Enumerations (combined with CustomStringConvertible and maybe even ExpressibleByStringLiteral) let us work around the 20% case while not jeopardizing the safety and convenience of the 80% out there.


1: Still, RawRepresentable is way cooler than it's often given credit for, and those interested should read Ole Begemann's amazing write up on manually implementing the protocol.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Readable Swift: The Curious Case of Not

Where's the Not?

Before a thing can be read, it must be seen. And this is a problem with Swift's1 "not" operator: !. It's comparatively thin so it doesn't leave a lot of ink on the page. And unlike a . or ,, it's also tall, so it isn't able to use its negative space to stake out territory.

Compare:

foo.bar.baz
foo!bar!baz

(.many, .unique, .books)
(!many, !unique, !books)

The . pops out as a "nonletter". The ! blends together with whatever's around it. Which makes it non-ideal for a "reevaluate this entire expression as the opposite of whatever it was" operator.

Surprisingly, the rise of boutique "programmer's fonts" hasn't really helped us out here. Everything from Courier Prime to Source Code Pro2 renders ! more or less the same: an undifferentiated thin line. Even Hoefler&Co's Operator, which goes out of its way to be ugly in the name of readability, toes the line when it comes to the humble !.

But where $600 typefaces fail us, we can oft find salvation in Unicode and emoji:

prefix operator ❗️

prefix public func ❗️(a: Bool) -> Bool {
  return !a
}

I'm not going to claim this hasn't been a controversial change for some of my teams. But it's indisputable it stands out:

foo!bar!baz
foo❗️bar❗️baz

(!many, !unique, !books)
(❗️many, ❗️unique, ❗️books)

Of course, a ❗️ is a little more difficult to type than a "!". But that's not a bug, it's a feature! Because…

The Only Winning Move is Not to Not

Let's look at a simple conditional.

if homer.isLickingToads { ... }

This is the very definition of readable. Why? Because (English-speaking) brains are highly adapted to parse English, and this reads like English: "If Homer is licking toads…".

Now let's look at its negation:

if ❗️homer.isLickingToads { ... }

Hmm. This is a little less readable3 because it doesn't parse quite right. "If not Homer is licking toads…" ultimately makes sense, but only because we disengage our language center and engage our logic circuits to eval it. This creates friction.

Now, it's really important to note we write all our code in "logic mode". So typing out ❗️homer.isLickingToads is the most natural, easiest thing to do at coding time — even though it's (slightly) more difficult to read afterwards.

This asymmetry is important to call out because it's the opposite of useful. We write code only once but read it many times thereafter. If we have to introduce a burden, we want to shift it to the writer, not the reader.4 So we would like this to read:

if homer.isNotLickingToads { ... }

Our linguistic brains parse this just fine. And that troublesome ❗️ has vanished altogether!

On the down side, our logical brains now need to write more code. How much more? Just:

extension Homer {
  var isNotLickingToads: Bool { 
    return ❗️isLickingToads 
  }
}

That seems a totally worthwhile tradeoff.


1: Along with every other common programming language… though interestingly this is one of those cases where C-based languages diverged from their ALGOL roots. ALGOL and its non-C derivatives use NOT.↩︎

2: Which is the best and is totally what you should be using.↩︎

3: And if you feel there's nothing wrong with this, please substitute your own arbitrary number of parentheses and ||s until it becomes scary.↩︎

4: It's probably also worth pointing out that, until we make this shift, code review is either Sisyphean or useless.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Complexity and Strategy

So much good stuff in this post from Microsoft's CVP of Office Development, Terry Crowley. The excerpts speak for themselves:

If essential complexity growth is inevitable, you want to do everything you can to reduce ongoing accidental or unnecessary complexity.

And:

What I found is that advocates for these new technologies tended to confuse the productivity benefits of working on a small code base with the benefits of the new technology itself — efforts using a new technology inherently start small so the benefits get conflated.

And:

The dynamic you see with especially long-lived code bases like Office is that the amount of framework code becomes dominated over time by the amount of application code and in fact frameworks over time get absorbed into the overall code base…

This means that you eventually pay for all that code that lifted your initial productivity. So “free code” tends to be “free as in puppy” rather than “free as in beer”.


Hit me up on twitter (@jemmons) to continue the conversation.

Two Papers on Generic Programming

I understand the syntactic application of generics as a language feature, but Ole Begemann's post on protocols convinced me there's still a lot of gold to be mined in their conceptual underpinnings.

Towards that, I've found these two papers particularly enlightening:

An example chunk of wisdom from Fundamentals (lightly edited for conformance with Swift syntax):

The critical insight which produced generic programming is that highly reusable components must be programmed assuming a minimal collection of [protocols], and that the [protocols] used must match as wide a variety of concrete program structures as possible.

Thus, successful production of a generic component is not simply a matter of identifying the minimal requirements of an arbitrary type or algorithm – it requires identifying the common requirements of a broad collection of similar components.
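In Swift terms, that wisdom cashes out as constraining generic code to the fewest protocols it genuinely needs. A toy illustration of mine (not from the papers): by requiring only Sequence and an Equatable element, this one function matches arrays, slices, ranges, strings, and more.

```swift
// Minimal requirements: a Sequence whose elements can be compared.
func count<S: Sequence>(of target: S.Element, in sequence: S) -> Int
    where S.Element: Equatable {
  var total = 0
  for element in sequence where element == target {
    total += 1
  }
  return total
}

count(of: 2, in: [1, 2, 2, 3])  //> 2
count(of: "l", in: "hello")     //> 2
```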


1: Presented at the "First International Joint Conference of the International Symposium on Symbolic and Algebraic Computations and Applied Algebra, Algebraic Algorithms, and Error Correcting Codes". Ladies and gentlemen, that is how you name a goddamned conference! ↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Three Quick Tips

Three quick tips before the new year.

Stop Using &&

Everyone else has probably arrived at this already, but Swift 3's rejiggering of the where clause affects more than guard's syntax. It also eliminates the need for && operators in ifs:

// Equivalent in Swift 3:
if this && that { ... }
if this, that { ... }

This clearly isn't a big deal when evaluating two simple Bools. But when working with expressions (especially those involving other logical operators), we have to consider things like grouping and order of operations and I usually give up and put parentheses around everything just to be safe:

// Maybe a regex or something?
if (x > y) && (xyzzy(q) || q != 0) { ... }

// Two expressions, no operator ambiguity.
if x > y, xyzzy(q) || q != 0 { ... }

This syntax also has the knock-on benefit of making it clear where to break lines.1

// Use operator to end lines?
if this &&
   that &&
   theOther {
  //...
}

// Or begin the new lines?
if this
&& that
&& theOther {
  //...
}

// Nevermind.
if this,
   that,
   theOther {
  //...
}

Give Modules Distinct Names

Swift's module-based approach to implicit namespacing works great most of the time. Even if I do something daft like implement my own String type:

// In MyModule:
public struct String { ... }

I can use module names to disambiguate what would otherwise be a collision with the standard library:

import MyModule

let myString = MyModule.String("foo")
let swiftString = Swift.String("foo")

But, what if MyModule exports a public type also named MyModule?

// In MyModule:
public struct String { ... }
public struct MyModule { ... }

Oops! Now, when we import MyModule, it binds the symbol MyModule to our struct, not the module. Swift now thinks we're asking our struct for a member called String which, of course, doesn't exist:

import MyModule

let myString = MyModule.String("foo")
//> Value of type 'MyModule' has no member 'String'

Even if we're careful not to clobber the stdlib, there's no telling what types other 3rd party modules (or future versions of Swift) might introduce. Collisions are inevitable. Best to plan for them ahead of time by not doubling up on module and type names.

Dealing with Non-distinct Modules

Unfortunately, naming our Swift modules after their primary class seems to be something of a(n anti-)pattern in the community. Here's a quick hack for when we need to work around collisions between modules with non-distinct names:

  1. Imports are scoped to file, so create a new file and import only one of the conflicting modules.
  2. Because only one module is imported, references to the conflicting type are now unambiguous. Use this to create a typealias for it.
  3. Repeat this procedure for the other troublesome module.
  4. Use the aliases to avoid conflicts in files where both modules are imported.

So if we have two modules, Foo and Bar, both of which declare a type Thing (and another type with the same name as the module, preventing standard disambiguation), we could:

// In "FooTypes.swift"
import Foo
typealias FooThing = Thing

// In "BarTypes.swift"
import Bar
typealias BarThing = Thing

// In "MyWidget.swift"
import Foo
import Bar
//Use aliases instead of Foo.Thing and Bar.Thing
let thing: FooThing = BarThing().someValue

Out with the old, in with the new. Everyone have a safe and productive 2017!


1: Yes. This is a real argument I've had. More than once.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Back-to-Back Begemann!

This week, Ole Begemann talks about introducing an operator to aid interpolating optionals. Which is cool, but then he drops this tidbit:

The @autoclosure construct ensures that the right operand is only evaluated when needed, i.e. when the optional is nil.

I'd never thought of using @autoclosure to enforce this sort of "short circuit" behavior before. Very clever!
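Here's a sketch of how such an operator might use @autoclosure (the ??? name and the details are illustrative, not necessarily Ole's exact implementation):

```swift
infix operator ???: NilCoalescingPrecedence

// The @autoclosure wraps the right operand so it's only
// evaluated if the left side turns out to be nil.
func ???<T>(optional: T?, defaultValue: @autoclosure () -> String) -> String {
  switch optional {
  case let value?: return String(describing: value)
  case nil: return defaultValue()
  }
}

let width: Int? = 5
"width: \(width ??? "unknown")"    //> "width: 5"
let height: Int? = nil
"height: \(height ??? "unknown")"  //> "height: unknown"
```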


Hit me up on twitter (@jemmons) to continue the conversation.

First and Rest

Oftentimes (especially when writing recursive functions) we want to take an array and split it into its first and remaining elements. The most obvious way of doing this in Swift is pretty clunky:

//Ugly!
func loop(_ c: [Int]) {
  guard !c.isEmpty else {
    return
  }
  var rest = c
  let first = rest.removeFirst()

  print(first)
  loop(rest)
}

I hate pretty much everything about this.

First, removeFirst() requires we make sure the array isn't empty. But it's non-obvious we're supposed to care about that. In fact, we probably shouldn't care. We want to focus on first, but our code is forcing us to think about the mechanism we're using to retrieve it instead.

Then there's the assignment of c to rest just to make it mutable. And the fact that rest is named "rest" even though, initially, it contains the entire array. It's not clear at all that removeFirst() returns the value it removes, so the value of first is something of a mystery. And, when all is finally said and done, rest is mutable even though it has no reason to be.

All in all, it feels too verbose, and way too imperative. Nothing apart from the names first and rest gives any hint about what's going on here. Thankfully, there's another way to skin this cat:

//Pretty!
func loop(_ c: [Int]) {
  let (first, rest) = (c.first, c.dropFirst())
  guard let someFirst = first else {
    return
  }

  print(someFirst)
  loop(Array(rest))
}

This is better on a number of levels. Nothing is ever mutable, for one. And c can now be empty, making first optional — which lets us focus on our value and its existence rather than implementation details of our array. And first and dropFirst() are straightforward in their naming and behavior.1

There's only one thing still sticking in my craw. first is used for one hot minute before being superseded by the unwrapped someFirst. Depending on our sensibilities, this might be something the nascent "unwrap" proposal could help with. In the meantime, though, it looks like a job for case:2

//Concise!
func loop(_ c: [Int]) {
  guard case let (first?, rest) = (c.first, c.dropFirst()) else {
    return
  }

  print(first)
  loop(Array(rest))
}

And there we go! Clear, concise assignment through tuples and just-in-time binding thanks to case.
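As an aside, a generic version that accepts any Collection, letting us recurse on the slice from dropFirst() without converting back to Array, might be sketched like so (assuming a Swift version where SubSequence is itself a Collection):

```swift
func loop<C: Collection>(_ c: C) where C.Element == Int {
  guard case let (first?, rest) = (c.first, c.dropFirst()) else {
    return
  }

  print(first)
  loop(rest)  // rest is a SubSequence; no Array() needed
}

loop([1, 2, 3])  // prints 1, 2, 3 on separate lines
```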

Back when we left ObjC, many of us asked ourselves, "How much more expressive can Swift really be?" My answer is, "Very; and increasingly so."


1: Well, almost. Note that dropFirst() returns an ArraySlice. Because loop expects an Array, we have to convert rest before passing it to loop(). Reworking loop() to take a generic Collection would get around this — at the expense of being less blog-friendly.↩︎

2: For those unclear on the work first? is doing here, this post on optional pattern matching covers the basics.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Emptiness

Great thinking over at Khanlou.com about types that implement isEmpty and the weird tri-state they can be in if Optional. I also have a self-imposed rule forbidding optional Arrays and Dictionarys.

I should probably extend this practice to Strings, but treating "" differently from " " gives me the heebie-jeebies. So I have a maybeBlank() -> String? extension on String that returns the string, or nil if it's empty or whitespace.
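The post doesn't show maybeBlank()'s implementation, but a minimal sketch might look like:

```swift
import Foundation

extension String {
  // nil if the string is empty or nothing but whitespace;
  // otherwise the original string, untouched.
  func maybeBlank() -> String? {
    let trimmed = trimmingCharacters(in: .whitespacesAndNewlines)
    return trimmed.isEmpty ? nil : self
  }
}

"".maybeBlank()      //> nil
"   ".maybeBlank()   //> nil
" a ".maybeBlank()   //> " a "
```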

This article really calls out how insane that is.


Hit me up on twitter (@jemmons) to continue the conversation.

Testing UserDefaults

As with all dependencies, we should reduce UserDefaults to an abstract protocol, inject it into the classes that require it, and thereby decouple ourselves from implementation details and increase the testability of our code by introducing seams.

But I'll be honest, the UserDefaults API is so simple and pervasive that standard dependency management feels like overkill. I always end up calling it directly from my code. Perhaps you do the same?

If so, we've probably encountered the same problem: without the ability to inject mocks, testing code that makes use of UserDefaults can be a pain. Any time we use our test device (including running tests on it!) we potentially change the state of its persisted settings.

So all our tests have to initialize settings to some known state before running. Which introduces a lot of ugly boilerplate and makes it difficult for us to test uninitialized "default" states.

Thankfully, UserDefaults is pretty flexible and we can code our way out of this hole.

Domain Names

UserDefaults is built around five1 conceptual "domains":

  • Volatile Domain
  • Persistent Domain
  • Registration Domain
  • Argument Domain
  • Global Domain

Each domain can hold any number of keys and values. If two or more domains have values for the same key, the value of the higher domain overrides values of the lower ones.

We're probably all familiar with this concept in the context of the registration and persistent domains. The registration domain holds the "default" values we set up using register(defaults:) on app launch. The persistent domain holds the user values we persist using set(_:forKey:).2 And we know that if we register a default then persist a value from the user, it's the persisted value we'll get back from UserDefaults.
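A quick illustration of that layering (the "sound" key here is hypothetical):

```swift
import Foundation

let defaults = UserDefaults.standard
defaults.removeObject(forKey: "sound")  // start this demo clean

// Registration domain: in-memory defaults, set once at launch.
defaults.register(defaults: ["sound": "on"])
defaults.string(forKey: "sound")  //> "on"

// Persistent domain: a user-set value overrides the registered one.
defaults.set("off", forKey: "sound")
defaults.string(forKey: "sound")  //> "off"
```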

But the defaults we registered are still there in the registration domain. If we could somehow peel back the persistent domain, we could test from the "base state" of our app without any of the goofy stuff that might have been persisted by other tests or users.

A Clean Slate

UserDefaults has a mechanism for this: setPersistentDomain(_:forName:). The documentation helpfully states that "name" here "should be equal to your application's bundle identifier." So clearing out our UserDefaults is as simple as putting something like this in our setUp():

override func setUp() {
  let bundle = Bundle.main
  let defaults = UserDefaults.standard
  guard let name = bundle.bundleIdentifier else {
    fatalError( ... )
  }
  defaults.setPersistentDomain([:], forName: name)
}

And this works. But there are two problems. First, it blows away the persisted preferences of our app. If we're running tests on our carry device, it can be a pain to have our app reset every time we test.

Second, I personally hate having setUp() and tearDown() methods in my tests. Code in setUp() feels so far removed from where it's actually used, and most of my tests require some amount of custom setup that can't be reduced to a single function anyway.

So here's what I use instead. I've been very happy with it:

extension UserDefaults {
  static func blankDefaultsWhile(handler: () -> Void) {
    // Extensions can't hold stored properties, so we grab
    // our references inside the function instead.
    let defs = UserDefaults.standard
    guard let name = Bundle.main.bundleIdentifier else {
      fatalError("Couldn't find bundle ID.")
    }
    let old = defs.persistentDomain(forName: name)
    defer {
      defs.setPersistentDomain(old ?? [:],
                               forName: name)
    }

    defs.removePersistentDomain(forName: name)
    handler()
  }
}

Then my tests look something like:

class MyTests: XCTestCase {
  func testThing() {
    // Defaults can be full of junk.
    UserDefaults.blankDefaultsWhile {
      // Some tests that expect clean defaults.
      // They can also scribble all over defaults
      // with test-specific values.
    }
    // Defaults are back to their pre-test state.
  }
}

Remember, as of Swift 3, closures are non-escaping by default. So blankDefaultsWhile's trailing closure doesn't need a @noescape annotation to avoid the self tax.
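For a compact illustration of that self tax (the Toggler type here is hypothetical):

```swift
class Toggler {
  var isOn = false
  var pending: [() -> Void] = []

  // Non-escaping (the default): implicit self is fine inside the closure.
  func runNow(_ work: () -> Void) { work() }

  // Escaping closures must be annotated, and captures
  // inside them pay the explicit-self tax.
  func runLater(_ work: @escaping () -> Void) { pending.append(work) }

  func demo() {
    runNow { isOn = true }          // no `self.` needed
    runLater { self.isOn = false }  // `self.` required
  }
}
```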


1: There's actually a sixth "Host Domain" that scopes preferences to host name. But this is (even more) rarely used, and only accessible through Core Foundation.↩︎

2: As for the rest, the global domain holds system-wide preferences for all apps. The argument domain holds preferences we pass when launching apps from the command line (or via "Arguments Passed on Launch" in our Xcode project's scheme). The volatile domain is more or less equivalent to the persistent domain, except its values don't get saved to disk, and are thus lost every time an app is quit.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Hairlines

In graphic design, a “hairline” is a line drawn with the thinnest stroke possible on a given output device. If we're drawing with Sharpies and a ruler, our hairlines will be comparatively thick. If we're printing on LaserJets, they'll be pretty darn thin.

On iOS devices, hairlines should be exactly one pixel wide. But iOS doesn't let us draw in pixels, it asks us to draw in points. And points don't uniformly convert to pixels across all platforms. And even if they did, Core Graphics's coordinate system lines up on the boundaries between pixels, not the pixels themselves. So getting hairlines right can be a little tricky.

Going Off the Grid

Check out the following 5x5 point grid. Let's say we want to draw a 1-point thick line between the two blue dots — that is, between (0,2) and (5,2).

We might expect something like:

But what we'll actually get (assuming, for the moment, that pixels and points are equivalent) is:

What happened?

It turns out our coordinate system (the black lines in our image) traces the space between points, not the points themselves. So by drawing right along the path at Y = 2, we're asking for a line to be drawn between points.

That works just fine, mathematically. But in the physical world, our line will be represented by square pixels that are either on or off. Anything “in between” gets averaged out with anti-aliasing. So instead of a 1-point blue line, we get a 2-point half-blue line.

If we want our perfect thin blue line, we have to draw it down the middle of a point, not in-between. For example, where Y = 1.5 or Y = 2.5:

More generally, to have a perfect line, we have to either round up or round down to the nearest mid-point between grid lines.

Or, graphically, for any Y in the shaded area, we need to either round it up to the top dashed line or round it down to the bottom dashed line to draw a perfect non-antialiased line.

How do we choose if we want to draw above or below our given coordinate? It depends on the situation. If we're trying to draw a line at the very top of a clipping view, we'll want to draw slightly below the Y we give it. Otherwise it will clip and not be visible. The same goes in reverse for drawing at the bottom of a view.

If we call this preference to draw above or below our point the line's bias we can create a function that draws perfect 1-point-wide lines like so:

func singlePointLine(at y: CGFloat, 
                     in rect: CGRect, 
                     topBias: Bool = true) {
  let adjustedY = round(y) + (topBias ? -0.5 : 0.5)
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: 1.0)
} 

func makeLine(at y: CGFloat, 
              in rect: CGRect) -> UIBezierPath {
  precondition((rect.minY...rect.maxY).contains(y))
  let line = UIBezierPath()
  line.move(to: CGPoint(x: rect.minX, y: y))
  line.addLine(to: CGPoint(x: rect.maxX, y: y))
  return line
}

func strokePath(_ path: UIBezierPath, 
                width: CGFloat) {
  path.lineWidth = width
  UIColor.blue.setStroke()
  path.stroke()
}

My God, It’s Full of Pixels

Sadly, we don't want to draw perfect single point lines. We want to draw perfect single pixel lines. On non-retina devices, those are the same thing. But on everything else, each point is made up of several pixels.

“Plus” model iPhones, for example, actually have three pixels per point, represented by the yellow lines here.

So rather than drawing between the black point-lines of our previous illustrations, we want to draw between the yellow pixel-lines:

Which will give us a line at Y = 1.8333 or Y = 2.1666, depending on the bias.

That means we can't use our simple round() function anymore (as it only rounds to whole numbers). We have to write our own function that rounds up or down to the nearest fraction depending on the bias we give it:

func round(from: CGFloat, 
           fraction: CGFloat, 
           down: Bool = true) -> CGFloat {
  let expanded = from / fraction
  let rounded = (down ? floor : ceil)(expanded)
  return rounded * fraction
}
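To sanity-check the arithmetic (rewritten on Double here so the sketch runs standalone; the logic is identical to the CGFloat version above):

```swift
import Foundation

// Same rounding logic as above, on Double.
func round(from: Double, fraction: Double, down: Bool = true) -> Double {
  let expanded = from / fraction
  let rounded = (down ? floor : ceil)(expanded)
  return rounded * fraction
}

// Rounding y = 2.1 down/up to the nearest third of a point:
round(from: 2.1, fraction: 1.0 / 3.0, down: true)   //> 2.0
round(from: 2.1, fraction: 1.0 / 3.0, down: false)  //> 2.333…
```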

Then we just need to know what fraction of a point represents a pixel. We do this using the scale of our main screen:

var pixelUnit: CGFloat {
  return 1.0 / UIScreen.main.scale
}

Then we can draw a pixel-thick line rounded to the nearest pixel instead of the nearest point:

//So close...
func almostSinglePixelLine(at y: CGFloat, 
                           in rect: CGRect, 
                           topBias: Bool = true) {
  let adjustedY = round(from: y, 
                        fraction: pixelUnit, 
                        down: topBias)
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: pixelUnit)
}

Which is really close to what we want. But this rounds us to the nearest fraction of a point corresponding to a pixel. Or, to put it another way, it snaps us to the yellow lines in our illustration, which actually run between pixels. If we want to avoid the anti-aliasing shown here, we need to snap to the exact middle of the pixel. That is, halfway between the yellow lines.

One way to do this would be to add ½ of our pixelUnit to our rounded value:

let offset = pixelUnit/2.0
let adjustedY = round(from: y, 
                      fraction: pixelUnit, 
                      down: topBias) + offset

which puts us right in the middle of our pixels, like we want. But it shifts both our lines down below Y = 2.

We really want our top-biased line to be just above 2 and our bottom-biased line to be just below it.

To compensate, we subtract offset from our y before rounding:

func singlePixelLine(at y: CGFloat, 
                     in rect: CGRect, 
                     topBias: Bool = true) {
  let offset = pixelUnit/2.0
  let adjustedY = round(from: y - offset, 
                        fraction: pixelUnit, 
                        down: topBias) + offset
  let line = makeLine(at: adjustedY, in: rect)
  strokePath(line, width: pixelUnit)
}

And there we have it. Pixel-perfect hairlines on either side of Y = 2 (depending on the topBias param).

Here's a gist of all of this put together. Keep in mind the structure has been chosen for maximum readability, not because having a bunch of free functions hanging around for drawing lines is a good idea :)


Hit me up on twitter (@jemmons) to continue the conversation.

First Class Functions in Swift

A higher-order function is simply a function that takes another function as an argument.1 Swift uses HOFs all over the place. In fact, it considers the practice important enough to warrant language-level feature support in the form of trailing closure syntax:

// Takes a function as «completion» argument
func foo(with: String, completion: () -> Void) {...}
foo(with: "bar") {
  // Trailing closure syntax lets us create that
  // function in-line with a closure, package it up,
  // and pass it in as «completion».
}

But the pervasive use of trailing closure syntax in Swift can lead us to believe that, when a parameter has a function type, we have to pass it a closure. Not true! In Swift, functions are first-class citizens. We can use a function, itself, anywhere we might otherwise use a closure or variable.

For example, to sum an array of numbers, we might be tempted to give the implementation in a trailing closure like so:

[1,2,3,4,5].reduce(0) { $0 + $1 } //> 15

This is just a fancy way of writing:

[1,2,3,4,5].reduce(0, combine: { a, b in
  return a + b
})

Looking at it this way, it's clear all we're really doing is passing the combine parameter an ad-hoc closure that takes two arguments and sums them together. But do we need to create an inline closure for this? There's already an existing function that takes two arguments and sums them together. It's called +. We should just use that:

[1,2,3,4,5].reduce(0, combine: +) //> 15

Treating functions as first-class citizens extends well beyond just passing them as parameters. We can assign them to variables:

floor(5.12345) //> 5
let myFunc: (Double)->Double = floor
myFunc(5.12345) //> 5

We can conditionally assign functions:

let roundUp = true
let myFunc: (Double)->Double = roundUp ? ceil : floor
myFunc(5.12345) //> 6

And, in fact, we don't even need to assign functions to a variable to use them:

let roundUp = true
(roundUp ? ceil : floor)(5.12345) //> 6

Which is pretty cool. Anytime we use functions directly instead of wrapping them in closures or other forms of indirection, we increase the declarativeness of our code.
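The same directness applies to other HOFs; anywhere a function type is expected, an existing function fits (floor and String.init here, for example):

```swift
import Foundation

// Pass existing functions straight to map, no closures needed.
[1.5, 2.7, 3.2].map(floor)  //> [1.0, 2.0, 3.0]
(1...3).map(String.init)    //> ["1", "2", "3"]
```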


1: AKA a procedural parameter, which I am motivated to mention only owing to my love of alliteration.↩︎


Hit me up on twitter (@jemmons) to continue the conversation.

Lenses in Swift

Brandon Williams gives a talk at the Functional Swift Conference about Lenses and Prisms in Swift. I'm not 100% sold on their particular usefulness in Swift (and, to be fair, Brandon isn't either). But I love Brandon's playground presentations, and this one demonstrates an awesome blueprint for adding expressive, functional constructs to Swift: define a function, genericize it, compose it, and define an operator for the composition.


Hit me up on twitter (@jemmons) to continue the conversation.

Matching with Swift's Optional Pattern

For Example…

Swift enumerations with raw values can be very handy when dealing with "stringly-typed" data of the sort JSON is notorious for:

{
  "characters":[
    {"name":"Hank",
     "class":"ranger"},
    {"name":"Sheila",
     "class":"thief"},
    {"name":"Diana",
     "class":"acrobat"},
    {"name":"Eric",
     "class":"cavalier"}
  ]
}

A string can hold a nearly infinite combination of possible values, and we need that sort of flexibility when representing names (the variety of which is also quite large).

But what about a character's class? There's only a handful of valid values for that. Representing them as strings (with their infinite variability) would not just be overkill, it'd expose us to potentially subtle bugs such as misspellings, capitalization errors, and non-exhaustive switch statements:

switch stringFromJSON{
case "ranger":
  //Good...
case "Thief":
  //Oh no! Wrong capitalization.
case "cavaleer":
  //Oops! Wrong spelling
}
//Yikes! We forgot completely about acrobats!

If we represent a character's class with a strict enum instead of a flexible String, Swift will catch all these bugs for us! And if we give raw values to our enum types, translating back and forth between JSON's strings is pretty easy too:

enum DNDClass:String{
  case Ranger = "ranger"
  case Thief = "thief"
  case Acrobat = "acrobat"
  case Cavalier = "cavalier"
}

let myDNDClass = DNDClass(rawValue:stringFromJSON)
let jsonString = myDNDClass?.rawValue

And this works great… until we try to use it in a switch statement:

switch DNDClass(rawValue:stringFromJSON){
case .Ranger:
//ERROR:
//Enum case 'Ranger' not found in type 'DNDClass?'
}

See that tricky "?" at the end of DNDClass?? We're getting an optional back from our constructor because init(rawValue:) is actually a failable initializer.

And it makes sense that it would be. Like we said before, strings can hold a nearly infinite number of possible values. Our enum's constructor has to be ready for that:

DNDClass(rawValue:"ranger")
//> "Optional(DNDClass.Ranger)"

DNDClass(rawValue:"foobar")
//> "nil"

We can deal with this by guarding against nils:

guard let c = DNDClass(rawValue:jsonString) else{
  fatalError("Unexpected class: \(jsonString)")
}

switch c{
case .Ranger:
  //Do ranger stuff...
}

And that's okay… But now we have to parse a chunk of logic above our switch before getting down to business.

We could, instead, take advantage of the fact that Optionals are enums:

switch DNDClass(rawValue:jsonString){
case .Some(.Ranger):
  //Do ranger stuff...
case .None:
  fatalError("Unexpected class: \(jsonString)")
}

But now, instead of having a clean abstraction around the very concept of an Optional, we have a leaky abstraction that requires anyone who reads it to understand the .Somes and .Nones being used under the covers. There's a reason, after all, that the Swift Programming Language makes no mention of Optional's implementation.1
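If you're curious what that implementation roughly looks like: conceptually, Optional is just a two-case generic enum. Here's a simplified sketch, using a stand-in MyOptional type so we don't collide with the real thing:

```swift
// A stand-in for the real Optional, to illustrate the shape of the type.
enum MyOptional<Wrapped> {
  case None
  case Some(Wrapped)
}

let wrapped = MyOptional.Some("ranger")

switch wrapped {
case .Some(let s):
  s //> "ranger"
case .None:
  break
}
```

Which is exactly why matching on .Some and .None works at all. It's also why it's leaky: we're switching on the machinery, not the concept.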

The Optional Pattern

This is why the optional pattern exists. Just add a "?" to the end of an identifier, and it "matches values wrapped in a Some(Wrapped) case of an Optional<Wrapped> enumeration."

In other words, these cases are equivalent:

switch DNDClass(rawValue:jsonString){
case .Some(.Ranger):
  //...
case .Ranger?:
  //...
}

This goes for more complex value-binding patterns as well:

enum Response{
 case Error(String)
}

switch myResponse{
case .Some(.Error(let s)):
  //Do something with «s»...
case .Error(let s)?:
  //Also can do something with «s»...
}

In essence, any time you want to match a pattern against an unwrapped optional value, just put a ? at the end of it.
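And this ?-suffix syntax isn't limited to switch. As of Swift 2, for-in (along with if, while, and guard) supports case matching too, which makes the optional pattern handy for looping over just the non-nil elements of a collection. A quick sketch (the ratings array is just an illustrative example):

```swift
let ratings: [Int?] = [4, nil, 5, nil, 3]

var total = 0
for case let rating? in ratings {
  // The body only runs for non-nil elements; rating is already unwrapped.
  total += rating
}
total //> 12
```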

Don't Default

There's just one gotcha left to tackle. We might expect the following to be exhaustive, but Swift is going to tell us it's not:

switch DNDClass(rawValue:jsonString){
case .Ranger?:
  //Do ranger stuff...
case .Thief?:
  //Hide in shadows...
case .Acrobat?:
  //Backflips...
case .Cavalier?:
  //Whatever it is a cavalier does...
}

//> ERROR: Switch must be exhaustive

That's because, even though we've covered every character class in our enum, there's one important case we've forgotten: what if DNDClass's init fails and returns nil?

We could easily catch this with a default:

switch DNDClass(rawValue:jsonString){
case .Ranger?: break
case .Thief?: break
case .Acrobat?: break
case .Cavalier?: break
default:
  //None of the above...
}

But we only want to catch the nil case, and default catches everything. With default in there, we could add new character classes to our enum,2 and Swift wouldn't warn us if we forgot to cover them in the switch.

A common mistake is to try to use the optional pattern again, maybe with something like nil?. But remember, the optional pattern only matches when an optional is expressly not nil. So that will never work.

The easiest, most readable thing is to use the literal nil itself:3

switch DNDClass(rawValue:jsonString){
case .Ranger?: break
case .Thief?: break
case .Acrobat?: break
case .Cavalier?: break
case nil:
  //Failed to init.
}

Patterns in Swift (or, at least, the documentation for patterns in Swift) use a lot of very precise terminology that can take a bit of work to wrap one's head around. But they're incredibly useful4 and punch well above their weight in their contributions to Swift's expressiveness.

As the language continues to evolve, I'm most excited to see what changes and additions happen in this space.


1: Sort of. It actually is mentioned in two places: an example that "Reimplements the Swift standard library's optional type", and the example that shows code equivalent to the very optional pattern we're about to discuss.↩︎

2: Bobby and Presto feel left out.↩︎

3: This isn't a special pattern or identifier — it's simply an expression of the fact that Optional.None == nil is true.↩︎

4: Especially in Swift 2 since it gives if, while, guard, and for-in their own case matchers. Add those to the existing switch, and there are all sorts of places for us to play with patterns!↩︎



Asynchronicity and the Main Thread: Part 2

Last week we talked about how we could run bursty asynchronous tasks on the main thread without blocking it. This is super easy if we have a single task that exists in isolation — let's say, some JSON we need to download. For example:1

if let url = NSURL(string: "http://example.com"){
  let req = NSURLRequest(URL:url)
  let main = NSOperationQueue.mainQueue()
  NSURLConnection
    .sendAsynchronousRequest(req, queue: main){
      (res, data, error) in
      //Profit!
  }
}

That's pretty straightforward. The problem is, in applications, nothing happens in isolation. Raw JSON bytes don't do us any good. We need to parse them. And we probably want to update our interface to reflect the downloaded data. This would be simple enough if everything were synchronous:

//Timing is easy in the synchronous world
//(but blocks the main thread) 
let data = downloadJSON()
let json = parseJSON(data)
myController.updateUI(json)

To prevent blocking the main thread, though, we have to make these operations asynchronous. And in an asynchronous world, the data isn't downloaded by the time downloadJSON() returns. The JSON isn't parsed by the time parseJSON() gets back to us. We have to rely on completion blocks (or the delegate pattern) to let us know when our work is completed:

NSURLConnection
  .sendAsynchronousRequest(req, queue: main){
    (res, data, error) in
    parseJSON(data){ json in
      myController.updateUI(json)
    }
  }

Chaining one operation on the completion of another like this not only leads to a lot of confusing indentation, but now our controller code is all mixed up with our parsing code which is all up in our networking code. It's a complected mess that only gets worse the more dependencies we add.

What we need is an abstraction around the life-cycle of our tasks. One that lets them run asynchronously, but also manages their completions such that we can queue them to run in a specific order.

Thankfully, NSOperation (and the associated NSOperationQueue machinery) lets us do just that.

NSOperation, Queues, and Dependencies

An NSOperation encapsulates the execution of isolated chunks of work. We can subclass2 NSOperation to do pretty much any kind of work we want, then add instances of our subclass to an NSOperationQueue to start them processing.

Of course, encapsulating work and processing it isn't exactly rocket science. We've been doing this forever with simple functions. The magic of NSOperation/Queue is that it tracks the status of our operations, only starting them when they're ready, and taking note of when they finish.

That lets us set up chains of dependencies with addDependency like so:

let myDownloadOp = DownloadOperation()
let myParseOp = ParseOperation()
let myUpdateOp = UpdateOperation()
let queue = NSOperationQueue.mainQueue()
myUpdateOp.addDependency(myParseOp)
myParseOp.addDependency(myDownloadOp)
queue.addOperations(
  [myDownloadOp, myParseOp, myUpdateOp],
  waitUntilFinished:false)

Because of our dependencies, mainQueue will only execute our parse operation after the download operation has completed. Likewise, it will only start the update operation after the parse operation has completed. Note that everything is self-contained in its own operation and nothing is nested.

More important in the context of our current conversation, this is true even if all these operations are asynchronous. And, as long as we use mainQueue() to process these operations, everything happens on the main thread, too.

In other words, NSOperation/Queue lets us run asynchronous operations on the main thread while maintaining complete control over their timing and order of execution.3

Which means we should be ready to go. And we would be… if the documentation around NSOperation weren't a confusing and self-contradictory hodgepodge.

Making Sense of Asynchronous Operations

Here I'm going to try to synthesize, as best I can, what I've learned about implementing asynchronous NSOperation subclasses from a maze of disparate documentation. If you're more interested in whats than hows, you should feel free to skip ahead to the next section.

By default, NSOperation assumes that when an operation hits the end of its start() method,4 it is complete.5 Making its concurrent property return true is supposed to indicate an operation's task lives beyond the scope of start() — in other words, that it's asynchronous — and thus shouldn't be considered complete just because start() has returned.

Because such an operation would be responsible for manually marking itself as completed, operation queues used to assume concurrent operations managed their own internal thread. It would be redundant for a queue to create its own thread to run a concurrent operation like this, so "concurrent" used to also mean "Tell the queue not to create a new thread for this operation."

Then operation queues got rewritten to use Grand Central Dispatch under the covers. As a result, the documentation says, "Operations are always executed on a separate thread, regardless of whether they are designated as asynchronous or synchronous operations."6

Because the concurrent property was being ignored when it came to threading, its only remaining job was to indicate whether an operation was asynchronous or not. "Concurrent" and "asynchronous" technically mean different things, though. So in iOS 7, the more semantically precise asynchronous got added to the API, to be used in place of concurrent.7

The only problem being, neither the asynchronous nor concurrent properties seem to do anything.8 Operations with either of these set still report themselves as completed whenever start() returns (whether added to a queue or launched manually, contrary to the docs). The only way to make sure an operation doesn't mark itself as finished when start() completes is to override start() itself.9

Making a New Start

And so, the most important thing we have to do when implementing an asynchronous subclass of NSOperation is to override its start() method. But start() is actually responsible for a few things.

  1. Calling the main implementation in main()
  2. Updating the operation's state to executing when it starts.
  3. Changing the operation's state to finished when it's done.
  4. Sending KVO notifications for each of the above.

Calling main() is easy. Our initial start() method could look like this:

override func start() {
  main()
}

To model state, we're going to create an enumeration, a property to hold it, and override the computed properties executing and finished to point to our state:

enum State{
  case Waiting, Executing, Finished
}

var state = State.Waiting

override var executing:Bool{
  return state == .Executing
}

override var finished:Bool{
  return state == .Finished
}

And update our start() to shift us into "executing" mode before calling main:

override func start() {
  state = .Executing
  main()
}

That's great for setting up "executing". But how do we mark our operation as "finished"? Remember, this is going to be doing asynchronous work, so we don't technically know when the operation is going to end. The best we can do is create a method that subclasses will have to call when their asynchronous tasks are complete:

func finish(){
  state = .Finished
}

This mostly works. But NSOperationQueue (and anything else using our operation) expects to be notified about changes to our state through KVO. And KVO has no way to get automatically triggered when the value of a computed property changes. So we have to send those notifications ourselves:

var state = State.Waiting{
  willSet{
    switch(state, newValue){
    case (.Waiting, .Executing):
      willChangeValueForKey("isExecuting")
    case (.Waiting, .Finished):
      willChangeValueForKey("isFinished")
    case (.Executing, .Finished):
      willChangeValueForKey("isExecuting")
      willChangeValueForKey("isFinished")
    default:
      fatalError( ... )
    }
  }
  didSet{
    switch(oldValue, state){
    case (.Waiting, .Executing):
      didChangeValueForKey("isExecuting")
    case (.Waiting, .Finished):
      didChangeValueForKey("isFinished")
    case (.Executing, .Finished):
      didChangeValueForKey("isExecuting")
      didChangeValueForKey("isFinished")
    default:
      fatalError( ... )
    }
  }
}

This simply sets up two observers on our state property: one for before it gets changed, the other for after. Depending on which state transitions to which, we call the appropriate KVO notifications (or bail with an error).
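Those tuple-matching switches are really doing double duty as a state machine: they enumerate the legal transitions and trap everything else. That pattern is useful well beyond KVO, so here it is isolated into a pure, Foundation-free function (the isLegalTransition name is illustrative, not part of the class we're building):

```swift
enum State {
  case Waiting, Executing, Finished
}

// Returns true only for the transitions our operation allows:
// Waiting -> Executing, Waiting -> Finished, Executing -> Finished.
func isLegalTransition(from: State, _ to: State) -> Bool {
  switch (from, to) {
  case (.Waiting, .Executing),
       (.Waiting, .Finished),
       (.Executing, .Finished):
    return true
  default:
    return false
  }
}

let check: (State, State) -> Bool = isLegalTransition
check(.Waiting, .Executing) //> true
check(.Finished, .Waiting)  //> false
```

Centralizing the legal transitions in one switch means there's exactly one place to update if our operation's life-cycle ever grows new states.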

There are a few things we're playing fast and loose with here that wouldn't fly in a multi-threaded environment. There's no locking around our state property for one thing. And we've given no consideration to what happens if we're initialized by one thread, while start() is called by another.

That's okay! The whole point of this exercise is how much simpler and less crash-prone everything is when we avoid threading altogether. But we should make our "no thread" policy explicit by guarding against it in start(). Also, as a best practice, we should check that our operation hasn't been cancelled before we even begin:

override func start() {
  guard NSThread.isMainThread() else{
    fatalError( ... )
  }

  guard !cancelled else{
    return
  }

  state = .Executing
  main()
}

And that's more or less it! From here, subclasses can override main() to spin up whatever asynchronous task they want, and as long as it calls finish() when it completes, everything will just work.

Exactly what these asynchronous subclasses will look like is a topic for another week. But for now, here's a gist of the base AsyncOperation class we've created together.


1: We should all be using NSURLSession-based networking in the real world. I'm using NSURLConnection in my snippets because it happens to have a more example-friendly interface, but don't take that as an endorsement of best practice. ↩︎

2: NSBlockOperation is a great way to quickly experiment with NSOperation and NSOperationQueue without all the hassle of subclassing. But a block operation marks itself as finished as soon as its block returns, so it's not well suited for asynchronous tasks. ↩︎

3: Q.E.D. ↩︎

4: Including main(), which is, by default, called by start(). ↩︎

5: "Complete" being defined as a state where isFinished is true and KVO notifications have been dispatched to that effect. ↩︎

6: Note the main queue is an exception. Operations executed on the main queue always run on the main thread. Which, incidentally, is what allows us to ignore most of this nonsense. ↩︎

7: Technically, the two are synonyms as far as NSOperation is concerned. Overriding concurrent to return true does the same for asynchronous and vice-versa. ↩︎

8: As a matter of style, we still override asynchronous to return true in our example. But it's just as a nod toward semantic correctness. There's no functional benefit to doing so that I can find. ↩︎

9: While being careful not to call super, as that would trigger the superclass's behavior of marking itself as finished as soon as super.start() returns. ↩︎

