Text

What is currying (in Swift)?

A blog post by Ole Begemann has left some people interested in Swift wondering what exactly curried functions and currying are; for example, listen to the discussion in Episode 37 of the Mobile Couch podcast.

Let’s look at currying in Swift. Here is a binary add function

func add(x: Int, #y: Int) -> Int {
    return x + y
}

and next we have a curried version of the same function

func curriedAdd(x: Int)(y: Int) -> Int {
    return x + y
}

The difference between the two is that add takes two arguments (two Ints) and returns an Int, whereas curriedAdd takes only one argument and returns a function of type (y: Int) -> Int. If you put those two definitions into a Playground, both add(1, y: 2) and curriedAdd(1)(y: 2) yield 3.

In add(1, y: 2), we supply both arguments at once, but in curriedAdd(1)(y: 2), we supply only one argument, get a new function as the result, and then apply that new function to the second argument. In other words, add requires both arguments at once, whereas its curried variant takes the two arguments one after the other in two separate function calls.

This works not only for binary functions, but also for functions expecting three or more arguments. More generally, currying refers to the fact that any n-ary function (i.e., any function expecting n arguments) can be rewritten as a computationally equivalent function that doesn’t get all n arguments at once, but gets them one after the other, returning an intermediate function at each step until all n arguments have been supplied.

That’s…interesting, but why should we care? Curried functions are more versatile than their uncurried counterparts. We can apply the function add only to two arguments. That’s it. In contrast, we can apply curriedAdd to either one or two arguments. If we want to define an increment function, we can do that easily in terms of curriedAdd:

let inc = curriedAdd(1)

As expected, inc(y: 2) gives 3.

For a simple function such as add, this extra versatility is not very impressive. However, Ole’s blog post explains how this ultimately enables the target-action pattern in Swift, and that is pretty impressive!

As a side note, in the functional language Haskell, all functions are curried by default. In fact, the concept was named currying after the mathematician Haskell B. Curry, in whose honour the language itself was called Haskell.
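
To make this concrete, here is a small sketch (not from the original post) of the same add and inc in Haskell, where partial application needs no special syntax:

add :: Int -> Int -> Int    -- read as Int -> (Int -> Int): add is already curried
add x y = x + y

inc :: Int -> Int
inc = add 1                 -- partial application; inc 2 evaluates to 3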

Text

Inline Objective-C in Haskell for GHC 7.8


nslog :: String -> IO ()
nslog msg =
  $(objc ['msg :> ''String]
    (void [cexp| NSLog(@"Here is a message from Haskell: %@", msg) |]))

The latest GHC release (GHC 7.8) includes significant changes to Template Haskell, which make it impossible to get type information from variables in the current declaration group in a Template Haskell function. Version 0.6 of language-c-inline introduces marshalling hints in the form of type annotations to compensate for that lack of type information.

In the above code example, hints are used in two places: (1) the quoted local variable msg carries an annotation suggesting to marshal it as a String and (2) the result type of the inline Objective-C code is suggested to be IO () by the void annotation. These hints are required as Template Haskell no longer propagates the type information contained in the function signature.

Photo
As a spin-off from teaching programming to my 10-year-old son and his friends, we have published a sprite and pixel-art editor for the iPad, called BigPixel, which you can get from the App Store. (It has a similar feature set to the earlier Haskell version, but is much prettier!)

Text

The future of array-oriented computing in Haskell — The Result!

I recently posted a survey concerning The future of array-oriented computing in Haskell. Here is a summary of the responses.

It is not surprising that basically everybody (among the respondents, who surely suffer from grave selection bias) is interested in multicore CPUs, but I’m somewhat surprised to see that about two thirds are interested in GPGPU. The most popular application areas are data analytics, machine learning, and scientific computing, with optimisation problems and physical simulations following close behind.

The most important algorithmic patterns are iterative numeric algorithms, matrix operations, and, most popular of all, standard aggregate operations, such as maps, folds, and scans. (This result almost surely suffers from selection bias!)

I am very happy to see that most people who tried Repa or Accelerate got at least some mileage out of them. The most requested backend feature for Repa is support for SIMD instructions (aka vector instructions), and the most requested feature for Accelerate is support for high-performance CPU execution. I did suspect as much, and we would really like to provide that functionality, but it is quite a bit of work (so it will take a little while). The other major request for Accelerate is OpenCL support; we really need some outside help to realise that, as it is a major undertaking.

As far as extending the expressiveness of Accelerate goes, there is strong demand for nested data parallelism and sparse data structures. This also requires quite a bit of work (and is conceptually very hard!), but the good news is that PLS has got a PhD student working on just that!

NB: In the multiple-choice questions permitting multiple answers, the percentages given by the Google Docs summary are somewhat misleading.

Text

Let’s program!

Last year, I started to teach my then 9-year-old programming. Yesterday, we took it a step further by including five of his friends. We began writing a simple 2D game in Haskell using the Gloss library (which provides a simple, purely functional, event-driven API on top of OpenGL). My goal is to provide the children with a basic understanding of fundamental computational concepts.
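
For readers who haven’t seen Gloss before, a program has roughly the shape sketched below. This is only a minimal illustration of the API, not the game we are writing with the children: the world is a single dot that the arrow keys move around.

import Graphics.Gloss
import Graphics.Gloss.Interface.Pure.Game

type World = (Float, Float)    -- the position of the dot

main :: IO ()
main = play (InWindow "Dot" (400, 400) (100, 100))    -- window title, size, and position
            white                                     -- background colour
            30                                        -- frames per second
            (0, 0)                                    -- initial world
            draw handle step

draw :: World -> Picture
draw (x, y) = Translate x y (Color red (circleSolid 20))

handle :: Event -> World -> World
handle (EventKey (SpecialKey KeyLeft)  Down _ _) (x, y) = (x - 10, y)
handle (EventKey (SpecialKey KeyRight) Down _ _) (x, y) = (x + 10, y)
handle _                                         world  = world

step :: Float -> World -> World
step _ world = world           -- nothing moves by itself

Gloss calls draw to render each frame, handle for every input event, and step once per frame to advance the world.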

I put the code from our first session into a public Git repo along with a brief summary of how the session was structured.

Text

The future of array-oriented computing in Haskell — a survey

In the Programming Languages & Systems (PLS) group, we have spent a lot of energy on developing methods for high-performance array programming in a purely functional style. We are curious how our work is being used and what else the community would like to be able to achieve with libraries such as Repa and Accelerate. Please help us by completing this survey. Thanks!

Text

A new version of the GPU language Accelerate

We released version 0.14 of Accelerate, the embedded high-level language for general-purpose GPU programming. In addition to new constructs for iterative algorithms and improved code generation, this version adds support for the latest CUDA release (5.5) and for OS X Mavericks.
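
For a taste of what Accelerate code looks like, here is the standard dot-product example, sketched against the 0.14 API and using the CUDA backend from the separate accelerate-cuda package:

import Data.Array.Accelerate      as A
import Data.Array.Accelerate.CUDA as CUDA     -- GPU backend (package accelerate-cuda)

-- Dot product as an embedded Accelerate computation.
dotp :: Vector Float -> Vector Float -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) (use xs) (use ys))

main :: IO ()
main = do
  let xs = fromList (Z :. 10) [1..10]    :: Vector Float
      ys = fromList (Z :. 10) (repeat 2) :: Vector Float
  print (CUDA.run (dotp xs ys))          -- compile for the GPU and execute

During development, the same computation can also be run with the reference interpreter (Data.Array.Accelerate.Interpreter) instead of the CUDA backend.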

To learn more about Accelerate, watch Trevor’s YOW! Lambda Jam 2013 talk (slides) or read Chapter 6 of Simon Marlow’s book Parallel and Concurrent Programming in Haskell.

You can find more information on Accelerate’s GitHub page.

Call for help: Accelerate currently works out of the box on OS X and Linux. It should also work on Windows, but we need some community help to fix the Windows build process; for details, please see the recent issue on GitHub.

Text

The Glasgow Haskell Compiler (GHC) on OS X 10.9 (Mavericks)

Apple finally dropped the GNU C Compiler (GCC) from its developer tools and only supports the LLVM-based clang compiler. This causes the Glasgow Haskell Compiler (GHC) some grief, mainly due to its use of the C pre-processor (cpp) as a cheap macro system for Haskell[1].

Here is how to fix this for the latest version of the Haskell Platform for Mac, until the HP maintainers release an updated version. I am assuming that you have installed Mavericks and that you either (a) have Xcode 5 (from the Mac App Store) with the command line tools installed or (b) have obtained the Command Line Tools for Xcode directly. Then follow these two steps:

  1. Get and compile Luke Iannini’s clang-xcode5-wrapper[2] and put the binary into /usr/local/bin — or grab this already compiled binary and put it in /usr/local/bin/.
  2. Edit GHC’s settings file as described next.

Edit

/Library/Frameworks/GHC.framework/Versions/Current/usr/lib/ghc-7.6.3/settings

by changing the second line of the file, such that it reads

("C compiler command", "/usr/local/bin/clang-xcode5-wrapper")

That’s it! Happy Haskell hacking on the most advanced operating system ;)

And kudos to the kind Apple engineers who accepted last minute clang patches from the Haskell community, and to Austin Seipp and Carter Schonwald for developing the patches and working with Apple.

[1] I have long maintained the view that (ab)using cpp for Haskell is a Bad Idea.

[2] This is a Haskell program; so, either compile it before updating to Mavericks or grab my binary.

Tags: macos haskell ghc
Video

Here is the video of my YOW! Lambda Jam keynote asking, “Do Extraterrestrials Use Functional Programming?” You can also get the slides separately.

Text

Slides of my FHPC invited talk on “Data Parallelism in Haskell”

The slides of my invited talk at FHPC’13 are now online.