Inline Objective-C in Haskell for GHC 7.8

nslog :: String -> IO ()
nslog msg =
  $(objc ['msg :> ''String]
    (void [cexp| NSLog(@"Here is a message from Haskell: %@", msg) |]))

The latest GHC release (GHC 7.8) includes significant changes to Template Haskell, which make it impossible to get type information from variables in the current declaration group in a Template Haskell function. Version 0.6 of language-c-inline introduces marshalling hints in the form of type annotations to compensate for that lack of type information.

In the above code example, hints are used in two places: (1) the quoted local variable msg carries an annotation suggesting to marshal it as a String and (2) the result type of the inline Objective-C code is suggested to be IO () by the void annotation. These hints are required as Template Haskell no longer propagates the type information contained in the function signature.
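The same scheme extends to other marshallable types. As a hedged sketch (not from the original post, but following the same `:>` and `void` annotations shown above), logging an Int might look like this:

```haskell
-- Sketch only: assumes Int is among the types language-c-inline can
-- marshal. The argument annotation 'n :> ''Int plays the role that
-- 'msg :> ''String plays above, and 'void' again fixes the result
-- type of the inline Objective-C code to IO ().
nslogCount :: Int -> IO ()
nslogCount n =
  $(objc ['n :> ''Int]
    (void [cexp| NSLog(@"The current count is %ld", (long)n) |]))
```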

As a spin-off from teaching programming to my 10-year-old son and his friends, we have published a sprite and pixel art editor for the iPad, called BigPixel, which you can get from the App Store. (It has a similar feature set to the earlier Haskell version, but is much prettier!)


The future of array-oriented computing in Haskell — The Result!

I recently posted a survey concerning The future of array-oriented computing in Haskell. Here is a summary of the responses.

It is not surprising that basically everybody (among the respondents — who surely suffer from grave selection bias) is interested in multicore CPUs, but I am somewhat surprised that about two thirds are interested in GPGPU. The most popular application areas are data analytics, machine learning, and scientific computing, with optimisation problems and physical simulations following close behind.

The most important algorithmic patterns are iterative numeric algorithms, matrix operations, and —the most popular— standard aggregate operations, such as maps, folds, and scans. (This result most surely suffers from selection bias!)
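For reference, the sequential list versions of these aggregate operations look as follows; array libraries such as Repa and Accelerate provide parallel counterparts (the names below are my illustration, not part of the survey):

```haskell
-- Standard aggregate operations on a small example list.
squares :: [Int]
squares = map (\x -> x * x) [1, 2, 3, 4, 5]   -- [1,4,9,16,25]

total :: Int
total = foldl (+) 0 [1, 2, 3, 4, 5]           -- 15

runningSums :: [Int]
runningSums = scanl1 (+) [1, 2, 3, 4, 5]      -- [1,3,6,10,15]
```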

I am very happy to see that most people who tried Repa or Accelerate got at least some mileage out of them. The most requested backend feature for Repa is SIMD instructions (aka vector instructions), and the most requested feature for Accelerate is support for high-performance CPU execution. I did suspect that, and we would really like to provide that functionality, but it is quite a bit of work (so it will take a little while). The other major request for Accelerate is OpenCL support — we really need some outside help to realise that, as it is a major undertaking.
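To illustrate the kind of code these requests concern, here is a small Repa computation (a sketch using the Repa 3 API, unrelated to the survey itself) that SIMD code generation would accelerate:

```haskell
import Data.Array.Repa as R

-- Sum of squares over an unboxed, one-dimensional array; 'sumAllP'
-- forces the delayed 'R.map' and performs the reduction in parallel.
sumSquares :: Monad m => Array U DIM1 Double -> m Double
sumSquares xs = R.sumAllP (R.map (\x -> x * x) xs)
```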

As far as extending the expressiveness of Accelerate goes, there is strong demand for nested data parallelism and sparse data structures. This also requires quite a bit of work (and is conceptually very hard!), but the good news is that PLS has got a PhD student working on just that!

NB: In the multiple-choice questions permitting multiple answers, the percentages given in the Google Docs summary are somewhat misleading.


Let’s program!

Last year, I started to teach my then 9-year-old son programming. Yesterday, we took it a step further by including five of his friends. We began writing a simple 2D game in Haskell using the Gloss library (which provides a simple, purely functional, event-driven API on top of OpenGL). My goal is to provide the children with a basic understanding of fundamental computational concepts.
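To give a flavour of how little ceremony Gloss requires, here is a minimal complete program (just a sketch, not the code from our session):

```haskell
import Graphics.Gloss

-- Open a 400x400 window with a white background and draw a circle.
-- 'display' hides all the OpenGL setup behind a purely functional API.
main :: IO ()
main = display (InWindow "Our Game" (400, 400) (100, 100)) white (Circle 80)
```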

I put the code from our first session into a public Git repo along with a brief summary of how the session was structured.


The future of array-oriented computing in Haskell — a survey

In the Programming Languages & Systems (PLS) group, we have spent a lot of energy on developing methods for high-performance array programming in a purely functional style. We are curious how our work is being used and what else the community would like to be able to achieve with libraries such as Repa and Accelerate. Please help us by completing this survey. Thanks!


A new version of the GPU language Accelerate

We released version 0.14 of Accelerate, the embedded high-level language for general-purpose GPU programming. In addition to new constructs for iterative algorithms and improved code generation, this version adds support for the latest CUDA release (5.5) and for OS X Mavericks.
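As a reminder of what Accelerate code looks like, here is the customary dot-product example (independent of the new features in 0.14):

```haskell
import Data.Array.Accelerate as A

-- Element-wise product followed by a parallel reduction; the 'Acc'
-- type marks embedded computations that are compiled for the GPU.
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)
```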

To learn more about Accelerate, watch Trevor’s YOW! Lambda Jam 2013 talk (slides) or read Chapter 6 of Simon Marlow’s book Parallel and Concurrent Programming in Haskell.

You can find more information on Accelerate’s GitHub page.

Call for help: Accelerate currently works (out of the box) on OS X and Linux. It should also work on Windows, but we need some community help to fix the build process on Windows — for details, please see the recent issue on GitHub.


The Glasgow Haskell Compiler (GHC) on OS X 10.9 (Mavericks)

Apple finally dropped the GNU C Compiler (GCC) from its developer tools and only supports the LLVM-based clang compiler. This causes the Glasgow Haskell Compiler (GHC) some grief, mainly due to its use of the C pre-processor (cpp) as a cheap macro system for Haskell[1].
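The typical (ab)use of cpp in Haskell code is conditional compilation on compiler or library versions, which is why an incompatible pre-processor affects so many packages. A representative sketch (my illustration, not code from GHC itself):

```haskell
{-# LANGUAGE CPP #-}

-- With the CPP extension, Haskell sources are piped through the C
-- pre-processor; version macros, such as __GLASGOW_HASKELL__, are
-- widely used to select code at build time.
ghcVersionString :: String
#if __GLASGOW_HASKELL__ >= 708
ghcVersionString = "GHC 7.8 or later"
#else
ghcVersionString = "before GHC 7.8"
#endif
```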

Here is how to fix this for the latest version of the Haskell Platform for Mac — until the HP maintainers release an updated version. I am assuming you have installed Mavericks and that you have either (a) Xcode 5 (from the Mac App Store) with the command line tools installed or (b) have directly gotten the Command Line Tools for Xcode. Using the latest Haskell Platform for Mac, follow these two steps:

  1. Get and compile Luke Iannini’s clang-xcode5-wrapper[2] and put the binary into /usr/local/bin — or grab this already compiled binary and put it in /usr/local/bin/.
  2. Edit GHC’s settings file by changing its second line, such that it reads

("C compiler command", "/usr/local/bin/clang-xcode5-wrapper")

That’s it! Happy Haskell hacking on the most advanced operating system ;)

And kudos to the kind Apple engineers who accepted last minute clang patches from the Haskell community, and to Austin Seipp and Carter Schonwald for developing the patches and working with Apple.

[1] I have long maintained the view that (ab)using cpp for Haskell is a Bad Idea.

[2] This is a Haskell program; so, either compile it before updating to Mavericks or grab my binary.

Tags: macos haskell ghc

Do Extraterrestrials Use Functional Programming?

Here is the video of my YOW! Lambda Jam keynote asking, “Do Extraterrestrials Use Functional Programming?” You can also get the slides separately.


Slides of my FHPC invited talk on “Data Parallelism in Haskell”

The slides of my invited talk at FHPC’13 are now online.



Four talks at ICFP and affiliated events

My research group will present four talks at ICFP and affiliated events. Trevor will present our ICFP paper Optimising Purely Functional GPU Programs, Ben will give the talk about our Haskell Symposium paper Data Flow Fusion with Series Expressions in Haskell, Amos will talk at the Haskell Implementors Workshop about GHC’s SpecConstr optimisation, and I will present an invited talk about Data Parallelism in Haskell at FHPC.

Tags: icfp pls