New paper: Types Are Calling Conventions
I’ve just submitted a paper, coauthored with Simon Peyton Jones, to the Haskell Symposium. In this paper, we outline what we think is an interesting point in the design space of intermediate languages for a lazy functional programming language like Haskell, and show some cool optimisations that we can do with it that are hard or impossible to express in the intermediate language used by GHC today.
Although this mainly represents a potential improvement to GHC’s internals, where I’d really like to go with this is to push the distinction between strict and lazy data into the type system of Haskell itself. This would mean that you could, for example, write functions that produce element-strict lists, and document some of the strictness properties of your functions in their types.
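To give a flavour of what I mean: today the closest you can get to an element-strict list is to define a separate data type with a strict element field, rather than saying it in the type of an ordinary list. A minimal Haskell sketch (the names StrictList and fromList are just mine, for illustration):

    -- A list that is strict in its elements: the bang on the element field
    -- means each element is evaluated to WHNF when its Cons cell is built.
    data StrictList a = Nil | Cons !a (StrictList a)

    -- Converting from an ordinary list: elements are forced as their cells
    -- are constructed, so walking the spine of the result forces the elements.
    fromList :: [a] -> StrictList a
    fromList = foldr Cons Nil

What I’d like is to be able to state (and have the compiler exploit) that kind of strictness directly in the types of functions over ordinary lists.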
If any of this sounds interesting to you, you can obtain the paper from my freshly-minted Computer Lab website. You can leave any comments you may have on the corresponding Wiki page.
Edward,
Not quite: I think we can get autolifting almost for free.
Essentially, to retain polymorphism the idea is this: thunk on BOTH sides of the function arrow. Now the translation of the identity function on |Int|s has type |{Int} -> {Int}|.
Polymorphic functions like |map| now ALWAYS strictly evaluate expressions of polymorphic type - but it's up to the caller whether that should lead to thunk creation or real evaluation. Essentially, we move thunking from use sites into definition sites.
Now, the question is how many of the extra thunks this creates we can optimise away. For example, a binary function of type |Int -> Int -> Int| is translated to one of type |{Int} -> {{Int} -> {Int}}|, so you're going to end up doing a lot of bogus thunk operations that are immediately blasted by a force. This sucks and needs to change.
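To make the braces a bit more concrete, here is roughly how I picture |{t}|: an explicit suspension type with a delay/force pair. (The names and representation below are purely for illustration; the paper's actual intermediate language looks different.)

    -- A suspension: a computation we can delay now and force later.
    newtype Thunk a = Thunk (() -> a)

    delay :: a -> Thunk a
    delay x = Thunk (\_ -> x)

    force :: Thunk a -> a
    force (Thunk f) = f ()

    -- The identity on Ints, translated: {Int} -> {Int}
    idInt :: Thunk Int -> Thunk Int
    idInt x = x

    -- A binary Int -> Int -> Int, translated: {Int} -> {{Int} -> {Int}}
    plusT :: Thunk Int -> Thunk (Thunk Int -> Thunk Int)
    plusT x = delay (\y -> delay (force x + force y))

A saturated call then reads |force (force (plusT (delay 1)) (delay 2))|: every delay is met by an immediate force, which is exactly the overhead I'd like the optimiser to remove.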
I think this scheme will work, but I need to flesh out the details and finish writing the paper :-)
By "more inefficient thunking mechanism" do you mean the autolifting types from the Not Not ML paper?
R. W. Harper (one of the ML designers) discusses using type systems to distinguish strict from lazy evaluation in his PL textbook (online draft on his website: http://www.cs.cmu.edu/~rwh/plbook/book.pdf)
Distinguishing between laziness and strictness in the type system is certainly not a new idea. Personally, I think the type system of Not Not ML (http://research.microsoft.com/en-us/um/people/simonpj/papers/not-not-ml/) has the nicest take on it, as it encodes some nice operational notions about the difference between "strict" and "lazy".
That said, I hadn't seen the particular type system Harper suggests before. However, he doesn't seem to deal with the two most interesting aspects of adding hybrid lazy/strict evaluation to the language, namely:
1) Interaction with polymorphism. Can you instantiate a type variable with both lifted and unlifted types? I have a preliminary paper that answers this question, and the answer is "yes, as long as you are willing to accept a more inefficient thunking translation into the intermediate language".
2) Automatic insertion of thunk/force operations in a type-directed manner. This reduces the notational burden of working with mixed evaluation order. I'm also making some progress on this front; a rough sketch of the kind of elaboration I have in mind is below.
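To illustrate 2): the idea is that the elaborator compares the type an expression actually has with the type expected at its use site and inserts the coercions for you. Written out by hand, with an illustrative suspension type of my own (none of this is the paper's notation), the inserted operations look like this:

    -- An illustrative suspension type (my own names, not the paper's).
    newtype Thunk a = Thunk (() -> a)

    delay :: a -> Thunk a
    delay x = Thunk (\_ -> x)

    force :: Thunk a -> a
    force (Thunk f) = f ()

    -- Passing a lazy Int where a strict one is expected: the elaborator
    -- would insert the force for you.
    useStrict :: (Int -> Int) -> Thunk Int -> Int
    useStrict f tx = f (force tx)

    -- Passing a strict Int where a lazy one is expected: the elaborator
    -- would insert the delay.
    useLazy :: (Thunk Int -> Int) -> Int -> Int
    useLazy f x = f (delay x)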
If you know of any related work on these two axes please do let me know!