I’ve once again reached the point that I seem to eventually reach in all my Haskell projects, where I find myself stumbling helplessly over code that is either lazier or stricter than I want it to be. Yesterday evening I posted a question on Stack Overflow asking about how one generally debugs this sort of thing, and sometime while I was sleeping I received what looks like a pretty good reply.
Good, but frustrating, for two reasons:
Haskell having been around as long as it has, one would imagine there would be better tools for dealing with these common problems. Perhaps I’m spoiled by the wealth of comparatively good tools on the JVM platform, but debugging unwanted laziness using the profiler and unwanted strictness using `trace` seems very 1980’s. What seems even more horrifying is that when we want to do performance tuning, we don’t use the profiler (nope, that’s a debugging tool) but instead manually inspect the Core code that the compiler produces.
The Haskell language makes some effort to avoid hidden surprises, typically by requiring them explicitly in type signatures (such as `Maybe a`). Why then shouldn’t laziness or strictness be visible, in some form, to the type system?
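To make the first complaint concrete, the `trace` dance looks roughly like this (a throwaway sketch; `spy` is my own name for the helper, not anything standard):

```haskell
import Debug.Trace (trace)

-- Wrap a value so it announces (on stderr) when it's actually forced.
spy :: String -> a -> a
spy msg = trace ("forced: " ++ msg)

main :: IO ()
main = do
  let xs = map (spy "element") [1 .. 3 :: Int]
  print (length xs) -- forces only the spine: no "forced:" messages
  print (sum xs)    -- forces every element: three "forced:" messages
```

You sprinkle `spy` over the values you suspect, rerun, and squint at which messages appear and in what order. Very 1980’s.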
Let me pause at this point and make it clear that I have no idea what the fuck I’m talking about. I have no suggestion for how to provide what I think I want, no reason to believe that there is a solution at all, and for that matter no certainty that what I think I want is what I actually want. Let me again be clear that I’m talking out of my ass here.
That being said, my ass is reasonably intelligent as asses go. So it did have one thought: what if we could do a trick similar to what’s done in the `ST` monad, establishing a sort of sandbox for a computation:
```haskell
runST :: (forall s. ST s a) -> a
```
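For reference, here is the trick in action, with a bog-standard in-place sum (nothing novel, just the usual pattern):

```haskell
import Control.Monad.ST
import Data.STRef

-- Mutation stays sealed inside runST; the phantom s prevents the
-- STRef from escaping, so the result is still a pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
```

`sumST [1, 2, 3]` gives `6`, and trying to return `acc` itself out of the `runST` block simply won’t typecheck.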
The phantom type `s` defines the region we’re working in, so for some `STRef s a`, it can only be referenced within a particular `runST`. What if we could similarly define a region in which we can guarantee that certain values will not be forced? I’m picturing something like:
```haskell
data Lazy l a                 -- "lazy region" monad
runLazy :: (forall l. Lazy l a) -> a

data LazyRef l a              -- "lazy reference"
newLazyRef :: a -> Lazy l (LazyRef l a)
modifyLazyRef :: LazyRef l a -> (a -> a) -> Lazy l ()
```
You’d create a lazy ref for values that you want to guarantee you never read, and the only thing you can do with these refs is modify their eventual values. I don’t know, though… this doesn’t seem right. Hit me up on Twitter or something if you have any ideas.
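For what it’s worth, one conceivable way to back that sketch is to piggyback on `ST` and simply not export any read operation: nothing can force what nothing can read. This is pure speculation on my part, not a real library:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST
import Data.STRef

-- Reuse ST's state token as the lazy region.
newtype Lazy l a = Lazy { unLazy :: ST l a }
  deriving (Functor, Applicative, Monad)

newtype LazyRef l a = LazyRef (STRef l a)

runLazy :: (forall l. Lazy l a) -> a
runLazy m = runST (unLazy m)

newLazyRef :: a -> Lazy l (LazyRef l a)
newLazyRef x = Lazy (LazyRef <$> newSTRef x)

-- modifySTRef only builds up a thunk; nothing here ever forces the
-- value, and since no read is exported, nothing downstream can either.
modifyLazyRef :: LazyRef l a -> (a -> a) -> Lazy l ()
modifyLazyRef (LazyRef r) f = Lazy (modifySTRef r f)

-- Even a bottom value is safe, because it's never demanded:
demo :: String
demo = runLazy $ do
  r <- newLazyRef (undefined :: Int)
  modifyLazyRef r (+ 1)
  return "never forced"
```

Of course, the "guarantee" here comes from the module’s export list rather than from anything the type system proves about forcing, which is exactly the kind of half-measure that makes me suspect this isn’t the right shape.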
Update: Tying the knot is for jerks. Any code that requires lazy evaluation not merely for economy but for correctness is too brittle for practical use. Laziness is awesome, and some algorithms are far more elegantly expressed lazily than strictly, but all these knot-tying shenanigans have persuaded me that optional laziness (à la Scala’s `lazy val`) rather than optional strictness (à la Haskell’s `seq` or bang patterns) is the correct bias for most problems.
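For comparison, opting into strictness in Haskell looks like this (a standard strict-accumulator example, nothing novel); Scala’s `lazy val` is the mirror image, strict by default with opt-in laziness:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy by default; the bang opts the accumulator into strictness,
-- so it's forced at each step instead of piling up thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```

The point is which behavior you get when you write nothing at all: in Haskell the annotation buys you strictness, and forgetting it costs you a space leak; under the opposite bias, forgetting an annotation merely costs you some elegance.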