Thursday, March 26, 2015

New website with new talks

My website is now located at ndmitchell.com and has a few new talks on it.

My old website at community.haskell.org is going to stop being updated, and I'll be putting in redirections shortly. That server is going to stop hosting websites, so I bought myself a domain name and set up a GitHub Pages website. The repo is here, including all the data, metadata, templates and scripts.

Sunday, March 22, 2015

Finding a GHC bug

Summary: I found a nasty bug in GHC 7.10 RC3. It's been fixed.

For Shake, I have an extensive test suite (2500+ lines of tests). I also test on 7 GHC versions, including GHC HEAD. After adding GHC 7.10 Release Candidate 3 to the mix, one of the tests started failing. A week later the bug had been simplified, diagnosed and fixed as bug #10176. Below is a tale of how that happened, including a technical explanation of the bug in Step 8.

Step 1: Write a lot of tests

The Shake test that caught this particular bug checks that if the user makes a mistake then the error message must contain certain substrings correctly identifying the problem. With GHC 7.10 RC3 on Travis this test stopped throwing an exception entirely, continuing as though nothing were wrong. Weird.

Step 2: Reproduce locally

I tried to reproduce the failure locally, which ended up spotting a fatal bug in the GHC 7.10 RC3 32bit Windows version. After opting for the 64bit version, at first I couldn't reproduce the error. Eventually I realised that you needed to turn on optimisation (at -O1), and that running through ghci (how I usually develop Haskell) didn't cause the problem. Noticing that -O1 was required gave me a clue: it was related to an optimisation. The typical causes of programs that work without optimisation but fail with it are raising exceptions in pure code (since the exception raised can change under optimisation) and calling unsafePerformIO (it has unsafe in the name for a reason). I certainly do both those things in Shake, but I wasn't aware of anywhere I did them in a dubious manner.
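
As a tiny illustration of the first kind of problem (a sketch of mine, nothing to do with Shake):

-- With GHC's imprecise exceptions, either error may be the one that escapes,
-- and changing the optimisation level can legitimately change which:
main :: IO ()
main = print (error "first" + error "second" :: Int)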

Step 3: Reduce the test case

I spent a lot of time trying to reduce the test case. By inserting print statements I narrowed the place the difference was happening to Development.Shake.Core.applyKeyValue, which is a pretty core bit of Shake. However, while I was able to chop out a lot of auxiliary features (lint tracking, command tracing) the actual code remained difficult to reduce to any great extent, for two reasons. Firstly, the bug was incredibly fragile - moving a monomorphic NOINLINE function from one module to another made the bug disappear. Secondly, the applyKeyValue function is right in the middle of Shake, and the test required a few successful Shake runs to set up things for the failing test, so I couldn't change its observable semantics too much.

What I did conclude was that Shake didn't seem to be doing anything dodgy in the small patch of code that seemed relevant, giving me the first hint that maybe GHC was at fault, not Shake.

Step 4: Differences at the Core level

At this point, I reached out to the GHC mailing list, asking if anyone had any ideas of a culprit. They didn't, but Simon Peyton Jones suggested finding the smallest breaking change and comparing the generated Core. You can do that by compiling with -ddump-simpl, and adding -dsuppress-all -dsuppress-uniques to get something a bit easier to diff. Fortunately, by this point I had a very small change to make the error appear/disappear (moving a function from one module to another), so the difference in Core was tiny. The change in the problematic version read:

case (\_ -> error "here") of {}

In GHC Core a case always evaluates its scrutinee until it has the outermost value available (aka WHNF). The empty alternatives mean that GHC has proven that the evaluation always results in an exception. However, a lambda already has a value available (namely the lambda) so evaluation never throws an exception. As a result, GHC has violated the rules of Core and bad things happen.

Step 5: Reducing further

In order to reduce the bug further I now had a better test, namely:

ghc Core.hs -O -ddump-simpl | grep -F "case (\\"

With this test I didn't have to keep the internals of Shake working, and in fact didn't even have to provide a runnable entry point - all I had to do was look for the dodgy construction in the Core language. Note that I'm not actually looking for a case of a lambda with empty alternatives, reasoning (seemingly correctly) that any case on a lambda with non-empty alternatives would be eliminated by the GHC simplifier, so any case followed by a lambda is buggy.

I reduced by having a ghcid window open in one corner, using the warnings -fwarn-unused-binds and -fwarn-unused-imports. I hacked out some part of the program, then patched everything up so it no longer raised an error, using ghcid for rapid feedback. I then ran the grep test. If the bug had gone, I put the program back to how it was and tried somewhere else. If the bug remained, I cleaned up the now redundant declarations and imports and checked again, repeating until the code was minimal.
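
Concretely, that setup was along these lines (the file name is hypothetical):

ghcid -c "ghci -fwarn-unused-binds -fwarn-unused-imports Core.hs"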

Several hours later I was left with something like:

import Control.Monad.Trans.Reader (runReaderT)
import Control.Monad.IO.Class (liftIO)

buggy :: (() -> Bool) -> () -> Bool -> IO ()
buggy fun unit bool =
    runReaderT (
        (if bool then liftIO $ print () else p) >>
        (if fun unit then error2Args unit unit >> p else p)) ()

-- p can be any trivial action
p :: Monad m => m ()
p = return ()

{-# NOINLINE error2Args #-}
error2Args :: () -> () -> a
error2Args _ _ = error "here"

Note that error2Args must be in a different module to buggy.

Step 6: Bisecting

At this point hvr stepped in and bisected all the changes between GHC 7.10 RC2 and RC3, determining that a large Typeable change introduced the bug in the original Shake test case. However, using the minimal program, the bug was also present in GHC 7.10 RC2. That suggested the bug might have been around for a while.

Step 7: Augmenting GHC's Lint Checker

GHC already has a pass in the compiler, enabled with -dcore-lint, which checks for dodgy constructs in the Core language. Enabling it didn't pick up this example (hence I used grep instead), so Joachim Breitner added such a check. He also added the example as a test case, so that if it ever breaks again in future it will be spotted immediately.

Step 8: Diagnose and Fix

Joachim then continued to diagnose and fix the issue, the details of which can be found in the patch. The problem (as I understand it) is that GHC looks at the code:

fun x = error "foo" x

And concludes two facts.

  1. If fun is called with one argument then the code will raise an error. That's true, and allows the compiler to replace fun () () with fun ().
  2. After analysing all calls of fun it spots that fun is always called with two arguments, so it is free to change fun to be fun x y = error "foo" x y.

By applying these two facts, we can make the transformation:

case fun () () of {}
-- apply the first rule
case fun () of {}
-- inline fun after applying the second rule
case (\x y -> error "foo" x y) () of {}
-- beta reduce:
case (\y -> error "foo" () y) of {}

Now we have caused invalid Core to be produced. While the two facts are each individually correct, applying the first fact causes the second fact to stop being true. Joachim fixed this by making the call argument count analysis stop at the first argument that guarantees an error.

Step 9: The Impact

The manifestation of the bug is quite interesting. Essentially GHC decides something is an error, but then fails to actually throw the error. As a result, any code the simplifier places after the error call will be eliminated, and that can remove a large chunk of the program. However, any code the simplifier doesn't manage to hoist next to the error call will still get run, even though it should have been skipped due to the error. In essence, given exactly the wrong conditions to trigger the bug, you can write:

main = do
    putStrLn "here1"
    ... error "foo" ...
    putStrLn "here2"
    ...
    putStrLn "here3"

And end up with the program printing here1 followed by here3, without throwing an exception. In the case of my original Shake test, the build started to compile, should have stopped with an error, but instead skipped compiling altogether and went on to the steps after compilation. A very weird manifestation.

Disclaimer: I've eliminated many missteps of mine, which included pushing random patches to try and reduce on the Travis machine and installing a Linux VM.

Monday, March 09, 2015

Implementing a Functor instance

Summary: Implementing a Functor instance is much easier than implementing a Monad instance, and can turn out to be quite useful.

Haskell forces all programmers to understand some details of the Monad typeclass to do basic IO, but currently nothing forces people to learn the Functor typeclass. However, Functor is much simpler than Monad, and all Monads must be Functors, so thinking more about Functor can be a nice route to understanding Monad better.

An intuitive description of a functor is:

A container whose contents can be replaced, without changing the shape of the container.

Some example functors include lists and Maybe. Both contain values, and you can replace the values inside them. In fact, most types with a single type parameter can be made functors. For example, in CmdArgs I define something similar to:

data Group a = Group {groupUnnamed :: [a], groupNamed :: [(String, [a])]}

This Group structure contains a values inside it. Sometimes it is useful to transform all the underlying a values, perhaps to a different type. The Functor instance has a single member:

fmap :: Functor f => (a -> b) -> f a -> f b

For the above type, we instantiate f to Group so we get:

fmap :: (a -> b) -> Group a -> Group b

We can implement fmap by applying f to every a value inside Group:

instance Functor Group where
    fmap f (Group a b) = Group (map f a) [(x, map f y) | (x,y) <- b]

Note in particular that Group is usually written Group a, but in the instance declaration we're omitting the a, to say that Group itself (without any arguments) is a functor. Providing insufficient type arguments like that makes Functor a higher-kinded type class, in contrast to classes like Eq or Ord, whose instances would be on Group a.
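
As a quick usage example (my own, not from CmdArgs):

example :: Group Int
example = fmap length (Group ["ab"] [("flags", ["cd", "e"])])
-- example == Group [2] [("flags", [2, 1])]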

When implementing fmap the type checker eliminates most bad implementations, so the only law you need to think about is that fmap id = id - given the identity function, the value shouldn't change. We can show this law for Group with:

Group a b = fmap id (Group a b)
-- inline fmap
Group a b = Group (map id a) [(x, map id y) | (x,y) <- b]
-- map id x ==> x
Group a b = Group a [(x, y) | (x,y) <- b]
-- simplify list comprehension
Group a b = Group a b
-- equal

In fact, the function map is just fmap specialised to [], so the rule map id x ==> x is just applying the fmap id = id law on lists. From this law (with the help of parametricity), we can derive the additional law that:

fmap (f . g)  ==  fmap f . fmap g

Both these laws can serve as the basis for optimisation opportunities, reducing the number of times we traverse a value, and GHC exploits these laws for the list type.
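
For example (a sketch of the idea, not the actual GHC rewrite rules):

-- Two traversals of the list:
slow :: [Int] -> [Int]
slow = map (+1) . map (*2)

-- One traversal, justified by the composition law:
fast :: [Int] -> [Int]
fast = map ((+1) . (*2))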

In general, most data types that take a type parameter can be made functors, but there are a few common exceptions:

  • You have a value on the left of an arrow – for example data Foo a = Foo (a -> Int) cannot be made a functor, since fmap only supplies a function a -> b, and we have no way to change the incoming b back to an a.
  • You have an invariant relating the structure and the elements. For example data OrdList a = Nil | Gt a (OrdList a), where all functions on OrdList have an Ord context, and OrdList is exported abstractly. Here the functor would break the abstraction.
  • You require an instance for the element type, e.g. Data.Vector.Storable requires a Storable instance to create a vector, which Functor does not allow.
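
To illustrate the last point (a small sketch of mine using the vector package):

import qualified Data.Vector.Storable as VS

-- fmap :: (a -> b) -> f a -> f b has nowhere to demand Storable b,
-- so Data.Vector.Storable exposes its own map with the extra contexts:
-- map :: (Storable a, Storable b) => (a -> b) -> Vector a -> Vector b
double :: VS.Vector Int -> VS.Vector Int
double = VS.map (* 2)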

The name functor may sound scary, or confusing to C++ programmers (who use functor to mean a function object) – but functors are a nice simple abstraction.

Wednesday, February 25, 2015

Making withSocketsDo unnecessary

Summary: Currently you have to call withSocketsDo before using the Haskell network library. In the next version you won't have to.

The Haskell network library has always had a weird and unpleasant invariant. Under Windows, you must call withSocketsDo before calling any other functions. If you forget, the error message isn't particularly illuminating (e.g. getAddrInfo: does not exist (error 10093)). Calling withSocketsDo isn't harmful under Linux, but equally isn't necessary, and thus easy to accidentally omit. The network library has recently merged some patches so that in future versions there is no requirement to call withSocketsDo, even on Windows.
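
For example, correct use under existing versions looks something like this (a minimal sketch):

import Network.Socket

main :: IO ()
main = withSocketsDo $ do
    addrs <- getAddrInfo Nothing (Just "example.com") (Just "http")
    print (length addrs)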

Existing versions of network

The reason for requiring withSocketsDo is so that the network library can initialise the Windows Winsock library. The code for withSocketsDo was approximately:

withSocketsDo :: IO a -> IO a
#if WINDOWS
withSocketsDo act = do
    initWinsock
    act `finally` termWinsock
#else
withSocketsDo act = act
#endif

Where initWinsock and termWinsock were C functions. Both checked a mutable variable so they only initialised/terminated once. The initWinsock function immediately initialised the Winsock library. The termWinsock function did not terminate the library, but merely installed an atexit handler, providing a function that ran when the program shut down which terminated the Winsock library.

As a result, in all existing versions of the network library, it is fine to nest calls to withSocketsDo, call withSocketsDo multiple times, and to perform networking operations after withSocketsDo has returned.

Future versions of network

My approach to removing the requirement to call withSocketsDo was to make it very cheap, then sprinkle it everywhere it might be needed. Making such a function cheap on non-Windows just required an INLINE pragma (although it's very likely GHC would have always inlined the function anyway).

For Windows, I changed to:

withSocketsDo act = do evaluate withSocketsInit; act 

{-# NOINLINE withSocketsInit #-}
withSocketsInit = unsafePerformIO $ do
    initWinsock
    termWinsock

Now withSocketsDo is very cheap, with subsequent calls requiring no FFI calls, and thanks to pointer tagging, just a few cheap instructions. When placing additional withSocketsDo calls my strategy was to make sure I called it before constructing a Socket (which many functions take as an argument), and when taking one of the central locks required for the network library. In addition, I identified a few places not otherwise covered.

In newer versions of the network library it is probably never necessary to call withSocketsDo - if you find a place where one is necessary, let me know. However, for compatibility with older versions on Windows, it is good practice to always call withSocketsDo. Libraries making use of the network library should probably call withSocketsDo on their users' behalf.
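
For example, a library might wrap its entry points like this (a hypothetical sketch):

import Network.Socket

-- resolve calls withSocketsDo, so users of the library don't have to
resolve :: HostName -> ServiceName -> IO [AddrInfo]
resolve host port = withSocketsDo $ getAddrInfo Nothing (Just host) (Just port)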

Tuesday, February 17, 2015

nub considered harmful

Summary: Don't use nub. A much faster alternative is nubOrd from the extra package.

The Haskell Data.List module contains the function nub, which removes duplicate elements. As an example:

nub [1,2,1,3] == [1,2,3]

The function nub has the type Eq a => [a] -> [a]. The complexity of take i $ nub xs is O(length xs * i). Assuming all elements are distinct and you want them all, that is O(length xs ^ 2). If we only have an Eq instance, that's the best complexity we can achieve. The reason is that given a list as ++ [b], to check if b should be in the output requires checking b for equality against nub as, which requires a linear scan. Since checking each element requires a linear scan, we end up with a quadratic complexity.
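
To see where the quadratic behaviour comes from, here is a definition equivalent to the standard one (a sketch, not the exact Data.List source):

nubSketch :: Eq a => [a] -> [a]
nubSketch = go []
    where
        go _ [] = []
        go seen (x:xs)
            | x `elem` seen = go seen xs                -- linear scan per element
            | otherwise = x : go (seen ++ [x]) xs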

However, if we have an Ord instance (as we usually do), we have a complexity of O(length xs * log i) - a function that grows significantly slower. The reason is that we can build a balanced binary-tree for the previous elements, and check each new element in log time. Does that make a difference in practice? Yes. As the graph below shows, by the time we get to 10,000 elements, nub is 70 times slower. Even at 1,000 elements nub is 8 times slower.

[Graph: nub vs nubOrd runtime, for lists of 0 to 10,000 elements]

The fact nub is dangerous isn't new information, and I even suggested changing the base library in 2007. Currently there seems to be a nub hit squad, including Niklas Hambüchen, who go around raising tickets against various projects suggesting they avoid nub. To make that easier, I've added nubOrd to my extra package, in the Data.List.Extra module. The function nubOrd has exactly the same semantics as nub (both strictness properties and ordering), but is asymptotically faster, so is almost a drop-in replacement (just the additional Ord context).
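
The real implementation is more heavily tuned, but the core idea is simple (a sketch, not the actual extra source):

import qualified Data.Set as Set

nubOrdSketch :: Ord a => [a] -> [a]
nubOrdSketch = go Set.empty
    where
        go _ [] = []
        go seen (x:xs)
            | x `Set.member` seen = go seen xs          -- O(log n) membership test
            | otherwise = x : go (Set.insert x seen) xs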

For the curious, the above graph was generated in Excel, with the code below. I expect the spikes in nub correspond to garbage collection, or just random machine fluctuations.

import Control.Exception
import Data.List.Extra
import Control.Monad
import System.Time.Extra

benchmark xs = do
    n <- evaluate $ length xs
    (t1,_) <- duration $ evaluate $ length $ nub xs
    (t2,_) <- duration $ evaluate $ length $ nubOrd xs
    putStrLn $ show n ++ "," ++ show t1 ++ "," ++ show t2

main = do
    forM_ [0,100..10000] $ \i -> benchmark $ replicate i 1
    forM_ [0,100..10000] $ \i -> benchmark [1..i]

Tuesday, February 10, 2015

Why is the Hoogle index so out of date?

Summary: Hoogle 4 is out of date. The alpha version Hoogle 5 has fresh code and data every day (and isn't yet ready).

Someone recently asked why Hoogle's index is so out of date. Making the index both more current (updated daily) and larger (indexing all of Stackage) is one of the goals behind my Hoogle 5 rewrite (which still isn't finished). Let's compare the different update processes:

Hoogle 4 updates took about two hours to complete, if they went well, and often had to be aborted. I first compiled the Hoogle binary on the haskell.org machines, which often failed, as typically the version of GHC was very old. Once I'd got a compiled binary, I needed to generate the database, which took about 2 hours, and occasionally failed halfway through. Once I had the new binary and databases I moved everything to the correct place for Apache, accepting a small window of downtime during the move. Assuming that worked, I did a few test searches and smiled. Often the new Hoogle binary failed to start (usually failure to find some files, sometimes permissions) and I had to switch back to the old copy. Fixing up such issues took up to an hour. I had a mix of Windows .bat and Linux .sh scripts to automate some of the steps, but they weren't very robust, and required babysitting.

Hoogle 5 updates happen automatically at 8pm every night, take 4 minutes, and have yet to fail. I have a cron script that checks out the latest code and runs an update script. That script clones a fresh repo, compiles Hoogle, builds the databases, runs the test suite, kills the old version and launches the new version. The Hoogle code is all tested on Travis, so I don't expect that to fail very often. The upgrade script is hard to test, but the two failure modes are upgrading to a broken version, or not upgrading. The upgrade script runs checks and fails if anything doesn't work as expected, so it errs on the side of not upgrading. I use Uptime Robot to run searches and check the server is working, along with a canary page which raises an error if no upgrade happens for two days.

Clearly, the Hoogle 5 version update story is better. But why didn't I do it that way with Hoogle 4? The answer is that Hoogle 4 came out over six years ago, and a lot has changed since then:

  • Hoogle 4 is a CGI binary, served through Apache, while Hoogle 5 is a Haskell Warp server. By moving the logic into Haskell, it's far easier for me to configure and manage. Warp was only released on Hackage in 2011.
  • Hoogle 4 runs on the main haskell.org server, where my mistakes can easily take out the haskell.org home page (as a result, the haskell.org home page once said "moo" for 10 minutes). Hoogle 5 runs on a dedicated VM where I have root, and no one else runs anything, so I can experiment with settings for swap files, IP tables and cron jobs.
  • My job has provided a lot of practice doing drive-by sysadmining over the last 6 years. I've also had a lot of practice doing critical releases on a nightly basis. In comparison, Hoogle is pretty simple.
  • The revised/rethought approach to Hoogle databases is a lot faster and uses a lot less memory, so it takes under a minute to generate databases, instead of over an hour. That time difference makes it much easier to experiment with different approaches.

When will Hoogle 5 be ready? It doesn't yet do type search, there is no offline version and no API. There are probably lots of other little pieces missing. If you want, feel free to use it now at hoogle.haskell.org. You can still use Hoogle 4 at haskell.org/hoogle, or the more up-to-date FP complete hosted Hoogle 4.

Thursday, February 05, 2015

Refactoring with Equational Reasoning

Summary: Haskell is great for refactoring, thanks to being able to reason about and transform programs with confidence.

I think one of Haskell's strengths as a practical language is that it's easy to refactor, and more importantly, easy to refactor safely. Programs in the real world often accumulate technical debt - code that is shaped more by its history than its current purpose. Refactoring is one way to address that technical debt, making the code simpler, but not changing any meaningful behaviour.

When refactoring, you need to think of which alternative forms of code are equivalent but better. In C++ I've removed unused variables, only to find they were RAII variables, and their mere presence had a semantic effect. In R I've removed redundant if expressions, only to find the apparently pure condition had the effect of coercing a variable and changing its type. In Haskell, it's equally possible to make refactorings that at first glance appear safe but aren't - however, in practice, it happens a lot less. I think there are a few reasons for that:

  • Haskell is pure and evaluation strategy is largely unobservable - moving a statement "before" or "after" another lexically is usually safe.
  • Refactorings that do go wrong, for example variables that accidentally get moved out of scope or types which are no longer as constrained, usually result in compiler errors.
  • The Haskell community cares about semantics and laws. The Monad laws are satisfied by almost all monads; flagrantly breaking those laws is rare.
  • Functions like unsafePerformIO, which could harm refactoring, are almost always used behind a suitable pure abstraction.

Note that these reasons are due to both the language, and the conventions of the Haskell community. (Despite these factors, there are a few features that can trip up refactorings, e.g. exceptions, record wildcards, space-leaks.)

To take a very concrete example, today I was faced with the code:

f = fromMaybe (not b) . select
if f v == b then opt1 else opt2

At one point the function f was used lots, had a sensible name and nicely abstracted some properties. Now f is used once, the semantics are captured elsewhere, and the code is just unclear. We can refactor this statement, focusing on the condition:

f v == b
-- inline f
(fromMaybe (not b) . select) v == b
-- remove brackets and inline (.)
fromMaybe (not b) (select v) == b
-- expand to a case statement
(case select v of Nothing -> not b; Just x -> x) == b
-- push the == down
case select v of Nothing -> not b == b; Just x -> x == b
-- simplify not b == b
case select v of Nothing -> False; Just x -> x == b
-- collapse back up
select v == Just b

And now substitute back in:

if select v == Just b then opt1 else opt2

Our code is now much simpler and more direct. Thanks to the guarantees I expect of Haskell programs, I also have a high degree of confidence this code really is equivalent - even if it isn't obvious just looking at beginning and end.
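
For extra reassurance, such equivalences are also easy to test mechanically. Here is a small QuickCheck sketch (the property is my own illustration, using Fun to generate arbitrary select functions):

import Test.QuickCheck
import Data.Maybe (fromMaybe)

-- Check the original and refactored conditions agree for arbitrary
-- select functions, values and booleans.
prop_equivalent :: Fun Int (Maybe Bool) -> Int -> Bool -> Bool
prop_equivalent (Fun _ select) v b =
    ((fromMaybe (not b) . select) v == b) == (select v == Just b)

main :: IO ()
main = quickCheck prop_equivalent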