The Haskell Tool Stack


stack is a cross-platform program for developing Haskell projects. It is aimed at Haskellers both new and experienced.

It features:

  • Installing GHC automatically, in an isolated location.
  • Installing packages needed for your project.
  • Building your project.
  • Testing your project.
  • Benchmarking your project.

How to install

Downloads are available by operating system:

Upgrade instructions

Note: if you are using cabal-install to install stack, you may need to pass a constraint to work around a Cabal issue: cabal install --constraint 'mono-traversable >= 0.9' stack.

Quick Start Guide

First you need to install it (see previous section).

Start your new project:
stack new my-project
cd my-project
stack setup
stack build
stack exec my-project-exe
  • The stack new command will create a new directory containing all the needed files to start a project correctly.
  • The stack setup command will download the compiler, if necessary, to an isolated location (default ~/.stack) that won’t interfere with any system-level installations. (For information on installation paths, please use the stack path command.)
  • The stack build command will build the minimal project.
  • stack exec my-project-exe will run the executable that was just built.
  • If you just want to install an executable using stack, then all you have to do is stack install <package-name>.

If you want to launch a REPL:

stack ghci

Run stack for a complete list of commands.

Workflow

The stack new command should have created the following files:

.
├── LICENSE
├── Setup.hs
├── app
│   └── Main.hs
├── my-project.cabal
├── src
│   └── Lib.hs
├── stack.yaml
└── test
    └── Spec.hs

    3 directories, 7 files

So to manage your library:

  1. Edit files in the src/ directory.

     The app directory should preferably contain only files related to executables.

  2. If you need to include another library (for example the package text):
    • Add the package text to the build-depends: ... section of my-project.cabal.
    • Run stack build again.
  3. If you get an error telling you that your package isn’t in the LTS, try adding the needed version to the extra-deps section of your stack.yaml file (see the sketch just after this list).
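
For illustration, here is roughly what those two edits look like. This is a minimal sketch: the exact layout of the generated my-project.cabal may differ, and the text version listed under extra-deps is only a hypothetical example.

-- in my-project.cabal
library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                     , text
  default-language:    Haskell2010

# in stack.yaml, only needed if the snapshot does not provide the version you want
extra-deps:
- text-1.2.2.0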

That was a very quick introduction to starting Haskell development with stack. If you want to go further, we highly recommend reading the stack guide.

How to contribute

This assumes that you have already installed a version of stack, and have git installed.

  1. Clone stack from git with git clone https://github.com/commercialhaskell/stack.git.
  2. Enter into the stack folder with cd stack.
  3. Build stack using a pre-existing stack install with stack setup && stack build.
  4. Once stack finishes building, check the stack version with stack --version. Make sure the version is the latest.
  5. Look for issues tagged with newcomer and awaiting-pr labels.

Build from source as a one-liner:

git clone https://github.com/commercialhaskell/stack.git && \
cd stack && \
stack setup && \
stack build

Complete guide to stack

This repository also contains a complete user guide to using stack, covering all of the most common use cases.

Questions, Feedback, Discussion

Why stack?

stack is a project of the Commercial Haskell group, spearheaded by FP Complete. It is designed to answer the needs of commercial Haskell users, hobbyist Haskellers, and individuals and companies thinking about starting to use Haskell. It is intended to be easy to use for newcomers, while providing the customizability and power experienced developers need.

While stack itself has been around since June of 2015, it is based on codebases used by FP Complete for its corporate customers and internally for years prior. stack is a refresh of that codebase combined with other open source efforts like stackage-cli to meet the needs of users everywhere.

A significant impetus for the work on stack was a large survey of people interested in Haskell, which rated build issues as a major concern. The stack team hopes that stack can address these concerns.

Changelog

0.1.10.1

Bug fixes:

  • stack image container did not actually build an image #1473

0.1.10.0

Release notes:

  • The Stack home page is now at haskellstack.org, which shows the documentation rendered by readthedocs.org. Note: this has necessitated some changes to the links in the documentation’s markdown source code, so please check the links on the website before submitting a PR to fix them.
  • The locations of the Ubuntu and Debian package repositories have changed to have correct URL semantics according to Debian’s guidelines #1378. The old locations will continue to work for some months, but we suggest that you adjust your /etc/apt/sources.list.d/fpco.list to the new location to avoid future disruption.
  • openSUSE and SUSE Linux Enterprise packages are now available, thanks to @mimi1vx. Note: there will be some lag before these pick up new versions, as they are based on Stackage LTS.

Major changes:

  • Support for building inside a Nix-shell providing system dependencies #1285
  • Add optional GPG signing on stack upload --sign or with stack sig sign ...

Other enhancements:

  • Print latest applicable version of packages on conflicts #508
  • Support for packages located in Mercurial repositories #1397
  • Only run benchmarks specified as build targets #1412
  • Support git-style executable fall-through (stack something executes stack-something if present) #1433
  • GHCi now loads intermediate dependencies #584
  • --work-dir option for overriding .stack-work #1178
  • Support detailed-0.9 tests #1429
  • Docker: improved POSIX signal proxying to containers #547

Bug fixes:

  • Show absolute paths in error messages in multi-package builds #1348
  • Docker-built binaries and libraries in different path #911 #1367
  • Docker: --resolver argument didn’t affect the selected image tag
  • GHCi: Spaces in filepaths caused module loading issues #1401
  • GHCi: cpp-options in cabal files weren’t used #1419
  • Benchmarks couldn’t be run independently of each other #1412
  • Send output of building setup to stderr #1410

0.1.8.0

Major changes:

  • GHCJS can now be used with stackage snapshots via the new compiler field.
  • Windows installers are now available: download them here #613
  • Docker integration works with non-FPComplete generated images #531

Other enhancements:

  • Added an allow-newer config option #922 #770
  • When a Hackage revision invalidates a build plan in a snapshot, trust the snapshot #770
  • Added a stack config set resolver RESOLVER command. Part of work on #115
  • stack setup can now install GHCJS on Windows. See #1145 and #749
  • stack hpc report command added, which generates reports for HPC tix files
  • stack ghci now accepts all the flags accepted by stack build. See #1186
  • stack ghci builds the project before launching GHCi. If the build fails, it optimistically launches GHCi anyway; use the stack ghci --no-build option to disable this #1065
  • stack ghci now detects and warns about various circumstances where it is liable to fail. See #1270
  • Added require-docker-version configuration option
  • Packages will now usually be built along with their tests and benchmarks. See #1166
  • Relative local-bin-path paths will be relative to the project’s root directory, not the current working directory. #1340
  • stack clean now takes an optional [PACKAGE] argument for use in multi-package projects. See #583
  • Ignore cabal_macros.h as a dependency #1195
  • Pad timestamps and show local time in --verbose output #1226
  • GHCi: Import all modules after loading them #995
  • Add subcommand aliases: repl for ghci, and runhaskell for runghc #1241
  • Add typo recommendations for unknown package identifiers #158
  • Add stack path --local-hpc-root option
  • Overhaul dependencies’ haddocks copying #1231
  • Support for extra-package-dbs in ‘stack ghci’ #1229
  • stack new disallows package names with “words” consisting solely of numbers #1336
  • stack build --fast turns off optimizations

Bug fixes:

  • Fix: Haddocks not copied for dependencies #1105
  • Fix: Global options did not work consistently after subcommand #519
  • Fix: ‘stack ghci’ doesn’t notice that a module got deleted #1180
  • Rebuild when cabal file is changed
  • Fix: Paths in GHC warnings not canonicalized, nor those for packages in subdirectories or outside the project root #1259
  • Fix: unlisted files in tests and benchmarks trigger extraneous second build #838

0.1.6.0

Major changes:

  • stack setup now supports building and booting GHCJS from source tarball.
  • On Windows, build directories no longer display “pretty” information (like x86_64-windows/Cabal-1.22.4.0), but rather a hash of that content. The reason is to avoid the 260 character path limitation on Windows. See #1027
  • Rename config files and clarify their purposes #969
    • ~/.stack/stack.yaml -> ~/.stack/config.yaml
    • ~/.stack/global -> ~/.stack/global-project
    • /etc/stack/config -> /etc/stack/config.yaml
    • Old locations still supported, with deprecation warnings
  • New command “stack eval CODE”, which evaluates to “stack exec ghc -- -e CODE”.

Other enhancements:

  • No longer install git on Windows #1046. You can still get this behavior by running the following yourself: stack exec -- pacman -Sy --noconfirm git.
  • Typing enter during --file-watch triggers a rebuild #1023
  • Use Haddock’s --hyperlinked-source (crosslinked source), if available #1070
  • Use Stack-installed GHCs for stack init --solver #1072
  • New experimental stack query command #1087
  • By default, stack no longer rebuilds a package due to GHC options changes. This behavior can be tweaked with the rebuild-ghc-options setting. #1089
  • By default, ghc-options are applied to all local packages, not just targets. This behavior can be tweaked with the apply-ghc-options setting. #1089
  • Docker: download or override location of stack executable to re-run in container #974
  • Docker: when Docker Engine is remote, don’t run containerized processes as host’s UID/GID #194
  • Docker: set-user option to enable/disable running containerized processes as host’s UID/GID #194
  • Custom Setup.hs files are now precompiled instead of interpreted. This should be a major performance win for certain edge cases (biggest example: building Cabal itself) while being either neutral or a minor slowdown for more common cases.
  • stack test --coverage now also generates a unified coverage report for multiple test-suites / packages. In the unified report, test-suites can contribute to the coverage of other packages.

Bug fixes:

  • Ignore stack-built executables named ghc #1052
  • Fix quoting of output failed command line arguments
  • Mark executable-only packages as installed when copied from cache #1043
  • Canonicalize temporary directory paths #1047
  • Put code page fix inside the build function itself #1066
  • Add explicit-setup-deps option #1110, and change the default to the old behavior of using any package in the global and snapshot database #1025
  • Precompiled cache checks full package IDs on Cabal < 1.22 #1103
  • Pass -package-id to ghci #867
  • Ignore global packages when copying precompiled packages #1146

0.1.5.0

Major changes:

  • On Windows, we now use a full MSYS2 installation in place of the previous PortableGit. This gives you access to the pacman package manager for more easily installing libraries.
  • Support for custom GHC binary distributions #530
    • ghc-variant option in stack.yaml to specify the variant (also --ghc-variant command-line option)
    • setup-info in stack.yaml, to specify where to download custom binary distributions (also --ghc-bindist command-line option)
    • Note: On systems with libgmp4 (aka libgmp.so.3), such as CentOS 6, you may need to re-run stack setup due to the centos6 GHC bindist being treated like a variant
  • A new --pvp-bounds flag to the sdist and upload commands allows automatic adding of PVP upper and/or lower bounds to your dependencies

Other enhancements:

  • Adapt to upcoming Cabal installed package identifier format change #851
  • stack setup takes a --stack-setup-yaml argument
  • --file-watch is more discerning about which files to rebuild for #912
  • stack path now supports --global-pkg-db and --ghc-package-path
  • --reconfigure flag #914 #946
  • Cached data is written with a checksum of its structure #889
  • Fully removed --optimizations flag
  • Added --cabal-verbose flag
  • Added --file-watch-poll flag for polling instead of using filesystem events (useful for running tests in a Docker container while modifying code in the host environment. When code is injected into the container via a volume, the container won’t propagate filesystem events).
  • Give a preemptive error message when -prof is given as a GHC option #1015
  • Locking is now optional, and will be turned on by setting the STACK_LOCK environment variable to true #950
  • Create default stack.yaml with documentation comments and commented out options #226
  • Out of memory warning if Cabal exits with -9 #947

Bug fixes:

  • Hacky workaround for optparse-applicative issue with stack exec --help #806
  • Build executables for local extra deps #920
  • copyFile can’t handle directories #942
  • Support for spaces in Haddock interface files fpco/minghc#85
  • Temporarily building against a “shadowing” local package? #992
  • Fix Setup.exe name for --upgrade-cabal on Windows #1002
  • Unlisted dependencies no longer trigger extraneous second build #838

0.1.4.1

Fix stack’s own Haddocks. No changes to functionality (only comments updated).

0.1.4.0

Major changes:

  • You now have more control over how GHC versions are matched, e.g. “use exactly this version,” “use the specified minor version, but allow patches,” or “use the given minor version or any later minor in the given major release.” The default has switched from allowing newer later minor versions to a specific minor version allowing patches. For more information, see #736 and #784.
  • Support added for compiling with GHCJS
  • stack can now reuse prebuilt binaries between snapshots. That means that, if you build package foo in LTS-3.1, that binary version can be reused in LTS-3.2, assuming it uses the same dependencies and flags. #878

Other enhancements:

  • Added the --docker-env argument, to set environment variables in the Docker container.
  • Set locale environment variables to UTF-8 encoding for builds to avoid “commitBuffer: invalid argument” errors from GHC #793
  • Enable transliteration for encoding on stdout and stderr #824
  • By default, stack upgrade automatically installs GHC as necessary #797
  • Added the ghc-options field to stack.yaml #796
  • Added the extra-path field to stack.yaml
  • Code page changes on Windows only apply to the build command (and its synonyms), and can be controlled via a command line flag (still defaults to on) #757
  • Implicitly add packages to extra-deps when a flag for them is set #807
  • Use a precompiled Setup.hs for simple build types #801
  • Set --enable-tests and --enable-benchmarks optimistically #805
  • --only-configure option added #820
  • Check for duplicate local package names
  • Stop nagging people that call stack test #845
  • --file-watch will ignore files that are in your VCS boring/ignore files #703
  • Add --numeric-version option

Bug fixes:

  • stack init --solver fails if GHC_PACKAGE_PATH is present #860
  • stack solver and stack init --solver check for test suite and benchmark dependencies #862
  • More intelligent logic for setting UTF-8 locale environment variables #856
  • Create missing directories for stack sdist
  • Don’t ignore .cabal files with extra periods #895
  • Deprecate unused --optimizations flag
  • Truncated output on slow terminals #413

0.1.3.1

Bug fixes:

  • Ignore disabled executables #763

0.1.3.0

Major changes:

  • Detect when a module is compiled but not listed in the cabal file (#32)
    • A warning is displayed for any modules that should be added to other-modules in the .cabal file
    • These modules are taken into account when determining whether a package needs to be built
  • Respect TemplateHaskell addDependentFile dependency changes (#105)
    • TH dependent files are taken into account when determining whether a package needs to be built.
  • Overhauled target parsing, added --test and --bench options #651

Other enhancements:

  • Set the HASKELL_DIST_DIR environment variable #524
  • Track build status of tests and benchmarks #525
  • --no-run-tests #517
  • Targets outside of root dir don’t build #366
  • Upper limit on number of flag combinations to test #543
  • Fuzzy matching support to give better error messages for close version numbers #504
  • --local-bin-path global option. Use to change where binaries get placed on a --copy-bins #342
  • Custom snapshots #111
  • --force-dirty flag: Force treating all local packages as having dirty files (useful for cases where stack can’t detect a file change)
  • GHC error messages: display file paths as absolute instead of relative for better editor integration
  • Add the --copy-bins option #569
  • Give warnings on unexpected config keys #48
  • Remove Docker pass-host option
  • Don’t require cabal-install to upload #313
  • Generate indexes for all deps and all installed snapshot packages #143
  • Provide --resolver global option #645
    • Also supports --resolver nightly, --resolver lts, and --resolver lts-X
  • Make stack build --flag error when flag or package is unknown #617
  • Preserve file permissions when unpacking sources #666
  • stack build etc work outside of a project
  • list-dependencies command #638
  • --upgrade-cabal option to stack setup #174
  • --exec option #651
  • --only-dependencies implemented correctly #387

Bug fixes:

  • Extensions from the other-extensions field no longer enabled by default #449
  • Fix: haddock forces rebuild of empty packages #452
  • Don’t copy over executables excluded by component selection #605
  • Fix: stack fails on Windows with git package in stack.yaml and no git binary on path #712
  • Fixed GHCi issue: Specifying explicit package versions (#678)
  • Fixed GHCi issue: Specifying -odir and -hidir as .stack-work/odir (#529)
  • Fixed GHCi issue: Specifying A instead of A.ext for modules (#498)

0.1.2.0

  • Add --prune flag to stack dot #487
  • Add --[no-]external,--[no-]include-base flags to stack dot #437
  • Add --ignore-subdirs flag to init command #435
  • Handle attempt to use non-existing resolver #436
  • Add --force flag to init command
  • exec style commands accept the --package option (see Reddit discussion)
  • stack upload without arguments doesn’t do anything #439
  • Print latest version of packages on conflicts #450
  • Flag to avoid rerunning tests that haven’t changed #451
  • stack can act as a script interpreter (see [Script interpreter](https://github.com/commercialhaskell/stack/wiki/Script-interpreter) and Reddit discussion)
  • Add the --file-watch flag to auto-rebuild on file changes #113
  • Rename stack docker exec to stack exec --plain
  • Add the --skip-msys flag #377
  • --keep-going, turned on by default for tests and benchmarks #478
  • concurrent-tests: BOOL #492
  • Use hashes to check file dirtiness #502
  • Install correct GHC build on systems with libgmp.so.3 #465
  • stack upgrade checks version before upgrading #447

0.1.1.0

  • Remove GHC uncompressed tar file after installation #376
  • Put stackage snapshots JSON on S3 #380
  • Specifying flags for multiple packages #335
  • single test suite failure should show entire log #388
  • valid-wanted is a confusing option name #386
  • stack init in multi-package project should use local packages for dependency checking #384
  • Display information on why a snapshot was rejected #381
  • Give a reason for unregistering packages #389
  • stack exec accepts the --no-ghc-package-path parameter
  • Don’t require build plan to upload #400
  • Specifying test components only builds/runs those tests #398
  • STACK_EXE environment variable
  • Add the stack dot command
  • stack upgrade added #237
  • --stack-yaml command line flag #378
  • --skip-ghc-check command line flag #423

Bug fixes:

  • Haddock links to global packages no longer broken on Windows #375
  • Make flags case-insensitive #397
  • Mark packages uninstalled before rebuilding #365

0.1.0.0

  • Fall back to cabal dependency solver when a snapshot can’t be found
  • Basic implementation of stack new #137
  • stack solver command #364
  • stack path command #95
  • Haddocks #143:
    • Build for dependencies
    • Use relative links
    • Generate module contents and index for all packages in project

0.0.3

  • --prefetch #297
  • upload command ported from stackage-upload #225
  • --only-snapshot #310
  • --resolver #224
  • stack init #253
  • --extra-include-dirs and --extra-lib-dirs #333
  • Specify intra-package target #201

0.0.2

  • Fix some Windows specific bugs #216
  • Improve output for package index updates #227
  • Automatically update indices as necessary #227
  • --verbose flag #217
  • Remove packages (HTTPS and Git) #199
  • Config values for system-ghc and install-ghc
  • Merge stack deps functionality into stack build
  • install command #153 and #272
  • overriding architecture value (useful to force 64-bit GHC on Windows, for example)
  • Overhauled test running (allows cycles, avoids unnecessary recompilation, etc)

0.0.1

  • First public release, beta quality

Contributors Guide

Bug Reports

When reporting a bug, please write in the following format:

[Any general summary/comments if desired]
Steps to reproduce:
  1. Remove directory blah.
  2. Run command stack blah.
  3. Edit file blah.
  4. Run command stack blah.
Expected:
What I expected to see and happen.
Actual:

What actually happened.

Here is the stack --version output:

$ stack --version
Version 0.0.2, Git revision 6a86ee32e5b869a877151f74064572225e1a0398

Here is the command I ran with --verbose:

$ stack <your command here> <args> --verbose
<output>

With --verbose mode we can see what the tool is doing and when. Without this output it is much more difficult to surmise what’s going on with your issue. If the above output is larger than a page, paste it in a private Gist instead.

Include any .yaml configuration if relevant.

The more detailed your report, the faster it can be resolved, and the more likely it is to be resolved in the right way. Once your bug has been resolved, the responsible maintainer will tag the issue as Needs confirmation and assign the issue back to you. Once you have tested and confirmed that the issue is resolved, close the issue. If you are not a member of the project, you will be asked for confirmation and we will close it.

Documentation

If you would like to help with documentation, please note that for most cases the Wiki has been deprecated in favor of markdown files placed in a new /doc subdirectory of the repository itself. Please submit a pull request with your changes/additions.

The documentation is rendered on haskellstack.org by readthedocs.org using Sphinx and CommonMark. Since links and formatting vary from GFM, please check the documentation there before submitting a PR to fix those.

If your changes move or rename files, or subsume Wiki content, please continue to leave a file/page in the old location temporarily, in addition to the new location. This will allow users time to update any shared links to the old location. Please also update any links in other files, or on the Wiki, to point to the new file location.

Code

If you would like to contribute code to fix a bug, add a new feature, or otherwise improve stack, pull requests are most welcome. It’s a good idea to submit an issue to discuss the change before plowing into writing code.

If you’d like to help out but aren’t sure what to work on, look for issues with the awaiting pr label. Issues that are suitable for newcomers to the codebase have the newcomer label. Best to post a comment to the issue before you start work, in case anyone has already started.

Maintainer guide

Pre-release checks

The following should be tested minimally before a release is considered good to go:

  • Ensure the release and stable branches are merged to master
  • Integration tests pass on a representative sample of platforms: stack test --flag stack:integration-tests. The actual release script will perform a more thorough test for every platform/variant prior to uploading, so this is just a pre-check
  • Stack builds with stack-7.8.yaml
  • stack can build the wai repo
  • Running stack build a second time on either stack or wai is a no-op
  • Build something that depends on happy (suggestion: hlint), since happy has special logic for moving around the dist directory
  • In release candidate branch:
    • Bump the version number (to an even second-to-last component) in the .cabal file
    • Rename Changelog’s “unreleased changes” section to the version (check for any entries that snuck into the previous version’s changes)
  • In master branch:
    • Bump version to the next odd second-to-last component
    • Add new “unreleased changes” section in changelog
    • Bump to use latest LTS version
  • Review documentation for any changes that need to be made
    • Search for old Stack version, unstable stack version, and the next “obvious” version in sequence (if doing a non-obvious jump) and replace with new version
    • Look for any links to “latest” documentation, replace with version tag
    • Ensure that inter-doc links use the .html extension (not .md)
    • Ensure all documentation pages are listed in doc/index.rst
  • Check that any new Linux distribution versions are added to etc/scripts/release.hs and etc/scripts/vagrant-releases.sh
  • Check that no new entries need to be added to releases.yaml, install_and_upgrade.md, and README.md

Release process

See stack-release-script’s README for requirements to perform the release, and more details about the tool.

  • Create a new draft Github release with tag vX.Y.Z (where X.Y.Z is the stack package’s version)
  • On each machine you’ll be releasing from, set environment variables: GITHUB_AUTHORIZATION_TOKEN, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION
  • On a machine with Vagrant installed:
    • Run etc/scripts/vagrant-releases.sh
  • On Mac OS X:
    • Run etc/scripts/osx-release.sh
  • On Windows:
    • Ensure your working tree is in C:\stack (or a similarly short path)
    • Run etc\scripts\windows-releases.bat
    • Build Windows installers. See https://github.com/borsboom/stack-installer#readme
  • Push signed Git tag, matching Github release tag name, e.g.: git tag -u 9BEFB442 vX.Y.Z && git push origin vX.Y.Z
  • Reset the release branch to the released commit, e.g.: git checkout release && git merge --ff-only vX.Y.Z && git push origin release
  • Update the stable branch
  • Publish Github release
  • Edit stack-setup-2.yaml, and add the new linux64 stack bindist
  • Upload package to Hackage: stack upload . --pvp-bounds=both
  • Activate version for new release tag on readthedocs.org, and ensure that stable documentation has updated
  • On a machine with Vagrant installed:
    • Run etc/scripts/vagrant-distros.sh
  • Update PKGBUILD and .SRCINFO in Arch Linux’s haskell-stack.git
    • Be sure to reset pkgrel in both files, and update the SHA1 sum
  • Submit a PR for the haskell-stack Homebrew formula
    • Be sure to update the SHA sum
    • The commit message should just be haskell-stack <VERSION>
  • Build new MinGHC distribution
  • Upload haddocks to Hackage, if Hackage couldn’t build them on its own
  • Keep an eye on the Hackage matrix builder
  • Announce to haskell-cafe@haskell.org, haskell-stack@googlegroups.com, commercialhaskell@googlegroups.com mailing lists

Extra steps

Upload haddocks to Hackage
  • Set STACKVER environment variable to the Stack version (e.g. 0.1.10.0)
  • Run:
stack haddock
STACKDOCDIR=stack-$STACKVER-docs
rm -rf _release/$STACKDOCDIR
mkdir -p _release
cp -r $(stack path --local-doc-root)/stack-$STACKVER _release/$STACKDOCDIR
sed -i '' 's/href="\.\.\/\([^/]*\)\//href="..\/..\/\1\/docs\//g' _release/$STACKDOCDIR/*.html
(cd _release && tar cvz --format=ustar -f $STACKDOCDIR.tar.gz $STACKDOCDIR)
curl -X PUT \
     -H 'Content-Type: application/x-tar' \
     -H 'Content-Encoding: gzip' \
     -u borsboom \
     --data-binary "@_release/$STACKDOCDIR.tar.gz" \
     "https://hackage.haskell.org/package/stack-$STACKVER/docs"
Update MinGHC

Full details of prerequisites and steps for building MinGHC are in its README. What follows is an abbreviated set specifically for including the latest stack version.

  • Ensure makensis.exe and signtool.exe are on your PATH.
  • If you edit build-post-install.hs, run stack exec -- cmd /c build-post-install.bat
  • Set STACKVER environment variable to latest Stack version (e.g. 0.1.10.0)
  • Adjust commands below for new GHC versions
  • Run:
stack build
stack exec -- minghc-generate 7.10.2 --stack=%STACKVER%
signtool sign /v /n "FP Complete, Corporation" /t "http://timestamp.verisign.com/scripts/timestamp.dll" .build\minghc-7.10.2-i386.exe
stack exec -- minghc-generate 7.10.2 --arch64 --stack=%STACKVER%
signtool sign /v /n "FP Complete, Corporation" /t "http://timestamp.verisign.com/scripts/timestamp.dll" .build\minghc-7.10.2-x86_64.exe
stack exec -- minghc-generate 7.8.4 --stack=%STACKVER%
signtool sign /v /n "FP Complete, Corporation" /t "http://timestamp.verisign.com/scripts/timestamp.dll" .build\minghc-7.8.4-i386.exe
stack exec -- minghc-generate 7.8.4 --arch64 --stack=%STACKVER%
signtool sign /v /n "FP Complete, Corporation" /t "http://timestamp.verisign.com/scripts/timestamp.dll" .build\minghc-7.8.4-x86_64.exe
  • Upload the built binaries to a new Github release
  • Edit README.md and update download links

Signing key

Releases are signed with this key:

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQENBFVs+cMBCAC5IsLWTikd1V70Ur1FPJMn14Sc/C2fbXc0zRcPuWX+JaXgrIJQ
74A3UGBpa07wJDZiQLLz4AasDQj++9gXdiM9MlK/xWt8BQpgQqSMgkktFVajSWX2
rSXPjqLtsl5dLsc8ziBkd/AARXoeITmXX+n6oRTy6QfdMv2Tacnq7r9M9J6bAz6/
7UsKkyZVwsbUPea4SuD/s7jkXAuly15APaYDmF5mMlpoRWp442lJFpA0h52mREX1
s5FDbuKRQW7OpZdLcmOgoknJBDSpKHuHEoUhdG7Y3WDUGYFZcTtta1qSVHrm3nYa
7q5yOzPW4/VpftkBs1KzIxx0nQ5INT5W5+oTABEBAAG0H0ZQQ29tcGxldGUgPGRl
dkBmcGNvbXBsZXRlLmNvbT6JATcEEwEKACEFAlVs+cMCGwMFCwkIBwMFFQoJCAsF
FgMCAQACHgECF4AACgkQV1FZaJvvtEIP8gf/S/k4C3lp/BFb0K9DHHSt6EaGQPwy
g+O8d+JvL7ghkvMjlQ+UxDw+LfRKANTpl8a4vHtEQLHEy1tPJfrnMA8DNci8HLVx
rK3lIqMfv5t85VST9rz3X8huSw7qwFyxsmIqFtJC/BBQfsOXC+Q5Z2nbResXHMeA
5ZvDopZnqKPdmMOngabPGZd89hOKn6r8k7+yvZ/mXmrGOB8q5ZGbOXUbCshst7lc
yZWmoK3VJdErQjGHCdF4MC9KFBQsYYUy9b1q0OUv9QLtq/TeKxfpvYk9zMWAoafk
M8QBE/qqOpqkBRoKbQHCDQgx7AXJMKnOA0jPx1At57hWl7PuEH4rK38UtLkBDQRV
bPnDAQgAx1+4ENyaMk8XznQQ4l+nl8qw4UedZhnR5Xxr6z2kcMO/0VdwmIDCpxaM
spurOF+yExfY/Chbex7fThWTwVgfsItUc/QLLv9jkvpveMUDuPyh/4QrAQBYoW09
jMJcOTFQU+f4CtKaN/1PNoTSU2YkVpbhvtV3Jn2LPFjUSPb7z2NZ9NKe10M0/yN+
l0CuPlqu6GZR5L3pA5i8PZ0Nh47j0Ux5KIjrjCGne4p+J8qqeRhUf04yHAYfDLgE
aLAG4v4pYbb1jNPUm1Kbk0lo2c3dxx0IU201uAQ6LNLdF/WW/ZF7w3iHn7kbbzXO
jhbq2rvZEn3K9xDr7homVnnj21/LSQARAQABiQEfBBgBCgAJBQJVbPnDAhsMAAoJ
EFdRWWib77RC3ukH/R9jQ4q6LpXynQPJJ9QKwstglKfoKNpGeAYVTEn0e7NB0HV5
BC+Da5SzBowboxC2YCD1wTAjBjLLQfAYNyR+tHpJBaBmruafj87nBCDhSWwWDXwx
OUDpNOwKUkrwZDRlM7n4byoMRl7Vh/7CXxaTqkyao1c5v3mHh/DremiTvOJ4OXgJ
77NHaPXezHkCFZC8/sX6aY0DJxF+LIE84CoLI1LYBatH+NKxoICKA+yeF3RIVw0/
F3mtEFEtmJ6ljSks5tECxfJFvQlkpILBbGvHfuljKMeaj+iN+bsHmV4em/ELB1ku
N9Obs/bFDBMmQklIdLP7dOunDjY4FwwcFcXdNyg=
=YUsC
-----END PGP PUBLIC KEY BLOCK-----

Build command

Overview

The primary command you use in stack is build. This page describes the details of build’s command line interface. The goal of the interface is to do the right thing for simple input, and allow a lot of flexibility for more complicated goals.

Synonyms

One potential point of confusion is the synonym commands for build. These are provided to match commonly expected command line interfaces, and to make common workflows shorter. The important thing to note is that all of these are just the build command in disguise. Each of these commands is called out as a synonym in the --help output. These commands are:

  • stack test is the same as stack build --test
  • stack bench is the same as stack build --bench
  • stack haddock is the same as stack build --haddock
  • stack install is the same as stack build --copy-bins

The advantage of the synonym commands is that they’re convenient and short. The advantage of the options is that they compose. For example, stack build --test --copy-bins will build libraries, executables, and test suites, run the test suites, and then copy the executables to your local bin path (more on this below).

Components

Components are a subtle yet important part of how build operates under the surface. Every cabal package is made up of one or more components. It can have 0 or 1 libraries, and then 0 or more of executable, test, and benchmark components. stack allows you to call out a specific component to be built, e.g. stack build mypackage:test:mytests will build the mytests component of the mypackage package. mytests must be a test suite component.

We’ll get into the details of the target syntax for how to select components in the next section. In this section, the important point is: whenever you target a test suite or a benchmark, it’s built and also run, unless you explicitly disable running via --no-run-tests or --no-run-benchmarks. Case in point: the previous command will in fact build the mytests test suite and run it, even though you haven’t used the stack test command or the --test option. (We’ll get to what exactly --test does below.)

This gives you a lot of flexibility in choosing what you want stack to do. You can run a single test component from a package, run a test component from one package and a benchmark from another package, etc.

One final note on components: you can only control components for local packages, not dependencies. With dependencies, stack will always build the library (if present) and all executables, and ignore test suites and benchmarks. If you want more control over a package, you must add it to your packages setting in your stack.yaml file.
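
For example, the following invocations (the package and component names are hypothetical) build a single test component without running it, and combine a test suite from one package with a benchmark from another:

stack build mypackage:test:mytests --no-run-tests
stack build mypackage:test:mytests otherpackage:bench:mybench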

Target syntax

In addition to a number of options (like the aforementioned --test), stack build takes a list of zero or more targets to be built. There are a number of different syntaxes supported for this list:

  • package, e.g. stack build foobar, is the most commonly used target. It will try to find the package in the following locations: local packages, extra dependencies, snapshots, and package index (e.g. Hackage). If it’s found in the package index, then the latest version of that package from the index is implicitly added to your extra dependencies.

    This is where the --test and --bench flags come into play. If the package is a local package, then all of the test suite and benchmark components are selected to be built, respectively. In any event, the library and executable components are also selected to be built.

  • package identifier, e.g. stack build foobar-1.2.3, is usually used to include specific package versions from the index. If the version selected conflicts with an existing local package or extra dep, then stack fails with an error. Otherwise, this is the same as calling stack build foobar, except instead of using the latest version from the index, the version specified is used.

  • component. Instead of referring to an entire package and letting stack decide which components to build, you select individual components from inside a package. This can be done for more fine-grained control over which test suites to run, or to have a faster compilation cycle. There are multiple ways to refer to a specific component (provided for convenience):

    • packagename:comptype:compname is the most explicit. The available comptypes are exe, test, and bench.
    • packagename:compname allows you to leave off the component type, as that will (almost?) always be redundant with the component name. For example, stack build mypackage:mytestsuite.
    • :compname is a useful shortcut, saying “find the component in all of the local packages.” This will result in an error if multiple packages have a component with the same name. To continue the above example, stack build :mytestsuite.
      • Side note: the commonly requested run command is not available because it’s a simple combination of stack build :exename && stack exec exename
  • directory, e.g. stack build foo/bar, will find all local packages that exist in the given directory hierarchy and then follow the same procedure as passing in package names as mentioned above. There’s an important caveat here: if your directory name is parsed as one of the above target types, it will be treated as that. Explicitly starting your target with ./ can be a good way to avoid that, e.g. stack build ./foo

Finally: if you provide no targets (e.g., running stack build), stack will implicitly pass in all of your local packages. If you only want to target packages in the current directory or deeper, you can pass in ., e.g. stack build ..

Examples

FIXME: what examples would be helpful? Need feedback on what’s confusing above
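
In the meantime, here are a few illustrative invocations of the syntaxes described above; the package and component names are hypothetical.

# build all local packages
stack build
# build a single package, pulling it from the index if it is not a local package
stack build foobar
# build a specific version from the index
stack build foobar-1.2.3
# build and run a single test suite component
stack build mypackage:test:mytests
# build only the local packages in the current directory and below
stack build .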

Dependencies

stack will always automatically build all dependencies necessary for its targets.

Other flags

There are a number of other flags accepted by stack build. Instead of listing all of them, please use stack build --help. Some particularly convenient ones are worth mentioning here, since they compose well with the rest of the build system as described:

  • --file-watch will rebuild your project every time a file changes
  • --exec "cmd [args]" will run a command after a successful build

To come back to the composable approach described above, consider this final example (which uses the wai repository):

stack build --file-watch --test --copy-bins --haddock wai-extra :warp warp:doctest --exec 'echo Yay, it worked!'

This command will:

  • Start stack up in file watch mode, waiting for files in your project to change. When first starting, and each time a file changes, it will do all of the following.
  • Build the wai-extra package and its test suites
  • Build the warp executable
  • Build the warp package’s doctest component (which, as you may guess, is a test suite)
  • Run all of the wai-extra package’s test suite components and the doctest test suite component
  • If all of that succeeds:
    • Copy generated executables to the local bin path
    • Run the command echo Yay, it worked!

Dependency visualization

You can use stack to visualize the dependencies between your packages and optionally also external dependencies.

As an example, let’s look at wreq:

$ stack dot | dot -Tpng -o wreq.png

wreq

Okay that is a little boring, let’s also look at external dependencies:

$ stack dot --external | dot -Tpng -o wreq.png

wreq_ext

Well that is certainly a lot. As a start we can exclude base and then depending on our needs we can either limit the depth:

$ stack dot --no-include-base --external --depth 1 | dot -Tpng -o wreq.png

wreq_depth

or prune packages explicitly:

$ stack dot --external --prune base,lens,wreq-examples,http-client,aeson,tls,http-client-tls,exceptions | dot -Tpng -o wreq_pruned.png

wreq_pruned

Keep in mind that you can also save the dot file:

$ stack dot --external --depth 1 > wreq.dot
$ dot -Tpng -o wreq.png wreq.dot

and pass in options to dot or use another graph layout engine like twopi:

$ stack dot --external --prune base,lens,wreq-examples,http-client,aeson,tls,http-client-tls,exceptions | twopi -Groot=wreq -Goverlap=false -Tpng -o wreq_pruned.png

wreq_pruned

Docker integration

stack has support for automatically performing builds inside a Docker container, using volume mounts and user ID switching to make it mostly seamless. FP Complete provides images for use with stack that include GHC, tools, and optionally have all of the Stackage LTS packages pre-installed in the global package database.

The primary purpose for using stack/docker this way is for teams to ensure all developers are building in an exactly consistent environment without team members needing to deal with Docker themselves.

See the how stack can use Docker under the hood blog post for more information about the motivation and implementation of stack’s Docker support.

Usage

This section assumes that you already have Docker installed and working. If not, see the prerequisites section. If you run into any trouble, see the troubleshooting section.

Enable in stack.yaml

The most basic configuration is to add this to your project’s stack.yaml:

docker:
    enable: true

See configuration for additional options.

Use stack as normal

With Docker enabled, most stack sub-commands will automatically launch themselves in an ephemeral Docker container (the container is deleted as soon as the command completes). The project directory and ~/.stack are volume-mounted into the container, so any build artifacts are “permanent” (not deleted with the container).

The first time you run a command with a new image, you will be prompted to run stack docker pull to pull the image first. This will pull a Docker image with a tag that matches your resolver. Only LTS resolvers are supported (we do not generate images for nightly snapshots). Not every LTS version is guaranteed to have an image existing, and new LTS images tend to lag behind the LTS snapshot being published on stackage.org. Be warned: these images are rather large!

Docker sub-commands

These stack docker sub-commands have Docker-specific functionality. Most other stack commands will also use a Docker container under the surface if Docker is enabled.

pull - Pull latest version of image

stack docker pull pulls an image from the Docker registry for the first time, or updates the image by pulling the latest version.

cleanup - Clean up old images and containers

Docker images can take up quite a lot of disk space, and it’s easy for them to build up if you switch between projects or your projects update their images. This sub-command will help to remove old images and containers.

By default, stack docker cleanup will bring up an editor showing the images and containers on your system, with any stack images that haven’t been used in the last seven days marked for removal. You can add or remove the R in the left-most column to flag or unflag an image/container for removal. When you save the file and quit the text editor, those images marked for removal will be deleted from your system. If you wish to abort the cleanup, delete all the lines from your editor.

If you use Docker for purposes other than stack, you may have other images on your system as well. These will also appear in a separate section, but they will not be marked for removal by default.

Run stack docker cleanup --help to see additional options to customize its behaviour.

reset - Reset the Docker “sandbox”

In order to preserve the contents of the in-container home directory between runs, a special “sandbox” directory is volume-mounted into the container. stack docker reset will reset that sandbox to its defaults.

Note: this leaves ~/.stack (which is separately volume-mounted) alone.

Command-line options

The default Docker configuration can be overridden on the command-line. See stack --docker-help for a list of all Docker options, and consult the configuration section below for more information about their meanings. These are global options, and apply to all commands (not just stack docker sub-commands).
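
For example, to select a specific image for a single invocation (the image ID shown here is the illustrative one from the configuration example below):

stack --docker-image=5c624ec1d63f build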

Configuration

stack.yaml contains a docker: section with Docker settings. If this section is omitted, Docker containers will not be used. These settings can be included in project, user, or global configuration.

Here is an annotated configuration file. The default values are shown unless otherwise noted.

docker:

  # Set to false to disable using Docker.  In the project configuration,
  # the presence of a `docker:` section implies docker is enabled unless
  # `enable: false` is set.  In user and global configuration, this is not
  # the case.
  enable: true

  # The name of the repository to pull the image from.  See the "repositories"
  # section of this document for more information about available repositories.
  # If this includes a tag (e.g. "my/image:tag"), that tagged image will be
  # used.  Without a tag specified, the LTS version slug is added automatically.
  # Either `repo` or `image` may be specified, but not both.
  repo: "fpco/stack-build"

  # Exact Docker image name or ID.  Overrides `repo`. Either `repo` or `image`
  # may be specified, but not both.  (default none)
  image: "5c624ec1d63f"

  # Registry requires login.  A login will be requested before attempting to
  # pull.
  registry-login: false

  # Username to log into the registry.  (default none)
  registry-username: "myuser"

  # Password to log into the registry.  (default none)
  registry-password: "SETME"

  # If true, the image will be pulled from the registry automatically, without
  # needing to run `stack docker pull`.  See the "security" section of this
  # document for implications of enabling this.
  auto-pull: false

  # If true, the container will be run "detached" (in the background).  Refer
  # to the Docker users guide for information about how to manage containers.
  # This option would rarely make sense in the configuration file, but can be
  # useful on the command-line.  When true, implies `persistent`.
  detach: false

  # If true, the container will not be deleted after it terminates.  Refer to
  # the Docker users guide for information about how to manage containers. This
  # option would rarely make sense in the configuration file, but can be
  # useful on the command-line.  `detach` implies `persistent`.
  persistent: false

  # What to name the Docker container.  Only useful with `detach` or
  # `persistent` true.  (default none)
  container-name: "example-name"

  # Additional arguments to pass to `docker run`.  (default none)
  run-args: ["--net=bridge"]

  # Directories from the host to volume-mount into the container.  If it
  # contains a `:`, the part before the `:` is the directory on the host and
  # the part after the `:` is where it should be mounted in the container.
  # (default none, aside from the project and stack root directories which are
  # always mounted)
  mount:
    - "/foo/bar"
    - "/baz:/tmp/quux"

  # Environment variables to set in the container.  Environment variables
  # are not automatically inherited from the host, so if you need any specific
  # variables, use the '--docker-env` command-line argument version of this to
  # pass them in.  (default none)
  env:
    - "FOO=BAR"
    - "BAR=BAZ QUUX"

  # Location of database used to track image usage, which `stack docker cleanup`
  # uses to determine which images should be kept.  On shared systems, it may
  # be useful to override this in the global configuration file so that
  # all users share a single database.
  database-path: "~/.stack/docker.db"

  # Location of a Docker container-compatible 'stack' executable with the
  # matching version. This executable must be built on linux-x86_64 and
  # statically linked.
  # Valid values are:
  #   host: use the host's executable.  This is the default when the host's
  #     executable is known to work (e.g., from official linux-x86_64 bindist)
  #   download: download a compatible executable matching the host's version.
  #     This is the default when the host's executable is not known to work
  #   image: use the 'stack' executable baked into the image.  The version
  #     must match the host's version
  #   /path/to/stack: path on the host's local filesystem
  stack-exe: host

  # If true (the default when using the local Docker Engine), run processes
  # in the Docker container as the same UID/GID as the host.  This ensures
  # that files written by the container are owned by you on the host.
  # When the Docker Engine is remote (accessed by tcp), defaults to false.
  set-user: true

  # Require the version of the Docker client to be within the specified
  # Cabal-style version range (e.g., ">= 1.6.0 && < 1.9.0")
  require-docker-version: "any"

Image Repositories

FP Complete provides the following public image repositories on Docker Hub:

  • fpco/stack-build (the default) - GHC (patched), tools (stack, cabal-install, happy, alex, etc.), and system developer libraries required to build all Stackage packages.
  • fpco/stack-ghcjs-build - Like stack-build, but adds GHCJS.
  • fpco/stack-full - Includes all Stackage packages pre-installed in GHC’s global package database. These images are over 10 GB!
  • fpco/stack-ghcjs-full - Like stack-full, but adds GHCJS.
  • fpco/stack-run - Runtime environment for binaries built with Stackage. Includes system shared libraries required by all Stackage packages. Does not necessarily include all data required for every use (e.g. has texlive-binaries for HaTeX, but does not include LaTeX fonts), as that would be prohibitively large. Based on phusion/baseimage.

FP Complete also builds custom variants of these images for their clients.

These images can also be used directly with docker run and provide a complete Haskell build environment.

In addition, most Docker images that contain the basics for running GHC can be used with Stack’s Docker integration. For example, the official Haskell image repository works. See Custom images for more details.

Prerequisites

Linux 64-bit

Docker use requires a machine (virtual or metal) running a Linux distribution that Docker supports, with a 64-bit kernel. If you do not already have one, we suggest Ubuntu 14.04 (“trusty”) since this is what we test with.

While Docker does support non-Linux operating systems through the boot2docker VM, there are issues with host volume mounting that prevent stack from being usable in this configuration. See #194 for details and workarounds.

Docker

Install the latest version of Docker by following the instructions for your operating system.

The Docker client should be able to connect to the Docker daemon as a non-root user. For example (from here):

# Add the connected user "${USER}" to the docker group.
# Change the user name to match your preferred user.
sudo gpasswd -a ${USER} docker

# Restart the Docker daemon.
sudo service docker restart

You will now need to log out and log in again for the group addition to take effect.

Note the above has security implications. See security for more.

Security

Having docker usable as a non-root user is always a security risk, and will allow root access to your system. It is also possible to craft a stack.yaml that will run arbitrary commands in an arbitrary docker container through that vector, thus a stack.yaml could cause stack to run arbitrary commands as root. While this is a risk, it is not really a greater risk than is posed by the docker permissions in the first place (for example, if you ever run an unknown shell script or executable, or ever compile an unknown Haskell package that uses Template Haskell, you are at equal risk). Nevertheless, there are plans to close the stack.yaml loophole.

One way to mitigate this risk is, instead of allowing docker to run as non-root, replace docker with a wrapper script that uses sudo to run the real Docker client as root. This way you will at least be prompted for your root password. As @gregwebs pointed out, put this script named docker in your PATH (and make sure you remove your user from the docker group as well, if you added it earlier):

#!/bin/bash -e
# The goal of this script is to maintain the security privileges of sudo
# Without having to constantly type "sudo"
exec sudo /usr/bin/docker "$@"

Additional notes

Volume-mounts and ephemeral containers

Since filesystem changes outside of the volume-mounted project directory are not persisted across runs, this means that if you stack exec sudo apt-get install some-ubuntu-package, that package will be installed but then the container it’s installed in will disappear, thus causing it to have no effect. If you wish to make this kind of change permanent, see later instructions for how to create a derivative Docker image.

Inside the container, your home directory is a special location that is volume-mounted from within your project directory’s .stack-work, in such a way that installed GHC/cabal packages are not shared between different Stackage snapshots. In addition, ~/.stack is volume-mounted from the host.

Network

stack containers use the host’s network stack within the container by default, meaning a process running in the container can connect to services running on the host, and a server process run within the container can be accessed from the host without needing to explicitly publish its port. To run the container with an isolated network, use --docker-run-args to pass the --net argument to docker run. For example:

stack --docker-run-args='--net=bridge --publish=3000:3000' \
      exec some-server

will run the container’s network in “bridge” mode (which is Docker’s default) and publish port 3000.

Persistent container

If you do want to do all your work, including editing, in the container, it might be better to use a persistent container in which you can install Ubuntu packages. You could get that by running something like stack --docker-container-name=NAME --docker-persist docker exec bash. This means when the container exits, it won’t be deleted. You can then restart it using docker start -a -i NAME. It’s also possible to detach from a container while it continues running in the background by pressing Ctrl-P Ctrl-Q, and then reattach to it using docker attach NAME.

Note that each time you run stack --docker-persist, a new persistent container is created (it will not automatically reuse the previous one). See the Docker user guide for more information about managing Docker containers.

Derivative image

Creating your own custom derivative image can be useful if you need to install additional Ubuntu packages or make other changes to the operating system. Here is an example (replace custom if you prefer a different name for your derived container):

# On host
$ stack  --docker-persist --docker-container-name=temp docker exec bash

# In container, make changes to OS
$ sudo apt-get install r-cran-numderiv
[...]
$ exit

# On host again
$ docker commit temp custom
$ docker rm temp

Now you have a new Docker image named custom. To use the new image, run a command such as the following or update the corresponding values in your stack.yaml:

stack --docker-image=custom <COMMAND>

Note, however, that any time a new image is used, you will have to re-do this process. You could also use a Dockerfile to make this reusable. Consult the Docker user guide for more on creating Docker images.
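
A minimal Dockerfile along those lines might look like the following sketch; the base image tag is an assumption (pick one matching your resolver), and the installed package is the same example as above.

# hypothetical tag; choose one matching your resolver
FROM fpco/stack-build:lts-3.14
# switch to root in case the image's default user is unprivileged (assumption)
USER root
RUN apt-get update && apt-get install -y r-cran-numderiv

You would then build it with docker build -t custom . and use it just like the committed image above.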

Custom images

The easiest way to create your own custom image is by extending FP Complete’s images, but if you prefer to start from scratch, most images that include the basics for building code with GHC will work. The image doesn’t even, strictly speaking, need to include GHC, but it does need to have libraries and tools that GHC requires (e.g., libgmp, gcc, etc.).

There are also a few ways to set up images that tighten the integration:

  • Create a user and group named stack, and create a ~/.stack directory for it. Any build plans and caches from it will be copied from the image by Stack, meaning they don’t need to be downloaded separately.
  • Any packages in GHC’s global package database will be available. This can be used to add private libraries to the image, or to make available a set of packages from an LTS release.

Troubleshooting

“No Space Left on Device”, but ‘df’ shows plenty of disk space

This is likely due to the storage driver Docker is using, in combination with the large size and number of files in these images. Use docker info|grep 'Storage Driver' to determine the current storage driver.

We recommend using either the overlay or aufs storage driver for stack, as they are least likely to give you trouble. On Ubuntu, aufs is the default for new installations, but older installations sometimes used devicemapper.

The devicemapper storage driver’s default configuration limits it to a 10 GB file system, which the “full” images exceed. We have experienced other instabilities with it as well on Ubuntu, and recommend against its use for this purpose.

The btrfs storage driver has problems running out of metadata space long before running out of actual disk space, which requires rebalancing or adding more metadata space. See CoreOS’s btrfs troubleshooting page for details about how to do this.

Pass the -s <driver> argument to the Docker daemon to set the storage driver (in /etc/default/docker on Ubuntu). See Docker daemon storage-driver option for more details.
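
For example, on Ubuntu, switching to the overlay driver might look like the following sketch (if /etc/default/docker already sets DOCKER_OPTS, edit that line instead of appending; images and containers created under the old driver will generally not be visible after the switch):

echo 'DOCKER_OPTS="-s overlay"' | sudo tee -a /etc/default/docker
sudo service docker restart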

You may also be running out of inodes on your filesystem. Use df -i to check for this condition. Unfortunately, the number of inodes is set when creating the filesystem, so fixing this requires reformatting and passing the -N argument to mkfs.ext4.

Name resolution doesn’t work from within container

On Ubuntu 12.04, NetworkManager runs the dnsmasq service by default, which sets 127.0.0.1 as your DNS server. Since Docker containers cannot access this dnsmasq, Docker falls back to using Google DNS (8.8.8.8/8.8.4.4). This causes problems if you are forced to use an internal DNS server. This can be fixed by executing:

sudo sed 's@dns=dnsmasq@#dns=dnsmasq@' -i \
    /etc/NetworkManager/NetworkManager.conf && \
sudo service network-manager restart

If you have already installed Docker, you must restart the daemon for this change to take effect:

sudo service docker restart

The above commands turn off dnsmasq usage in NetworkManager configuration and restart network manager. They can be reversed by executing sudo sed 's@#dns=dnsmasq@dns=dnsmasq@' -i /etc/NetworkManager/NetworkManager.conf && sudo service network-manager restart. These instructions are adapted from the Shipyard Project’s QuickStart guide.

Cannot pull images from behind firewall that blocks TLS/SSL

If you are behind a firewall that blocks TLS/SSL and pulling images from a private Docker registry, you must edit the system configuration so that the --insecure-registry <registry-hostname> option is passed to the Docker daemon. For example, on Ubuntu:

echo 'DOCKER_OPTS="--insecure-registry registry.example.com"' \
    |sudo tee -a /etc/default/docker
sudo service docker restart

This does require the private registry to be available over plaintext HTTP.

See Docker daemon insecure registries documentation for details.

FAQ

So that this doesn’t become repetitive: for the reasons behind the answers below, see the Architecture page. The goal of the answers here is to be as helpful and concise as possible.

Where is stack installed and will it interfere with ghc (etc) I already have installed?

Stack itself is installed in normal system locations based on the mechanism you used (see the Install and upgrade page). Stack installs the Stackage libraries in ~/.stack and any project libraries or extra dependencies in a .stack-work directory within each project’s directory. None of this should affect any existing Haskell tools at all.

What is the relationship between stack and cabal?

  • Cabal-the-library is used by stack to build your Haskell code.
  • cabal-install (the executable) is used by stack for its dependency solver functionality.
  • A .cabal file is provided for each package, and defines all package-level metadata just like it does in the cabal-install world: modules, executables, test suites, etc. No change at all on this front.
  • A stack.yaml file references 1 or more packages, and provides information on where dependencies come from.
  • stack build currently initializes a stack.yaml from the existing .cabal file. Project initialization is something that is still being discussed and there may be more options here for new projects in the future (see issue 253)

I need to use a different version of a package than what is provided by the LTS Haskell snapshot I’m using, what should I do?

You can make tweaks to a snapshot by modifying the extra-deps configuration value in your stack.yaml file, e.g.:

resolver: lts-2.9
packages:
- '.'
extra-deps:
- text-1.2.1.1

I need to use a package (or version of a package) that is not available on hackage, what should I do?

Add it to the packages list in your project’s stack.yaml, specifying the package’s source code location relative to the directory where your stack.yaml file lives, e.g.

resolver: lts-2.10
packages:
- '.'
- third-party/proprietary-dep
- github-version-of/conduit
- patched/diagrams
extra-deps: []

The above example specifies that the proprietary-dep package is found in the project’s third-party folder, that the conduit package is found in the project’s github-version-of folder, and that the diagrams package is found in the project’s patched folder. stack automatically detects changes to these local packages and reinstalls them when you build.

What is the meaning of the arguments given to stack build, test, etc?

Those are the targets of the build, and can have one of three formats (see the examples after this list):

  • A package name (e.g., my-package) will mean that the my-package package must be built
  • A package identifier (e.g., my-package-1.2.3), which includes a specific version. This is useful for passing to stack install for getting a specific version from upstream
  • A directory (e.g., ./my-package) for including a local directory’s package, including any packages in subdirectories
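
For example, using the placeholder names from the list above, those three forms look like this:

stack build my-package           # by package name
stack install my-package-1.2.3   # by package identifier, for a specific upstream version
stack build ./my-package         # by directory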

I need to modify an upstream package, how should I do it?

Typically, you will want to get the source for the package and then add it to your packages list in stack.yaml. (See the previous question.) stack unpack is one approach for getting the source. Another would be to add the upstream package as a submodule to your project.
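
As a sketch of the stack unpack approach (using text-1.2.1.1, which appeared in an earlier example, as a stand-in for the package you actually want to modify):

# unpack the upstream source into a subdirectory of the project
stack unpack text-1.2.1.1
# then list that directory under packages: in stack.yaml, e.g.
#   packages:
#   - '.'
#   - text-1.2.1.1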

Am I required to use a Stackage snapshot to use stack?

No, not at all. If you prefer dependency solving to curation, you can continue with that workflow. Instead of describing the details of how that works here, it’s probably easiest to just say: run stack init --solver and look at the generated stack.yaml.

How do I use this with sandboxes?

Explicit sandboxing on the part of the user is not required by stack. All builds are automatically isolated into separate package databases without any user interaction. This ensures that you won’t accidentally corrupt your installed packages with actions taken in other projects.

Can I run cabal commands inside stack exec?

With a recent enough version of cabal-install (>= 1.22), you can. For older versions, due to haskell/cabal#1800, this does not work. Note that even with recent versions, for some commands you may need this extra level of indirection:

$ stack exec -- cabal exec -- cabal <command>

However, virtually all cabal commands have an equivalent in stack, so this should not be necessary. In particular, cabal users may be accustomed to the cabal run command. In stack:

$ stack build && stack exec <program-name>

Or, if you want to install the binaries in a shared location:

$ stack install
$ <program-name>

assuming your $PATH has been set appropriately.

Using custom preprocessors

If you have a custom preprocessor, for example an ERB (Ruby) template preprocessor, you may have a file like:

B.erb

module B where

<% (1..5).each do |i| %>
test<%= i %> :: Int
test<%= i %> = <%= i %>
<% end %>

To ensure that Stack picks up changes to this file for rebuilds, add the following line to your .cabal file:

extra-source-files:   B.erb

I already have GHC installed, can I still use stack?

Yes. stack will default to using whatever GHC is on your PATH. If that GHC is a compatible version with the snapshot you’re using, it will simply use it. Otherwise, it will prompt you to run stack setup. Note that stack setup installs GHC into ~/.stack/programs/$platform/ghc-$version/ and not a global location.

Note that GHC installation doesn’t work for all OSes, so in some cases you will need to install GHC yourself.

How does stack determine what GHC to use?

It uses the first GHC that it finds on the PATH. If that GHC does not comply with the various requirements (version, architecture) that your project needs, it will prompt you to run stack setup to get it. stack is fully aware of all GHCs that it has installed itself.

See this issue for a detailed discussion.

How do I upgrade to GHC 7.10.2 with stack?

If you already have a prior version of GHC use stack --resolver ghc-7.10 setup --reinstall. If you don’t have any GHC installed, you can skip the --reinstall.

How do I get extra build tools?

stack will automatically install build tools required by your packages or their dependencies, in particular alex and happy.

NOTE: This works when using lts or nightly resolvers, not with ghc or custom resolvers. You can manually install build tools by running, e.g., stack build alex happy.

How does stack choose which snapshot to use when creating a new config file?

It checks the two most recent LTS Haskell major versions and the most recent Stackage Nightly for a snapshot that is compatible with all of the version bounds in your .cabal file, favoring the most recent LTS. For more information, see the snapshot auto-detection section in the architecture document.

I’d like to use my installed packages in a different directory. How do I tell stack where to find my packages?

Set the STACK_YAML environment variable to point to the stack.yaml config file for your project. Then you can run stack exec, stack ghc, etc., from any directory and still use your packages.
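
For example (a sketch; the project path and executable name are placeholders):

export STACK_YAML=$HOME/projects/my-project/stack.yaml
# stack commands now work from any directory against that project
stack exec my-project-exe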

My tests are failing. What should I do?

Like all other targets, stack test runs test suites in parallel by default. This can cause problems with test suites that depend on global resources such as a database or that bind to a fixed port number. A quick hack is to force stack to run all test suites in sequence, using stack test --jobs=1. For test suites to run in parallel, developers should ensure that their test suites do not depend on global resources (e.g. by asking the OS for a random port to bind to) and, where that is unavoidable, add a lock in order to serialize access to shared resources.

Can I get bash autocompletion?

Yes, see the shell auto-completion documentation.

How do I update my package index?

Users of cabal are used to running cabal update regularly. You can do the same with stack by running stack update. But generally, it’s not necessary: if the package index is missing, or if a snapshot refers to package/version that isn’t available, stack will automatically update and then try again. If you run into a situation where stack doesn’t automatically do the update for you, please report it as a bug.

Isn’t it dangerous to automatically update the index? Can’t that corrupt build plans?

No, stack is very explicit about which packages it’s going to build for you. There are three sources of information to tell it which packages to install: the selected snapshot, the extra-deps configuration value, and your local packages. The only way to get stack to change its build plan is to modify one of those three. Updating the index will have no impact on stack’s behavior.

I have a custom package index I’d like to use, how do I do so?

You can configure this in your stack.yaml. See YAML configuration.

How can I make sure my project builds against multiple ghc versions?

You can create multiple yaml files for your project, one for each build plan. For example, you might set up your project directory like so:

myproject/
  stack-7.8.yaml
  stack-7.10.yaml
  stack.yaml --> symlink to stack-7.8.yaml
  myproject.cabal
  src/
    ...

When you run stack build, you can set the STACK_YAML environment variable to indicate which build plan to use.

$ stack build                             # builds using the default stack.yaml
$ STACK_YAML=stack-7.10.yaml stack build  # builds using the given yaml file

I heard you can use this with Docker?

Yes, stack supports using Docker with images that contain preinstalled Stackage packages and the necessary tools. See Docker integration for details.

How do I use this with Travis CI?

See the Travis section in the GUIDE.

What are the licensing restrictions on Windows?

Currently, on Windows, GHC produces binaries linked statically with the GNU Multiple Precision Arithmetic Library (GMP), which is used by the integer-gmp library to provide the big-integer implementation for Haskell. Unlike the majority of Haskell code, which is licensed under the permissive BSD3 license, the GMP library is licensed under the LGPL, which means the resulting binaries have to be provided with source code or object files. That may or may not be acceptable for your situation. The current workaround is to use a GHC built with the alternative big-integer implementation integer-simple, which is free from LGPL limitations as it’s pure Haskell and does not use GMP. Unfortunately, it is not yet available out of the box with stack. See issue #399 for the ongoing effort and information on workarounds.

How to get a working executable on Windows?

When executing a binary after building with stack build (e.g. for target “foo”), the command foo.exe might complain about missing runtime libraries (whereas stack exec foo works).

Windows is not able to find the necessary C++ libraries from the standard prompt because they’re not in the PATH environment variable. stack exec works because it’s modifying PATH to include extra things.

Those libraries are shipped with GHC (and, theoretically in some cases, MSYS). The easiest way to find them is stack exec which. E.g.

>stack exec which libstdc++-6.dll
/c/Users/Michael/AppData/Local/Programs/stack/i386-windows/ghc-7.8.4/mingw/bin/libstdc++-6.dll

A quick workaround is adding this path to the PATH environment variable or copying the files somewhere Windows finds them (cf. https://msdn.microsoft.com/de-de/library/7d83bc18.aspx).

Cf. issue #425.

Can I change stack’s default temporary directory?

Stack makes use of a temporary directory for some commands (/tmp by default on Linux). If there is not enough free space in this directory, stack may fail (see issue #429). For instance, stack setup with a GHC installation requires roughly 1 GB free.

A custom temporary directory can be forced:

  • on Linux by setting the environment variable TMPDIR (e.g. $ TMPDIR=path-to-tmp stack setup)
  • on Windows by setting one of the environment variables (in priority order): TMP, TEMP, USERPROFILE (see the sketch below)
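
For example, on Windows (a sketch; c:\stacktmp is an arbitrary existing directory):

set TMP=c:\stacktmp
stack setup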

stack sometimes rebuilds based on flag changes when I wouldn’t expect it to. How come?

stack tries to give you reproducibility whenever possible. In some cases, this means that you get a recompile when one may not seem necessary. The most common example is running something like this in a multi-package project:

stack build --ghc-options -O0 && stack build --ghc-options -O0 one-of-the-packages

This may end up recompiling local dependencies of one-of-the-packages without optimizations on. Whether stack should or shouldn’t do this depends on the needs of the user at the time, and unfortunately we can’t make a solution that will make everyone happy in all cases. If you’re curious for details, there’s a long discussion about it on the issue tracker.

stack setup on a windows system only tells me to add certain paths to the PATH variable instead of doing it

If you are using a PowerShell session, it is easy to automate even that step:

$env:Path = ( stack setup | %{ $_ -replace '[^ ]+ ', ''} ), $env:Path -join ";"

How do I reset / remove Stack (such as to do a completely fresh build)?

The first thing to remove is the project-specific .stack-work directory within the project’s directory. Next, remove the ~/.stack directory. You may get errors if you remove the latter but leave the former. How to remove Stack itself depends on how it was installed, and if you used a GHC installed outside of Stack, that would need to be removed separately.

How does stack handle parallel builds? What exactly does it run in parallel?

See issue #644 for more details.

I get strange ld errors about recompiling with “-fPIC”

Some users (myself included!) have come across linker errors (example below) that seem to be dependent on the local environment, i.e. the package may compile on a different machine. There is no known workaround (if you come across one, please include details), although the issue has been reported to be non-deterministic (https://github.com/commercialhaskell/stack/issues/614) in some cases. I’ve had success using the Docker functionality to build the project on a machine that would not compile it otherwise.

tmp-0.1.0.0: build
Building tmp-0.1.0.0...
Preprocessing executable 'tmp' for tmp-0.1.0.0...
Linking dist-stack/x86_64-linux/Cabal-1.22.2.0/build/tmp/tmp ...
/usr/bin/ld: dist-stack/x86_64-linux/Cabal-1.22.2.0/build/tmp/tmp-tmp/Main.o: relocation R_X86_64_32S against `stg_bh_upd_frame_info' can not be used when making a shared object; recompile with -fPIC
dist-stack/x86_64-linux/Cabal-1.22.2.0/build/tmp/tmp-tmp/Main.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status

--  While building package tmp-0.1.0.0 using:
      /home/philip/.stack/programs/x86_64-linux/ghc-7.10.1/bin/runghc-7.10.1 -package=Cabal-1.22.2.0 -clear-package-db -global-package-db /home/philip/tmp/Setup.hs --builddir=dist-stack/x86_64-linux/Cabal-1.22.2.0/ build
    Process exited with code: ExitFailure 1

Where does the output from --ghc-options=-ddump-splices (and other -ddump* options) go?

These are written to *.dump-* files inside the package’s .stack-work directory.
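
For example, to locate them after a build (a sketch):

find .stack-work -name '*.dump-*'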

Why is DYLD_LIBRARY_PATH ignored?

If you are on Mac OS X 10.11 (“El Capitan”) or later, System Integrity Protection (a.k.a. “rootless”) prevents the DYLD_LIBRARY_PATH environment variable from being passed to sub-processes. The only workaround we are aware of is disabling System Integrity Protection:

  1. Reboot into recovery mode (hold down Cmd-R at boot)
  2. Open a terminal (select Terminal from the Utilities menu)
  3. Run csrutil disable; reboot

Note that this reduces the security of your system.

Why do I get a /usr/bin/ar: permission denied error?

On OS X 10.11 (“El Capitan”) and later, this is caused by System Integrity Protection (a.k.a. “rootless”). GHC 7.10.2 includes a fix, so this only affects users of GHC 7.8.4. If you cannot upgrade to GHC 7.10.2, you can work around it by disabling System Integrity Protection.

Why is the -- argument separator ignored in Windows PowerShell?

Some versions of Windows PowerShell don’t pass the -- to programs. The workaround is to quote the "--", e.g.:

stack exec "--" cabal --version

This is known to be a problem on Windows 7, but seems to be fixed on Windows 10.

GHCJS

To use GHCJS with stack >= 0.1.8, place a GHCJS version in the compiler field of stack.yaml. After this, all stack commands should work with GHCJS, except for ide. In particular:

  • stack setup will install GHCJS from source and boot it, which takes a long time.
  • stack build will compile your code to JavaScript. In particular, the generated code for an executable ends up in $(stack path --local-install-root)/bin/EXECUTABLE.jsexe/all.js (bash syntax, where EXECUTABLE is the name of your executable).

You can also build existing stack projects which target GHC, and instead build them with GHCJS. For example: stack build --compiler ghcjs-0.1.0.20150924_ghc-7.10.2

Example Configurations

GHCJS (old base)

You can use this resolver for GHCJS (old base) in your stack.yaml:

compiler: ghcjs-0.1.0.20150924_ghc-7.10.2
compiler-check: match-exact

GHCJS master (a.k.a. improved base)

To use the master branch, a.k.a improved base, add the following to your stack.yaml:

compiler: ghcjs-0.2.0.20151001_ghc-7.10.2
compiler-check: match-exact
setup-info:
  ghcjs:
    source:
      ghcjs-0.2.0.20151001_ghc-7.10.2:
        url: "https://github.com/fizruk/ghcjs/releases/download/v0.2.0.20151001/ghcjs-0.2.0.20151001.tar.gz"

or for the 2015-10-29 master branch

compiler: ghcjs-0.2.0.20151029_ghc-7.10.2
compiler-check: match-exact
setup-info:
  ghcjs:
    source:
      ghcjs-0.2.0.20151029_ghc-7.10.2:
        url: "https://github.com/nrolland/ghcjs/releases/download/v0.2.0.20151029/ghcjs-0.2.0.20151029.tar.gz"

Custom installed GHCJS (development branch)

In order to use a GHCJS installed on your PATH, just add the following to your stack.yaml:

compiler: ghcjs-0.2.0_ghc-7.10.2

(Or, ghcjs-0.1.0_ghc-7.10.2 if you are working with an older version)

Project with both client and server

For projects with both a server and client, the recommended project organization is to put one or both of your stack.yaml files in sub-directories. This way, you can use the current working directory to specify whether you’re working on the client or server. This will also allow more straightforward editor tooling, once projects like ghc-mod and haskell-ide-engine support GHCJS.

For example, here’s what a script for building both client and server looks like:

#!/bin/bash

# Build the client
stack build --stack-yaml=client/stack.yaml

# Copy over the javascript
rm -f server/static/all.js
cp $(stack path --stack-yaml=client/stack.yaml --local-install-root)/bin/client.jsexe/all.js server/static/all.js

# Build the server
stack build --stack-yaml=server/stack.yaml

You can also put both the yaml files in the same directory, and have e.g. ghcjs-stack.yaml, but this won’t work well with editor integrations.

Using stack without a snapshot

If you don’t want to use a snapshot, instead place the ghcjs version in the resolver field of your stack.yaml. This is also necessary when using stack < 0.1.8.
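
For example, a minimal stack.yaml using the old-base version shown earlier might contain (a sketch; adjust the version to match your GHCJS):

resolver: ghcjs-0.1.0.20150924_ghc-7.10.2
packages:
- '.'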

Install/upgrade

Distribution packages are available for Ubuntu, Debian, CentOS / Red Hat / Amazon Linux, Fedora and Arch Linux. Binaries for other operating systems are listed below, and available on the Github releases page. For the future, we are open to supporting more OSes (to request one, please submit an issue).

Binary packages are signed with this signing key.

If you are writing a script that needs to download the latest binary, you can find links that always point to the latest bindists here.

Windows

Note: Due to specific Windows limitations, some temporary workarounds may be required. It is strongly advised to set your STACK_ROOT environment variable to a short path close to your drive root (e.g., set STACK_ROOT=c:\stack_root) before running stack.

Note: while generally 32-bit GHC is better tested on Windows, there are reports that recent versions of Windows only work with the 64-bit version of Stack (see issue #393).

Installer

We recommend installing to the default location with these installers, as that will make stack install and stack upgrade work correctly out of the box.

Manual download
  • Download the latest release:
  • Unpack the archive and place stack.exe somewhere on your %PATH% (see Path section below).
  • Now you can run stack from the terminal.

NOTE: These executables have been built and tested on Windows 7, 8.1, and 10 64-bit machines. They should run on older Windows installs as well, but have not been tested. If you do test, please edit and update this page to indicate as such.

Mac OS X

Note: if you are on OS X 10.11 (“El Capitan”) or later, System Integrity Protection (a.k.a. “rootless”) can cause two problems: DYLD_LIBRARY_PATH being ignored, and /usr/bin/ar: permission denied errors.

See the corresponding FAQ entries above for workarounds.

Using Homebrew

If you have the popular brew tool installed, you can just do:

brew install haskell-stack

Note: the Homebrew formula and bottles lag slightly behind new Stack releases, but tend to be updated within a day or two.

Manual download
  • Download the latest release:
  • Extract the archive and place stack somewhere on your $PATH (see Path section below)
  • Now you can run stack from the terminal.

We generally test on the current version of Mac OS X, but stack is known to work on Yosemite and Mavericks as well, and may also work on older versions (YMMV).

Ubuntu

note: for 32-bit, use the generic Linux option

  1. Get the FP Complete key:

    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 575159689BEFB442
    
  2. Add the appropriate source repository (if not sure, run lsb_release -a to find out your Ubuntu version):

    • Ubuntu 15.10 (amd64):

      echo 'deb http://download.fpcomplete.com/ubuntu wily main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
    • Ubuntu 15.04 (amd64):

      echo 'deb http://download.fpcomplete.com/ubuntu vivid main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
    • Ubuntu 14.10 (amd64)

      echo 'deb http://download.fpcomplete.com/ubuntu utopic main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
    • Ubuntu 14.04 (amd64)

      echo 'deb http://download.fpcomplete.com/ubuntu trusty main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
    • Ubuntu 12.04 (amd64)

      echo 'deb http://download.fpcomplete.com/ubuntu precise main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
  3. Update apt and install

    sudo apt-get update && sudo apt-get install stack -y
    

Debian

note: for 32-bit, use the generic Linux option

  1. Get the FP Complete key:

    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 575159689BEFB442
    
  2. Add the appropriate source repository:

    • Debian 8 (amd64):

      echo 'deb http://download.fpcomplete.com/debian jessie main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
    • Debian 7 (amd64)

      echo 'deb http://download.fpcomplete.com/debian wheezy main'|sudo tee /etc/apt/sources.list.d/fpco.list
      
  3. Update apt and install

    sudo apt-get update && sudo apt-get install stack -y
    

CentOS / Red Hat / Amazon Linux

note: for 32-bit, use the generic Linux option

  1. Add the appropriate source repository:

    • CentOS 7 / RHEL 7 (x86_64)

      curl -sSL https://s3.amazonaws.com/download.fpcomplete.com/centos/7/fpco.repo | sudo tee /etc/yum.repos.d/fpco.repo
      
    • CentOS 6 / RHEL 6 (x86_64)

      curl -sSL https://s3.amazonaws.com/download.fpcomplete.com/centos/6/fpco.repo | sudo tee /etc/yum.repos.d/fpco.repo
      
  2. Install:

    sudo yum -y install stack
    

Fedora

Note: for 32-bit, you can use this Fedora Copr repo (not managed by the Stack release team, so not guaranteed to have the very latest version) which can be enabled with:

sudo dnf copr enable petersen/stack

  1. Add the appropriate source repository:

    • Fedora 23 (x86_64)

      curl -sSL https://s3.amazonaws.com/download.fpcomplete.com/fedora/23/fpco.repo | sudo tee /etc/yum.repos.d/fpco.repo
      
    • Fedora 22 (x86_64)

      curl -sSL https://s3.amazonaws.com/download.fpcomplete.com/fedora/22/fpco.repo | sudo tee /etc/yum.repos.d/fpco.repo
      
    • Fedora 21 (x86_64)

      curl -sSL https://s3.amazonaws.com/download.fpcomplete.com/fedora/21/fpco.repo | sudo tee /etc/yum.repos.d/fpco.repo
      
  2. Install:

    • Fedora 22 and above

      sudo dnf -y install stack
      
    • Fedora < 22

      sudo yum -y install stack
      

openSUSE / SUSE Linux Enterprise

Note: openSUSE’s and SLE’s stack package isn’t managed by the Stack release team, and since it is based on the version in Stackage LTS, it may lag new releases by ten days or more.

  1. Add the appropriate OBS repository:

    • openSUSE Tumbleweed

      everything needed is in the distribution

    • openSUSE Leap

      sudo zypper ar http://download.opensuse.org/repositories/devel:/languages:/haskell/openSUSE_Leap_42.1/devel:languages:haskell.repo
      
    • SUSE Linux Enterprise 12

      sudo zypper ar http://download.opensuse.org/repositories/devel:/languages:/haskell/SLE_12/devel:languages:haskell.repo 
      
  2. Install:

    sudo zypper in stack
    

Arch Linux

note: for 32-bit, use the generic Linux option. (You will need to ensure libtinfo is installed, see below.)

stack can be found in the AUR:

In order to install stack from Hackage or from source, you will need the libtinfo Arch Linux package installed. If this package is not installed, stack will not be able to install GHC.

If you use the ArchHaskell repository, you can also get the haskell-stack package from there.

NixOS

Users who follow the nixos-unstable channel or the Nixpkgs master branch can install the latest stack release into their profile by running:

nix-env -f "<nixpkgs>" -iA haskellPackages.stack

Alternatively, the package can be built from source as follows.

  1. Clone the git repo:

    git clone https://github.com/commercialhaskell/stack.git
    
  2. Create a shell.nix file:

    cabal2nix --shell ./. --no-check --no-haddock > shell.nix
    

    Note that the tests fail on NixOS, so disable them with --no-check. Also, haddock currently doesn’t work for stack, so --no-haddock disables it.

  3. Install stack to your user profile:

    nix-env -i -f shell.nix
    

For more information on using Stack together with Nix, please see the NixOS manual section on Stack.

Linux

(64-bit and 32-bit options available)

Tested on Fedora 20: make sure to install the following packages: sudo yum install perl make automake gcc gmp-devel. For Gentoo users, make sure to have the ncurses package built with USE=tinfo (without it, stack will not be able to install GHC).

Path

You can install stack by copying it anywhere on your PATH environment variable. We recommend installing in the same directory where stack itself will install executables (that way stack is able to upgrade itself!). On Windows, that directory is %APPDATA%\local\bin, e.g. “c:\Users\Michael\AppData\Roaming\local\bin”. For other systems, use $HOME/.local/bin.

If you don’t have that directory in your PATH, you may need to update your PATH (such as by editing .bashrc).
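
A sketch of a manual install on Linux or OS X, assuming the unpacked stack binary is in the current directory:

mkdir -p $HOME/.local/bin
cp stack $HOME/.local/bin/
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc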

If you’re curious about the choice of these paths, see issue #153.

Shell auto-completion

To get tab-completion of commands on bash, just run the following (or add it to .bashrc):

eval "$(stack --bash-completion-script stack)"

For more information and other shells, see the shell auto-completion wiki page.

Upgrade

There are essentially three different approaches to upgrade:

  • If you’re using a package manager (e.g., the Ubuntu debs listed above) and are happy with sticking with the officially released binaries, simply follow your normal package manager strategies for upgrading (e.g. apt-get update && apt-get upgrade).
  • If you’re not using a package manager but want to stick with the official binaries (such as on Windows or Mac), you’ll need to manually follow the steps above to download the newest binaries from the release page and replace the old binary.
  • The stack tool itself ships with an upgrade command, which will build stack from source and install it to the default install path (see the previous section). You can use stack upgrade to get the latest official release, and stack upgrade --git to install from Git and live on the bleeding edge. If you follow this, make sure that this directory is on your PATH and takes precedence over the system installed stack. For more information, see this discussion.

Nix integration

(since 0.1.10.0)

stack can build automatically inside a nix-shell (the equivalent of a “container” in Docker parlance), provided Nix is already installed on your system. If it is not, please visit the Nix download page.

There are two ways to create a nix-shell:

  • providing a list of packages (by “attribute name”) from Nixpkgs, or
  • providing a custom shell.nix file containing a Nix expression that determines a derivation, i.e. a specification of what resources are available inside the shell.

The second requires writing code in Nix’s custom language. So use this option only if you already know Nix and have special requirements, such as using custom Nix packages that override the standard ones or using system libraries with special requirements.

Additions to your stack.yaml

Add a section to your stack.yaml as follows:

nix:
  enable: true
  packages: [glpk, pcre]

This will instruct stack to build inside a nix-shell that will have the glpk and pcre libraries installed and available. Further, the nix-shell will implicitly also include a version of GHC matching the configured resolver. Enabling Nix support means packages will always be built using a GHC available inside the shell, rather than your globally installed one if any.

Note that this also means that you cannot set your resolver: to something that has not yet been mirrored in the Nixpkgs package repository. The quickest way to check this is to install and launch a nix-repl:

$ nix-channel --update
$ nix-env -i nix-repl
$ nix-repl

Then, inside the nix-repl, do:

nix-repl> :l <nixpkgs>
nix-repl> haskell.packages.lts-3_13.ghc

Replace the resolver version with whatever version you are using. If it outputs a path of a derivation in the Nix store, like

«derivation /nix/store/00xx8y0p3r0dqyq2frq277yr1ldqzzg0-ghc-7.10.2.drv»

then it means that this resolver has been mirrored and exists in your local copy of the nixpkgs. Whereas an error like

error: attribute ‘lts-3_99’ missing, at (string):1:1

means you should use a different resolver. You can also use autocompletion with TAB to know which attributes haskell.packages contains.

In Nixpkgs master branch, you can find the mirrored resolvers in the Haskell modules here on Github.

Note: currently, stack only discovers dynamic and static libraries in the lib/ folder of any nix package, and likewise header files in the include/ folder. If you’re dealing with a package that doesn’t follow this standard layout, you’ll have to deal with that using a custom shell file (see below).

Use stack as normal

With Nix enabled, stack build and stack exec will automatically launch themselves in a nix-shell. Note that for now stack ghci will not work, due to a bug in GHCi when working with external shared libraries.

If enable: is set to false, you can still build in a nix-shell by passing the --nix flag to stack, for instance stack --nix build. Nix just won’t be used by default.

Command-line options

The configuration present in your stack.yaml can be overridden on the command-line. See stack --nix-help for a list of all Nix options.

Configuration

stack.yaml contains a nix: section with Nix settings. Without this section, Nix will not be used.

Here is a commented configuration file, showing the default values:

nix:

  # `true` by default when the nix section is present. Set
  # it to `false` to disable using Nix.
  enable: true

  # Empty by default. The list of packages you want to be
  # available in the nix-shell at build time (with `stack
  # build`) and run time (with `stack exec`).
  packages: []

  # Unset by default. You cannot set this option if `packages:`
  # is already present and not empty; doing so will result in an
  # exception.
  shell-file: shell.nix

  # A list of strings, empty by default. Additional options that
  # will be passed verbatim to the `nix-shell` command.
  nix-shell-options: []

Using a custom shell.nix file

Nix is also a programming language, and, as specified here, if you know it you can provide the shell with a fully customized derivation to use as an environment. Here is the equivalent of the configuration used in this section, but with an explicit shell.nix file:

with (import <nixpkgs> {});

stdenv.mkDerivation {

  name = "myEnv";
  
  buildInputs = [
    glpk 
    pcre 
    haskell.packages.lts-3_13.ghc
  ];
  
  STACK_IN_NIX_EXTRA_ARGS
      = " --extra-lib-dirs=${glpk}/lib" 
      + " --extra-include-dirs=${glpk}/include" 
      + " --extra-lib-dirs=${pcre}/lib" 
      + " --extra-include-dirs=${pcre}/include"
  ;
}

Note that in this case, you have to include (a version of) GHC in your buildInputs! This potentially allows you to use a GHC which is not the one of your resolver:. Also, you need to tell Stack where to find the new libraries and headers. This is especially necessary on OS X. The special variable STACK_IN_NIX_EXTRA_ARGS will be looked for by the nix-shell when running the inner stack process. --extra-lib-dirs and --extra-include-dirs are regular stack build options. You can repeat these options for each dependency.

Defining manually a shell.nix file gives you the possibility to override some Nix derivations (“packages”), for instance to change some build options of the libraries you use.

And now for the stack.yaml file:

nix:
  enable: true
  shell-file: shell.nix

The stack build command will behave exactly the same as above. Note that specifying both packages: and a shell-file: results in an error. (Comment one out before adding the other.)

Non-standard project initialization

Introduction

The purpose of this page is to collect information about issues that arise when users either have an existing cabal project or another nonstandard setup such as a private hackage database.

Using a Cabal File

New users may be confused by the fact that you must add dependencies to the package’s cabal file, even in the case when you have already listed the package in the stack.yaml. In most cases, dependencies for your package that are in the Stackage snapshot need only be added to the cabal file. stack makes heavy use of Cabal the library under the hood. In general, your stack packages should also end up being valid cabal-install packages.
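
For example, depending on the text package from the snapshot only requires a build-depends entry in the .cabal file; a sketch of a library section:

library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                     , text
  default-language:    Haskell2010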

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/105

Passing Flags to Cabal

Any build command (bench, install, haddock, test, etc.) takes a --flag option, which passes flags to Cabal. Another way to do this is using the flags field in a stack.yaml, which allows specifying flags on a per-package basis.

As an example, in a stack.yaml for a multi-package project with packages foo, bar, baz:

flags:
  foo:
    release: true
  bar:
    default: true
  baz:
    manual: true

It is also possible to pass the same flag to multiple packages, e.g. stack build --flag *:necessary.
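
The command-line equivalent of the stack.yaml flags shown above would look something like this (a sketch using the example package and flag names):

stack build --flag foo:release --flag bar:default --flag baz:manual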

Currently, one needs to list all of the modules that interpret flags in the other-modules section of the cabal file. cabal-install currently behaves differently and doesn’t require that the modules be listed. This may change in a future release.

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/191
  • https://github.com/commercialhaskell/stack/issues/417
  • https://github.com/commercialhaskell/stack/issues/335
  • https://github.com/commercialhaskell/stack/issues/301
  • https://github.com/commercialhaskell/stack/issues/365
  • https://github.com/commercialhaskell/stack/issues/105

Selecting a Resolver

stack init or stack new will try to default to the current Haskell LTS present on https://www.stackage.org/snapshots if no snapshot has been previously used locally, and to the latest LTS snapshot locally used for a build otherwise. Using an incorrect resolver can cause a build to fail if the version of GHC it requires is not present.

In order to override the resolver entry at project initialization one can pass --prefer-lts or --prefer-nightly. These options will choose the latest LTS or nightly versions locally used. Alternatively the --resolver option can be used with the name of any snapshots on Stackage, or with lts or nightly to select the latest versions, disregarding previously used ones. This is not the default so as to avoid unnecessary recompilation time.

:TODO: Document --solver

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/468
  • https://github.com/commercialhaskell/stack/issues/464

Using git Repositories

stack has support for packages that reside in remote git locations.

Example:

packages:
- '.'
- location:
    git: https://github.com/kolmodin/binary
    commit: 8debedd3fcb6525ac0d7de2dd49217dce2abc0d9

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/254
  • https://github.com/commercialhaskell/stack/issues/199

Private Hackage

Working with a private Hackage is currently supported in certain situations. There exist special entries in stack.yaml that may help you. In a stack.yaml file, it is possible to add lines for packages in your database referencing the sdist locations via an http entry, or to use a Hackage entry.

The recommended stack workflow is to use git submodules instead of a private Hackage: either use git submodules and list the directories in the packages section of stack.yaml, or add the private dependencies as git URIs with a commit SHA to the stack.yaml. This has the large benefit of eliminating the need to manage a Hackage database and pointless version bumps.

For further information, see YAML configuration.

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/445
  • https://github.com/commercialhaskell/stack/issues/565

Custom Snapshots

Currently WIP?

Issues Referenced
  • https://github.com/commercialhaskell/stack/issues/111
  • https://github.com/commercialhaskell/stack/issues/253
  • https://github.com/commercialhaskell/stack/issues/137

Intra-package Targets

stack supports intra-package targets, similar to cabal build COMPONENTS for situations when you don’t want to build every target inside your package.

Example:

stack build stack:lib:stack
stack test stack:test:stack-integration-test

Note: this does require prefixing the component name with the package name.

Issues referenced
  • https://github.com/commercialhaskell/stack/issues/201

Shell Auto-completion

Note: if you installed a package for your Linux distribution, the bash completion file was automatically installed (you may need the bash-completion package to have it take effect).

The following adds support for shell tab completion for standard Stack arguments, although completion for filenames and executables etc. within stack is still lacking (see issue 823).

for bash users

you need to run the following command:

eval "$(stack --bash-completion-script stack)"

You can also add it to your .bashrc file if you want.

for ZSH users

documentation says:

Zsh can handle bash completions functions. The latest development version of zsh has a function bashcompinit, that when run will allow zsh to read bash completion specifications and functions. This is documented in the zshcompsys man page. To use it all you need to do is run bashcompinit at any time after compinit. It will define complete and compgen functions corresponding to the bash builtins.

So you must:

  1. launch compinit
  2. launch bashcompinit
  3. eval the stack bash completion script

autoload -U +X compinit && compinit
autoload -U +X bashcompinit && bashcompinit
eval "$(stack --bash-completion-script stack)"

:information_source: If you already have quite a large zshrc, or if you use oh-my-zsh, compinit will probably already be loaded. If you have a blank zsh config, all of the 3 lines above are necessary.

:gem: tip: instead of running those 3 lines from your shell every time you want to use stack, you can add them to your $HOME/.zshrc file.

User guide

stack is a modern, cross-platform build tool for Haskell code.

This guide takes a new stack user through the typical workflows. This guide will not teach Haskell or involve much code, and it requires no prior experience with the Haskell packaging system or other build tools.

Stack’s functions

stack handles the management of your toolchain (including GHC — the Glasgow Haskell Compiler — and, for Windows users, MSYS), building and registering libraries, building build tool dependencies, and more. While it can use existing tools on your system, stack has the capacity to be your one-stop shop for all Haskell tooling you need. This guide will follow that stack-centric approach.

What makes stack special?

The primary stack design point is reproducible builds. If you run stack build today, you should get the same result running stack build tomorrow. There are some cases that can break that rule (changes in your operating system configuration, for example), but, overall, stack follows this design philosophy closely. To make this a simple process, stack uses curated package sets called snapshots.

stack has also been designed from the ground up to be user friendly, with an intuitive, discoverable command line interface. For many users, simply downloading stack and reading stack --help will be enough to get up and running. This guide provides a more gradual tour for users who prefer that learning style.

To build your project, stack uses a stack.yaml file in the root directory of your project as a sort of blueprint. That file contains a reference, called a resolver, to the snapshot which your package will be built against.

Finally, stack is isolated: it will not make changes outside of specific stack directories. stack-built files generally go in either the stack root directory (default ~/.stack) or ./.stack-work directories local to each project. The stack root directory holds packages belonging to snapshots and any stack-installed versions of GHC. Stack will not tamper with any system version of GHC or interfere with packages installed by cabal or any other build tools.

NOTE In this guide, we’ll use commands as run on a GNU/Linux system (specifically Ubuntu 14.04, 64-bit) and share output from that. Output on other systems — or with different versions of stack — will be slightly different, but all commands work cross-platform, unless explicitly stated otherwise.

Downloading and Installation

The documentation dedicated to downloading stack has the most up-to-date information for a variety of operating systems, including multiple GNU/Linux flavors. Instead of repeating that content here, please go check out that page and come back here when you can successfully run stack --version. The rest of this session will demonstrate the installation procedure on a vanilla Ubuntu 14.04 machine.

# Starting with a *really* bare machine
michael@d30748af6d3d:~$ sudo apt-get install wget
# Demonstrate that stack really isn't available
michael@d30748af6d3d:~$ stack
-bash: stack: command not found
# Get the signing key for the package repo
michael@d30748af6d3d:~$ wget -q -O- https://s3.amazonaws.com/download.fpcomplete.com/ubuntu/fpco.key | sudo apt-key add -
OK
michael@d30748af6d3d:~$ echo 'deb http://download.fpcomplete.com/ubuntu/trusty stable main'|sudo tee /etc/apt/sources.list.d/fpco.list
deb http://download.fpcomplete.com/ubuntu/trusty stable main
michael@d30748af6d3d:~$ sudo apt-get update && sudo apt-get install stack -y
# downloading...
michael@d30748af6d3d:~$ stack --version
Version 0.1.3.1, Git revision 908b04205e6f436d4a5f420b1c6c646ed2b804d7

With stack now up and running, you’re good to go. Though not required, we recommend setting your PATH environment variable to include $HOME/.local/bin:

michael@d30748af6d3d:~$ echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc

Hello World Example

With stack installed, let’s create a new project from a template and walk through the most common stack commands.

stack new

We’ll start off with the stack new command to create a new project. We’ll call our project helloworld, and we’ll use the new-template project template:

michael@d30748af6d3d:~$ stack new helloworld new-template

For this first stack command, there’s quite a bit of initial setup it needs to do (such as downloading the list of packages available upstream), so you’ll see a lot of output. Though your exact results may vary, below is an example of the sort of output you will see. Over the course of this guide a lot of the content will begin to make more sense:

Downloading template "new-template" to create project "helloworld" in helloworld/ ...
Using the following authorship configuration:
author-email: example@example.com
author-name: Example Author Name
Copy these to /home/michael/.stack/config.yaml and edit to use different values.
Writing default config file to: /home/michael/helloworld/stack.yaml
Basing on cabal files:
- /home/michael/helloworld/helloworld.cabal

Downloaded lts-3.2 build plan.
Caching build plan
Fetched package index.
Populated index cache.
Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/helloworld/stack.yaml

We now have a project in the helloworld directory!

stack setup

Instead of assuming you want stack to download and install GHC for you, it asks you to do this as a separate command: setup. If we don’t run stack setup now, we’ll later see a message that we are missing the right GHC version.

Let’s run stack setup:

michael@d30748af6d3d:~/helloworld$ stack setup
Downloaded ghc-7.10.2.
Installed GHC.
stack will use a locally installed GHC
For more information on paths, see 'stack path' and 'stack exec env'
To use this GHC and packages outside of a project, consider using:
stack ghc, stack ghci, stack runghc, or stack exec

It doesn’t come through in the output here, but you’ll get intermediate download percentage statistics while the download is occurring. This command may take some time, depending on download speeds.

NOTE: GHC will be installed to your global stack root directory, so calling ghc on the command line won’t work. See the stack exec, stack ghc, and stack runghc commands below for more information.

stack build

Next, we’ll run the most important stack command: stack build.

NOTE: If you forgot to run stack setup in the previous step you’ll get an error:

michael@d30748af6d3d:~$ cd helloworld/
michael@d30748af6d3d:~/helloworld$ stack build
No GHC found, expected version 7.10.2 (x86_64) (based on resolver setting in /home/michael/helloworld/stack.yaml).
Try running stack setup

stack needs GHC in order to build your project, and stack setup must be run to check whether GHC is available (and install it if not).

Having run stack setup successfully, stack build should build our project:

michael@d30748af6d3d:~/helloworld$ stack build
helloworld-0.1.0.0: configure
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build
Preprocessing library helloworld-0.1.0.0...
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Lib.o )
In-place registering helloworld-0.1.0.0...
Preprocessing executable 'helloworld-exe' for helloworld-0.1.0.0...
[1 of 1] Compiling Main             ( app/Main.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-exe/helloworld-exe-tmp/Main.o )
Linking .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-exe/helloworld-exe ...
helloworld-0.1.0.0: install
Installing library in
/home/michael/helloworld/.stack-work/install/x86_64-linux/lts-3.2/7.10.2/lib/x86_64-linux-ghc-7.10.2/helloworld-0.1.0.0-6urpPe0MO7OHasGCFSyIAT
Installing executable(s) in
/home/michael/helloworld/.stack-work/install/x86_64-linux/lts-3.2/7.10.2/bin
Registering helloworld-0.1.0.0...

stack exec

Looking closely at the output of the previous command, you can see that it built both a library called “helloworld” and an executable called “helloworld-exe”. We’ll explain more in the next section, but, for now, just notice that the executables are installed in our project’s ./.stack-work directory.

Now, let’s use stack exec to run our executable (which just outputs the string “someFunc”):

michael@d30748af6d3d:~/helloworld$ stack exec helloworld-exe
someFunc

stack exec works by providing the same reproducible environment that was used to build your project to the command that you are running. Thus, it knew where to find helloworld-exe even though it is hidden in the ./.stack-work directory.

stack test

Finally, like all good software, helloworld actually has a test suite. Let’s run it with stack test:

michael@d30748af6d3d:~/helloworld$ stack test
NOTE: the test command is functionally equivalent to 'build --test'
helloworld-0.1.0.0: configure (test)
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build (test)
Preprocessing library helloworld-0.1.0.0...
In-place registering helloworld-0.1.0.0...
Preprocessing test suite 'helloworld-test' for helloworld-0.1.0.0...
[1 of 1] Compiling Main             ( test/Spec.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-test/helloworld-test-tmp/Main.o )
Linking .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-test/helloworld-test ...
helloworld-0.1.0.0: test (suite: helloworld-test)
Test suite not yet implemented

Reading the output, you’ll see that stack first builds the test suite and then automatically runs it for us. For both the build and test command, already built components are not built again. You can see this by running stack build and stack test a second time:

michael@d30748af6d3d:~/helloworld$ stack build
michael@d30748af6d3d:~/helloworld$ stack test
NOTE: the test command is functionally equivalent to 'build --test'
helloworld-0.1.0.0: test (suite: helloworld-test)
Test suite not yet implemented

Inner Workings of stack

In this subsection, we’ll dissect the helloworld example in more detail.

Files in helloworld

Before studying stack more, let’s understand our project a bit better.

michael@d30748af6d3d:~/helloworld$ find * -type f
LICENSE
Setup.hs
app/Main.hs
helloworld.cabal
src/Lib.hs
stack.yaml
test/Spec.hs

The app/Main.hs, src/Lib.hs, and test/Spec.hs files are all Haskell source files that compose the actual functionality of our project (we won’t dwell on them here). The LICENSE file has no impact on the build, but is there for informational/legal purposes only. The files of interest here are Setup.hs, helloworld.cabal, and stack.yaml.

The Setup.hs file is a component of the Cabal build system which stack uses. It’s technically not needed by stack, but it is still considered good practice in the Haskell world to include it. The file we’re using is straight boilerplate:

import Distribution.Simple
main = defaultMain

Next, let’s look at our stack.yaml file, which gives our project-level settings:

flags: {}
packages:
- '.'
extra-deps: []
resolver: lts-3.2

If you’re familiar with YAML, you may recognize that the flags and extra-deps keys have empty values. We’ll see more interesting usages for these fields later. Let’s focus on the other two fields. packages tells stack which local packages to build. In our simple example, we have only a single package in our project, located in the same directory, so '.' suffices. However, stack has powerful support for multi-package projects, which we’ll elaborate on as this guide progresses.

The final field is resolver. This tells stack how to build your package: which GHC version to use, versions of package dependencies, and so on. Our value here says to use LTS Haskell version 3.2, which implies GHC 7.10.2 (which is why stack setup installs that version of GHC). There are a number of values you can use for resolver, which we’ll cover later.

The final file of import is helloworld.cabal. stack is built on top of the Cabal build system. In Cabal, we have individual packages, each of which contains a single .cabal file. The .cabal file can define 1 or more components: a library, executables, test suites, and benchmarks. It also specifies additional information such as library dependencies, default language pragmas, and so on.

In this guide, we’ll discuss the bare minimum necessary to understand how to modify a .cabal file. Haskell.org has the definitive reference for the .cabal file format.

The setup command

As we saw above, the setup command installed GHC for us. Just for kicks, let’s run setup a second time:

michael@d30748af6d3d:~/helloworld$ stack setup
stack will use a locally installed GHC
For more information on paths, see 'stack path' and 'stack exec env'
To use this GHC and packages outside of a project, consider using:
stack ghc, stack ghci, stack runghc, or stack exec

Thankfully, the command is smart enough to know not to perform an installation twice. setup will either use the first GHC it finds on your PATH, or a locally installed version. As the command output above indicates, you can use stack path for quite a bit of path information (which we’ll play with more later). For now, we’ll just look at where GHC is installed:

michael@d30748af6d3d:~/helloworld$ stack exec which ghc
/home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/bin/ghc

As you can see from that path (and as emphasized earlier), the installation is placed to not interfere with any other GHC installation, whether system-wide or even different GHC versions installed by stack.

The build command

The build command is the heart and soul of stack. It is the engine that powers building your code, testing it, getting dependencies, and more. Quite a bit of the remainder of this guide will cover more advanced build functions and features, such as building tests and Haddocks at the same time, or constantly rebuilding, blocking on file changes.

On a philosophical note: Running the build command twice with the same options and arguments should generally be a no-op (besides things like rerunning test suites), and should, in general, produce a reproducible result between different runs.

Adding dependencies

Let’s say we decide to modify our helloworld source a bit to use a new library, perhaps the ubiquitous text package. For example:

{-# LANGUAGE OverloadedStrings #-}
module Lib
    ( someFunc
    ) where

import qualified Data.Text.IO as T

someFunc :: IO ()
someFunc = T.putStrLn "someFunc"

When we try to build this, things don’t go as expected:

michael@d30748af6d3d:~/helloworld$ stack build
helloworld-0.1.0.0-c91e853ce4bfbf6d394f54b135573db8: unregistering (local file changes)
helloworld-0.1.0.0: configure
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build
Preprocessing library helloworld-0.1.0.0...

/home/michael/helloworld/src/Lib.hs:6:18:
    Could not find module `Data.Text.IO'
    Use -v to see a list of the files searched for.

--  While building package helloworld-0.1.0.0 using:
      /home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/bin/runhaskell -package=Cabal-1.22.4.0 -clear-package-db -global-package-db -package-db=/home/michael/.stack/snapshots/x86_64-linux/lts-3.2/7.10.2/pkgdb/ /tmp/stack5846/Setup.hs --builddir=.stack-work/dist/x86_64-linux/Cabal-1.22.4.0/ build exe:helloworld-exe --ghc-options -hpcdir .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/hpc/.hpc/ -ddump-hi -ddump-to-file
    Process exited with code: ExitFailure 1

Notice that it says “Could not find module.” This means that the package containing the module in question is not available. To tell stack to use text, you need to add it to your .cabal file — specifically in your build-depends section, like this:

library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                       -- This next line is the new one
                     , text
  default-language:    Haskell2010

Now if we rerun stack build, we should get a successful result:

michael@d30748af6d3d:~/helloworld$ stack build
text-1.2.1.3: download
text-1.2.1.3: configure
text-1.2.1.3: build
text-1.2.1.3: install
helloworld-0.1.0.0: configure
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build
Preprocessing library helloworld-0.1.0.0...
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Lib.o )
In-place registering helloworld-0.1.0.0...
Preprocessing executable 'helloworld-exe' for helloworld-0.1.0.0...
[1 of 1] Compiling Main             ( app/Main.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-exe/helloworld-exe-tmp/Main.o ) [Lib changed]
Linking .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-exe/helloworld-exe ...
helloworld-0.1.0.0: install
Installing library in
/home/michael/helloworld/.stack-work/install/x86_64-linux/lts-3.2/7.10.2/lib/x86_64-linux-ghc-7.10.2/helloworld-0.1.0.0-HI1deOtDlWiAIDtsSJiOtw
Installing executable(s) in
/home/michael/helloworld/.stack-work/install/x86_64-linux/lts-3.2/7.10.2/bin
Registering helloworld-0.1.0.0...
Completed all 2 actions.

This output means that the text package was downloaded, configured, built, and locally installed. Once that was done, we moved on to building our local package (helloworld). At no point did we need to ask stack to build dependencies — it does so automatically.

extra-deps

Let’s try a more off-the-beaten-track package: the joke acme-missiles package. Our source code is simple:

module Lib
    ( someFunc
    ) where

import Acme.Missiles

someFunc :: IO ()
someFunc = launchMissiles
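
As with text, the package also has to be listed in the .cabal file’s build-depends. A sketch of that change (the exact version bounds are up to you):

library
  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                     , acme-missiles
  default-language:    Haskell2010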

Even with acme-missiles added to the .cabal file, we get a new type of error message from stack build:

michael@d30748af6d3d:~/helloworld$ stack build
While constructing the BuildPlan the following exceptions were encountered:

--  While attempting to add dependency,
    Could not find package acme-missiles in known packages

--  Failure when adding dependencies:
      acme-missiles: needed (-any), latest is 0.3, but not present in build plan
    needed for package: helloworld-0.1.0.0

Recommended action: try adding the following to your extra-deps in /home/michael/helloworld/stack.yaml
- acme-missiles-0.3

You may also want to try the 'stack solver' command

It says acme-missiles is “not present in build plan.” This brings us to the next major topic in using stack.

Curated package sets

Remember above when stack new selected the lts-3.2 resolver for us? That defined our build plan and available packages. When we tried using the text package, it just worked, because it was part of the lts-3.2 package set. But acme-missiles is not part of that package set, so building failed.

To add this new dependency, we’ll use the extra-deps field in stack.yaml to define extra dependencies not present in the resolver. With that change, our stack.yaml looks like:

flags: {}
packages:
- '.'
extra-deps:
- acme-missiles-0.3 # not in lts-3.2
resolver: lts-3.2

Now stack build will succeed.

With that out of the way, let’s dig a little bit more into these package sets, also known as snapshots. We mentioned lts-3.2, and you can get quite a bit of information about it at https://www.stackage.org/lts-3.2, including:

  • The appropriate resolver value (resolver: lts-3.2, as we used above)
  • The GHC version used
  • A full list of all packages available in this snapshot
  • The ability to perform a Hoogle search on the packages in this snapshot
  • A list of all modules in a snapshot, which can be useful when trying to determine which package to add to your .cabal file

You can also see a list of all available snapshots. You’ll notice two flavors: LTS (for “Long Term Support”) and Nightly. You can read more about them on the LTS Haskell Github page. If you’re not sure which to use, start with LTS Haskell (which stack will lean towards by default as well).

Resolvers and changing your compiler version

Let’s explore package sets a bit further. Instead of lts-3.2, let’s change our stack.yaml file to use nightly-2015-08-26.
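
The only change is the resolver line; for clarity, the whole file now looks like this (a sketch, keeping the extra-dep we added earlier):

flags: {}
packages:
- '.'
extra-deps:
- acme-missiles-0.3
resolver: nightly-2015-08-26

With that change, rerunning stack build will produce: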

michael@d30748af6d3d:~/helloworld$ stack build
Downloaded nightly-2015-08-26 build plan.
Caching build plan
stm-2.4.4: configure
stm-2.4.4: build
stm-2.4.4: install
acme-missiles-0.3: configure
acme-missiles-0.3: build
acme-missiles-0.3: install
helloworld-0.1.0.0: configure
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build
Preprocessing library helloworld-0.1.0.0...
In-place registering helloworld-0.1.0.0...
Preprocessing executable 'helloworld-exe' for helloworld-0.1.0.0...
Linking .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-exe/helloworld-exe ...
helloworld-0.1.0.0: install
Installing library in
/home/michael/helloworld/.stack-work/install/x86_64-linux/nightly-2015-08-26/7.10.2/lib/x86_64-linux-ghc-7.10.2/helloworld-0.1.0.0-6cKaFKQBPsi7wB4XdqRv8w
Installing executable(s) in
/home/michael/helloworld/.stack-work/install/x86_64-linux/nightly-2015-08-26/7.10.2/bin
Registering helloworld-0.1.0.0...
Completed all 3 actions.

We can also change resolvers on the command line, which can be useful in a Continuous Integration (CI) setting, like on Travis. For example:

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-3.1 build
Downloaded lts-3.1 build plan.
Caching build plan
stm-2.4.4: configure
# Rest is the same, no point copying it

When passed on the command line, you also get some additional “short-cut” versions of resolvers: --resolver nightly will use the newest Nightly resolver available, --resolver lts will use the newest LTS, and --resolver lts-2 will use the newest LTS in the 2.X series. The reason these are only available on the command line and not in your stack.yaml file is that using them:

  1. Will slow down your build (since stack then needs to download information on the latest available LTS each time it builds)
  2. Produces unreliable results (since a build run today may proceed differently tomorrow because of changes outside of your control)

Changing GHC versions

Finally, let’s try using an older LTS snapshot. We’ll use the newest 2.X snapshot:

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-2 build
Selected resolver: lts-2.22
Downloaded lts-2.22 build plan.
Caching build plan
No GHC found, expected version 7.8.4 (x86_64) (based on resolver setting in /home/michael/helloworld/stack.yaml). Try running stack setup

This fails, because GHC 7.8.4 (which lts-2.22 uses) is not available on our system. So, we see that different LTS versions (2 vs 3 in this case) use different GHC versions. Now, how do we get the right GHC version after changing the LTS version? One answer is to use stack setup like we did above, this time with the --resolver lts-2 option. However, there’s another method worth mentioning: the --install-ghc flag.

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-2 --install-ghc build
Selected resolver: lts-2.22
Downloaded ghc-7.8.4.
Installed GHC.
stm-2.4.4: configure
# Mostly same as before, nothing interesting to see

What’s nice about --install-ghc is:

  1. You don’t need to have an extra step in your build script
  2. It only requires downloading the information on latest snapshots once

As mentioned above, the default behavior of stack is to not install new versions of GHC automatically. We want to avoid surprising users with large downloads/installs. The --install-ghc flag simply changes that default behavior.

Other resolver values

We’ve mentioned nightly-YYYY-MM-DD and lts-X.Y values for the resolver. There are actually other options available, and the list will grow over time. At the time of writing:

  • ghc-X.Y.Z, for requiring a specific GHC version but no additional packages
  • Experimental GHCJS support
  • Experimental custom snapshot support

The most up-to-date information can always be found in the stack.yaml documentation.
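
For example, a compiler-only resolver in your stack.yaml might look like this (a minimal sketch; substitute whichever GHC version you actually want):

packages:
- '.'
resolver: ghc-7.10.2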

Existing projects

Alright, enough playing around with simple projects. Let’s take an open source package and try to build it. We’ll be ambitious and use yackage, a local package server using Yesod. To get the code, we’ll use the stack unpack command:

michael@d30748af6d3d:~$ stack unpack yackage-0.8.0
yackage-0.8.0: download
Unpacked yackage-0.8.0 to /home/michael/yackage-0.8.0/
michael@d30748af6d3d:~$ cd yackage-0.8.0/

This new directory does not have a stack.yaml file, so we need to make one first. We could do it by hand, but let’s be lazy instead with the stack init command:

michael@d30748af6d3d:~/yackage-0.8.0$ stack init
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml
michael@d30748af6d3d:~/yackage-0.8.0$ cat stack.yaml
flags:
  yackage:
    upload: true
packages:
- '.'
extra-deps: []
resolver: lts-3.2

stack init does quite a few things for you behind the scenes:

  • Creates a list of snapshots that would be good candidates.
    • The basic algorithm here is to prefer options in this order:
      • Snapshots for which you’ve already built some packages (to increase sharing of binary package databases, as we’ll discuss later)
      • Recent snapshots
      • LTS
    • These preferences can be tweaked with command line flags (see stack init --help).
  • Finds all of the .cabal files in your current directory and subdirectories (unless you use --ignore-subdirs) and determines the packages and versions they require
  • Finds a combination of snapshot and package flags that allows everything to compile

Assuming it finds a match, it will write your stack.yaml file, and everything will work. Given that LTS Haskell and Stackage Nightly have ~1400 of the most common Haskell packages, this will often be enough. However, let’s simulate a failure by adding acme-missiles to our build-depends and re-initing:

michael@d30748af6d3d:~/yackage-0.8.0$ stack init --force
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

Checking against build plan lts-3.1

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any


Checking against build plan nightly-2015-08-26

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any


Checking against build plan lts-2.22

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

    warp version 3.0.13.1 found
    - yackage requires >=3.1


There was no snapshot found that matched the package bounds in your .cabal files.
Please choose one of the following commands to get started.

    stack init --resolver lts-3.2
    stack init --resolver lts-3.1
    stack init --resolver nightly-2015-08-26
    stack init --resolver lts-2.22

You'll then need to add some extra-deps. See the
[stack.yaml documentation](yaml_configuration.html#extra-deps).

You can also try falling back to a dependency solver with:

    stack init --solver

stack has tested four different snapshots, and in every case discovered that acme-missiles is not available. Also, when testing lts-2.22, it found that the warp version provided was too old for yackage. So, what do we do?

The recommended approach is: pick a resolver, and fix the problem. Again, following the advice mentioned above, default to LTS if you don’t have a preference. In this case, the newest LTS listed is lts-3.2. Let’s pick that. stack has told us the correct command to do this. We’ll just remove our old stack.yaml first and then run it:

michael@d30748af6d3d:~/yackage-0.8.0$ rm stack.yaml
michael@d30748af6d3d:~/yackage-0.8.0$ stack init --resolver lts-3.2
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any


Selected resolver: lts-3.2
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml

As you may guess, stack build will now fail due to the missing acme-missiles. Toward the end of the error message, it says the familiar:

Recommended action: try adding the following to your extra-deps in /home/michael/yackage-0.8.0/stack.yaml
- acme-missiles-0.3

If you’re following along at home, try making the necessary stack.yaml modification to get things building.
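
If you want to check your work, the resulting stack.yaml should look roughly like this (the file stack init wrote, plus the suggested extra-dep):

flags:
  yackage:
    upload: true
packages:
- '.'
extra-deps:
- acme-missiles-0.3
resolver: lts-3.2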

Alternative solution: dependency solving

There’s another solution to consider for missing dependencies. At the end of the previous error message, it said:

You may also want to try the 'stack solver' command

This approach uses a full-blown dependency solver to look at all upstream package versions available and compare them to your snapshot selection and version ranges in your .cabal file. In order to use this feature, you’ll need the cabal executable available. Let’s build that with:

michael@d30748af6d3d:~/yackage-0.8.0$ stack build cabal-install
random-1.1: download
mtl-2.2.1: download
network-2.6.2.1: download
old-locale-1.0.0.7: download
random-1.1: configure
random-1.1: build
# ...
cabal-install-1.22.6.0: download
cabal-install-1.22.6.0: configure
cabal-install-1.22.6.0: build
cabal-install-1.22.6.0: install
Completed all 10 actions.

Now we can use stack solver:

michael@d30748af6d3d:~/yackage-0.8.0$ stack solver
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- acme-missiles-0.3

And if we’re exceptionally lazy, we can ask stack to modify our stack.yaml file for us:

michael@d30748af6d3d:~/yackage-0.8.0$ stack solver --modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- acme-missiles-0.3
Updated /home/michael/yackage-0.8.0/stack.yaml

With that change, stack build will now run.

NOTE: You should probably back up your stack.yaml before doing this, for example by committing it to Git/Mercurial/Darcs.

There’s one final approach to mention: skipping the snapshot entirely and just using dependency solving. You can do this with the --solver flag to init. This is not a commonly used workflow with stack, as you end up with a large number of extra-deps and no guarantee that the packages will compile together. For those interested, however, the option is available. You need to make sure you have both the ghc and cabal commands on your PATH. An easy way to do this is to use the stack exec command:

michael@d30748af6d3d:~/yackage-0.8.0$ stack exec -- stack init --solver --force
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Asking cabal to calculate a build plan, please wait
Selected resolver: ghc-7.10
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml

Different databases

Time to take a short break from hands-on examples and discuss a little architecture. stack has the concept of multiple databases. A database consists of a GHC package database (which contains the compiled version of a library), executables, and a few other things as well. To give you an idea:

michael@d30748af6d3d:~/helloworld$ ls .stack-work/install/x86_64-linux/lts-3.2/7.10.2/
bin  doc  flag-cache  lib  pkgdb

Databases in stack are layered. For example, the database listing we just gave is called a local database. That is layered on top of a snapshot database, which contains the libraries and executables specified in the snapshot itself. Finally, GHC itself ships with a number of libraries and executables, which forms the global database. To get a quick idea of this, we can look at the output of the stack exec ghc-pkg list command in our helloworld project:

/home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/lib/ghc-7.10.2/package.conf.d
   Cabal-1.22.4.0
   array-0.5.1.0
   base-4.8.1.0
   bin-package-db-0.0.0.0
   binary-0.7.5.0
   bytestring-0.10.6.0
   containers-0.5.6.2
   deepseq-1.4.1.1
   directory-1.2.2.0
   filepath-1.4.0.0
   ghc-7.10.2
   ghc-prim-0.4.0.0
   haskeline-0.7.2.1
   hoopl-3.10.0.2
   hpc-0.6.0.2
   integer-gmp-1.0.0.0
   pretty-1.1.2.0
   process-1.2.3.0
   rts-1.0
   template-haskell-2.10.0.0
   terminfo-0.4.0.1
   time-1.5.0.1
   transformers-0.4.2.0
   unix-2.7.1.0
   xhtml-3000.2.1
/home/michael/.stack/snapshots/x86_64-linux/nightly-2015-08-26/7.10.2/pkgdb
   stm-2.4.4
/home/michael/helloworld/.stack-work/install/x86_64-linux/nightly-2015-08-26/7.10.2/pkgdb
   acme-missiles-0.3
   helloworld-0.1.0.0

Notice that acme-missiles ends up in the local database. Anything which is not installed from a snapshot ends up in the local database. This includes: your own code, extra-deps, and in some cases even snapshot packages, if you modify them in some way. The reason we have this structure is that:

  • it lets multiple projects reuse the same binary builds of many snapshot packages,
  • but doesn’t allow different projects to “contaminate” each other by putting non-standard content into the shared snapshot database

Typically, the process by which a snapshot package is marked as modified is referred to as “promoting to an extra-dep,” meaning we treat it just like a package in the extra-deps section. This happens for a variety of reasons, including:

  • changing the version of the snapshot package
  • changing build flags
  • one of the packages that the package depends on has been promoted to an extra-dep

As you probably guessed, there are multiple snapshot databases available, e.g.:

michael@d30748af6d3d:~/helloworld$ ls ~/.stack/snapshots/x86_64-linux/
lts-2.22  lts-3.1  lts-3.2  nightly-2015-08-26

These databases don’t get layered on top of each other; they are each used separately.

In reality, you’ll rarely — if ever — interact directly with these databases, but it’s good to have a basic understanding of how they work so you can understand why rebuilding may occur at different points.

The build synonyms

Let’s look at a subset of the stack --help output:

build    Build the package(s) in this directory/configuration
install  Shortcut for 'build --copy-bins'
test     Shortcut for 'build --test'
bench    Shortcut for 'build --bench'
haddock  Shortcut for 'build --haddock'

Note that four of these commands are just synonyms for the build command. They are provided for convenience for common cases (e.g., stack test instead of stack build --test) and so that commonly expected commands just work.

What’s so special about these commands being synonyms? It allows us to make much more composable command lines. For example, we can have a command that builds executables, generates Haddock documentation (Haskell API-level docs), and builds and runs your test suites, with:

stack build --haddock --test

You can even get more inventive as you learn about other flags. For example, take the following:

stack build --pedantic --haddock --test --exec "echo Yay, it succeeded" --file-watch

This will:

  • turn on all warnings and errors
  • build your library and executables
  • generate Haddocks
  • build and run your test suite
  • run the command echo Yay, it succeeded when that completes
  • after building, watch for changes in the files used to build the project, and kick off a new build when done

install and copy-bins

It’s worth calling out the behavior of the install command and --copy-bins option, since this has confused a number of users (especially when compared to behavior of other tools like cabal-install). The install command does precisely one thing in addition to the build command: it copies any generated executables to the local bin path. You may recognize the default value for that path:

michael@d30748af6d3d:~/helloworld$ stack path --local-bin-path
/home/michael/.local/bin

That’s why the download page recommends adding that directory to your PATH environment variable. This feature is convenient, because now you can simply run executable-name in your shell instead of having to run stack exec executable-name from inside your project directory.
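
A minimal sketch of that workflow, assuming ~/.local/bin is already on your PATH and using our helloworld project from earlier:

stack install      # same as: stack build --copy-bins
helloworld-exe     # the copied executable now runs from any directory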

Since it’s such a point of confusion, let me list a number of things stack does not do specially for the install command:

  • stack will always build any necessary dependencies for your code. The install command is not necessary to trigger this behavior. If you just want to build a project, run stack build.
  • stack will not track which files it’s copied to your local bin path nor provide a way to automatically delete them. There are many great tools out there for managing installation of binaries, and stack does not attempt to replace those.
  • stack will not necessarily be creating a relocatable executable. If your executable hard-codes paths, copying the executable will not change those hard-coded paths.
    • At the time of writing, there’s no way to change those kinds of paths with stack, but see issue #848 about --prefix for future plans.

That’s really all there is to the install command: for the simplicity of what it does, it occupies a much larger mental space than is warranted.

Targets, locals, and extra-deps

We haven’t discussed this too much yet, but, in addition to having a number of synonyms and taking a number of options on the command line, the build command also takes many arguments. These are parsed in different ways, and can be used to achieve a high level of flexibility in telling stack exactly what you want to build.

We’re not going to cover the full generality of these arguments here; instead, there’s documentation covering the full build command syntax. Here, we’ll just point out a few different types of arguments:

  • You can specify a package name, e.g. stack build vector.
    • This will attempt to build the vector package, whether it’s a local package, in your extra-deps, in your snapshot, or just available upstream. If it’s just available upstream but not included in your locals, extra-deps, or snapshot, the newest version is automatically promoted to an extra-dep.
  • You can also give a package identifier, which is a package name plus version, e.g. stack build yesod-bin-1.4.14.
    • This is almost identical to specifying a package name, except it will (1) choose the given version instead of latest, and (2) error out if the given version conflicts with the version of a local package.
  • The most flexibility comes from specifying individual components, e.g. stack build helloworld:test:helloworld-test says “build the test suite component named helloworld-test from the helloworld package.”
    • In addition to this long form, you can also shorten it by skipping what type of component it is, e.g. stack build helloworld:helloworld-test, or even skip the package name entirely, e.g. stack build :helloworld-test.
  • Finally, you can specify individual directories to build to trigger building of any local packages included in those directories or subdirectories.

When you give no specific arguments on the command line (e.g., stack build), it’s the same as specifying the names of all of your local packages. If you just want to build the package for the directory you’re currently in, you can use stack build ..
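
To recap the target syntax in one place (all of these come from the bullets above):

stack build                                   # all local packages
stack build .                                 # only the package in the current directory
stack build vector                            # a package by name
stack build yesod-bin-1.4.14                  # a package by name and version
stack build helloworld:test:helloworld-test   # a single component, long form
stack build :helloworld-test                  # the same component, shortest form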

Components, --test, and --bench

Here’s one final important yet subtle point. Consider our helloworld package: it has a library component, an executable helloworld-exe, and a test suite helloworld-test. When you run stack build helloworld, how does it know which ones to build? By default, it will build the library (if any) and all of the executables but ignore the test suites and benchmarks.

This is where the --test and --bench flags come into play. If you use them, those components will also be included. So stack build --test helloworld will end up including the helloworld-test component as well.

You can bypass this implicit adding of components by being much more explicit, and stating the components directly. For example, the following will not build the helloworld-exe executable:

michael@d30748af6d3d:~/helloworld$ stack clean
michael@d30748af6d3d:~/helloworld$ stack build :helloworld-test
helloworld-0.1.0.0: configure (test)
Configuring helloworld-0.1.0.0...
helloworld-0.1.0.0: build (test)
Preprocessing library helloworld-0.1.0.0...
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/Lib.o )
In-place registering helloworld-0.1.0.0...
Preprocessing test suite 'helloworld-test' for helloworld-0.1.0.0...
[1 of 1] Compiling Main             ( test/Spec.hs, .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-test/helloworld-test-tmp/Main.o )
Linking .stack-work/dist/x86_64-linux/Cabal-1.22.4.0/build/helloworld-test/helloworld-test ...
helloworld-0.1.0.0: test (suite: helloworld-test)
Test suite not yet implemented

We first cleaned our project to clear old results so we know exactly what stack is trying to do. Notice that it builds the helloworld-test test suite, and the helloworld library (since it’s used by the test suite), but it does not build the helloworld-exe executable.

And now the final point: the last line shows that our command also runs the test suite it just built. This may surprise some people who would expect tests to only be run when using stack test, but this design decision is what allows the stack build command to be as composable as it is (as described previously). The same rule applies to benchmarks. To spell it out completely:

  • The --test and --bench flags simply state which components of a package should be built, if no explicit set of components is given
  • The default behavior for any test suite or benchmark component which has been built is to also run it

You can use the --no-run-tests and --no-run-benchmarks (from stack-0.1.4.0 and on) flags to disable running of these components. You can also use --no-rerun-tests to prevent running a test suite which has already passed and has not changed.
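
For example (a sketch combining the flags just described):

stack build --test --no-run-tests             # compile the test suites but don't run them
stack build --bench --no-run-benchmarks       # likewise for benchmarks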

NOTE: stack doesn’t build or run test suites and benchmarks for non-local packages. This is done so that running a command like stack test doesn’t need to run 200 test suites!

Multi-package projects

Until now, everything we’ve done with stack has used a single-package project. However, stack’s power truly shines when you’re working on multi-package projects. All the functionality you’d expect to work just does: dependencies between packages are detected and respected, the dependencies of all packages are built as one cohesive whole, and if anything fails to build, the build command exits appropriately.

Let’s demonstrate this with the wai-app-static and yackage packages:

michael@d30748af6d3d:~$ mkdir multi
michael@d30748af6d3d:~$ cd multi/
michael@d30748af6d3d:~/multi$ stack unpack wai-app-static-3.1.1 yackage-0.8.0
wai-app-static-3.1.1: download
Unpacked wai-app-static-3.1.1 to /home/michael/multi/wai-app-static-3.1.1/
Unpacked yackage-0.8.0 to /home/michael/multi/yackage-0.8.0/
michael@d30748af6d3d:~/multi$ stack init
Writing default config file to: /home/michael/multi/stack.yaml
Basing on cabal files:
- /home/michael/multi/yackage-0.8.0/yackage.cabal
- /home/michael/multi/wai-app-static-3.1.1/wai-app-static.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/multi/stack.yaml
michael@d30748af6d3d:~/multi$ stack build --haddock --test
# Goes off to build a whole bunch of packages

If you look at the stack.yaml, you’ll see exactly what you’d expect:

flags:
  yackage:
    upload: true
  wai-app-static:
    print: false
packages:
- yackage-0.8.0/
- wai-app-static-3.1.1/
extra-deps: []
resolver: lts-3.2

Notice that multiple directories are listed in the packages key.

In addition to local directories, you can also refer to packages available in a Git repository or in a tarball over HTTP/HTTPS. This can be useful for using a modified version of a dependency that hasn’t yet been released upstream. This is a slightly more advanced usage that we won’t go into detail with here, but it’s covered in the stack.yaml documentation.

Flags and GHC options

There are two common ways to alter how a package will install: with Cabal flags and with GHC options.

Cabal flag management

In the stack.yaml file above, you can see that stack init has detected that, for the yackage package, the upload flag can be set to true, and for wai-app-static, the print flag to false (it has chosen those values because they are the default flag values, and their dependencies are compatible with the snapshot we’re using). To change a flag setting, we can use the command line --flag option:

stack build --flag yackage:-upload

This means: when compiling the yackage package, turn off the upload flag (thus the -). Unlike other tools, stack is explicit about which package’s flag you want to change. It does this for two reasons:

  1. There’s no global meaning for Cabal flags, and therefore two packages can use the same flag name for completely different things.
  2. By following this approach, we can avoid unnecessarily recompiling snapshot packages that happen to use a flag that we’re using.

You can also change flag values on the command line for extra-dep and snapshot packages. If you do this, that package will automatically be promoted to an extra-dep, since the resulting build plan differs from what the snapshot definition alone would entail.

GHC options

GHC options follow a similar logic as in managing Cabal flags, with a few nuances to adjust for common use cases. Let’s consider:

stack build --ghc-options="-Wall -Werror"

This will set the -Wall -Werror options for all local targets. Note that this will not affect extra-dep and snapshot packages at all. This design provides us with reproducible and fast builds.

(By the way: the above GHC options have a special convenience flag: --pedantic.)

There’s one extra nuance about command line GHC options: Since they only apply to local targets, if you change your local targets, they will no longer apply to other packages. Let’s play around with an example from the wai repository, which includes the wai and warp packages, the latter depending on the former. If we run:

stack build --ghc-options=-O0 wai

It will build all of the dependencies of wai, and then build wai with all optimizations disabled. Now let’s add in warp as well:

stack build --ghc-options=-O0 wai warp

This builds the additional dependencies for warp, and then builds warp with optimizations disabled. Importantly: it does not rebuild wai, since wai’s configuration has not been altered. Now the surprising case:

michael@d30748af6d3d:~/wai$ stack build --ghc-options=-O0 warp
wai-3.0.3.0-5a49351d03cba6cbaf906972d788e65d: unregistering (flags changed from ["--ghc-options","-O0"] to [])
warp-3.1.3-a91c7c3108f63376877cb3cd5dbe8a7a: unregistering (missing dependencies: wai)
wai-3.0.3.0: configure

You may expect this to be a no-op: neither wai nor warp has changed. However, stack will instead recompile wai with optimizations enabled again, and then rebuild warp (with optimizations disabled) against this newly built wai. The reason: reproducible builds. If we’d never built wai or warp before, trying to build warp would necessitate building all of its dependencies, and it would do so with default GHC options (optimizations enabled). This dependency would include wai. So when we run:

stack build --ghc-options=-O0 warp

We want its behavior to be unaffected by any previous build steps we took. While this specific corner case does catch people by surprise, the overall goal of reproducible builds is, in the stack maintainers’ view, worth the confusion.

Final point: if you have GHC options that you’ll be regularly passing to your packages, you can add them to your stack.yaml file (starting with stack-0.1.4.0). See the documentation section on ghc-options for more information.
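
As a sketch of what that looks like (check the ghc-options documentation for the authoritative syntax; here the "*" key applies the options to all local packages):

ghc-options:
  "*": -Wall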

path

NOTE: That’s it, the heavy content of this guide is done! Everything from here on out is simple explanations of commands. Congratulations!

Generally, you don’t need to worry about where stack stores various files. But some people like to know this stuff. That’s when the stack path command is useful.

michael@d30748af6d3d:~/wai$ stack path
global-stack-root: /home/michael/.stack
project-root: /home/michael/wai
config-location: /home/michael/wai/stack.yaml
bin-path: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/bin:/home/michael/.stack/programs/x86_64-linux/ghc-7.8.4/bin:/home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ghc-paths: /home/michael/.stack/programs/x86_64-linux
local-bin-path: /home/michael/.local/bin
extra-include-dirs:
extra-library-dirs:
snapshot-pkg-db: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/pkgdb
local-pkg-db: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4/pkgdb
snapshot-install-root: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4
local-install-root: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4
snapshot-doc-root: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/doc
local-doc-root: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4/doc
dist-dir: .stack-work/dist/x86_64-linux/Cabal-1.18.1.5

In addition, stack path accepts command line arguments to state which of these keys you’re interested in, which can be convenient for scripting. As a simple example, let’s find out which versions of GHC are installed locally:

michael@d30748af6d3d:~/wai$ ls $(stack path --ghc-paths)/*.installed
/home/michael/.stack/programs/x86_64-linux/ghc-7.10.2.installed
/home/michael/.stack/programs/x86_64-linux/ghc-7.8.4.installed

(Yes, that command requires a *nix shell, and likely won’t run on Windows.)

While we’re talking about paths, to wipe our stack install completely, here’s what needs to be removed:

  1. The stack executable itself
  2. The stack root, e.g. $HOME/.stack on non-Windows systems.
    • See stack path --global-stack-root
    • On Windows, you will also need to delete the directory reported by stack path --ghc-paths
  3. Any local .stack-work directories inside a project

exec

We’ve already used stack exec multiple times in this guide. As you’ve likely already guessed, it allows you to run executables, but with a slightly modified environment. In particular: stack exec looks for executables on stack’s bin paths, and sets a few additional environment variables (like GHC_PACKAGE_PATH, which tells GHC which package databases to use).

If you want to see exactly what the modified environment looks like, try:

stack exec env

The only issue is how to distinguish flags to be passed to stack versus those for the underlying program. Thanks to the optparse-applicative library, stack follows the Unix convention of -- to separate these, e.g.:

michael@d30748af6d3d:~$ stack exec --package stm -- echo I installed the stm package via --package stm
Run from outside a project, using implicit global project config
Using latest snapshot resolver: lts-3.2
Writing global (non-project-specific) config file to: /home/michael/.stack/global/stack.yaml
Note: You can change the snapshot via the resolver field there.
I installed the stm package via --package stm

Flags worth mentioning:

  • --package foo can be used to force a package to be installed before running the given command.
  • --no-ghc-package-path can be used to stop the GHC_PACKAGE_PATH environment variable from being set. Some tools — notably cabal-install — do not behave well with that variable set.

ghci (the repl)

GHCi is the interactive GHC environment, a.k.a. the REPL. You could access it with:

stack exec ghci

But that won’t load up locally written modules for access. For that, use the stack ghci command. To then load modules from your project, use the :m command (for “module”) followed by the module name.
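
For example, inside our helloworld project, run stack ghci and then, at the GHCi prompt (a sketch; GHCi’s startup output is omitted):

:m Lib
someFunc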

ghc/runghc

You’ll sometimes want to just compile (or run) a single Haskell source file, instead of creating an entire Cabal package for it. You can use stack exec ghc or stack exec runghc for that. As simple helpers, we also provide the stack ghc and stack runghc commands, for these common cases.
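
For example, given a standalone file Foo.hs (a hypothetical file name, used here only for illustration):

stack ghc -- Foo.hs     # compile it with the GHC that stack manages
stack runghc Foo.hs     # or interpret and run it directly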

stack also offers a very useful feature for running files: a script interpreter. For too long have Haskellers felt shackled to bash or Python because it’s just too hard to create reusable source-only Haskell scripts. stack attempts to solve that. An example will be easiest to understand:

michael@d30748af6d3d:~$ cat turtle.hs
#!/usr/bin/env stack
-- stack --resolver lts-3.2 --install-ghc runghc --package turtle
{-# LANGUAGE OverloadedStrings #-}
import Turtle
main = echo "Hello World!"
michael@d30748af6d3d:~$ chmod +x turtle.hs
michael@d30748af6d3d:~$ ./turtle.hs
Run from outside a project, using implicit global project config
Using resolver: lts-3.2 specified on command line
hashable-1.2.3.3: configure
# installs some more dependencies
Completed all 22 actions.
Hello World!
michael@d30748af6d3d:~$ ./turtle.hs
Run from outside a project, using implicit global project config
Using resolver: lts-3.2 specified on command line
Hello World!

If you’re on Windows: you can run stack turtle.hs instead of ./turtle.hs.

The first line is the usual “shebang” to use stack as a script interpreter. The second line, which is required, provides additional options to stack (due to the common limitation of the “shebang” line only being allowed a single argument). In this case, the options tell stack to use the lts-3.2 resolver, automatically install GHC if it is not already installed, and ensure the turtle package is available.

The first run can take a while (as it has to download GHC if necessary and build dependencies), but subsequent runs are able to reuse everything already built, and are therefore quite fast.

Finding project configs, and the implicit global

Whenever you run something with stack, it needs a stack.yaml project file. The algorithm stack uses to find this is:

  1. Check for a --stack-yaml option on the command line
  2. Check for a STACK_YAML environment variable
  3. Check the current directory and all ancestor directories for a stack.yaml

The first two provide a convenient method for using an alternate configuration. For example: stack build --stack-yaml stack-7.8.yaml can be used by your CI system to check your code against GHC 7.8. Setting the STACK_YAML environment variable can be convenient if you’re going to be running commands like stack ghc in other directories, but you want to use the configuration you defined in a specific project.
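
Both approaches in shell form (stack-7.8.yaml is the hypothetical alternate configuration from the example above):

stack --stack-yaml stack-7.8.yaml build
STACK_YAML=stack-7.8.yaml stack build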

If stack does not find a stack.yaml in any of the three specified locations, the implicit global logic kicks in. You’ve probably noticed that phrase a few times in the output from commands above. Implicit global is essentially a hack to allow stack to be useful in a non-project setting. When no implicit global config file exists, stack creates one for you with the latest LTS snapshot as the resolver. This allows you to do things like:

  • compile individual files easily with stack ghc
  • build executables without starting a project, e.g. stack install pandoc

Keep in mind that there’s nothing magical about this implicit global configuration. It has no impact on projects at all. Every package you install with it is put into isolated databases just like everywhere else. The only magic is that it’s the catch-all project whenever you’re running stack somewhere else.

stack.yaml vs .cabal files

Now that we’ve covered a lot of stack use cases, this quick summary of stack.yaml vs .cabal files will hopefully make sense and be a good reminder for future uses of stack:

  • A project can have multiple packages.
  • Each project has a stack.yaml.
  • Each package has a .cabal file.
  • The .cabal file specifies which packages are dependencies.
  • The stack.yaml file specifies which packages are available to be used.
  • .cabal specifies the components, modules, and build flags provided by a package
  • stack.yaml can override the flag settings for individual packages
  • stack.yaml specifies which packages to include

Comparison to other tools

stack is not the only tool around for building Haskell code. stack came into existence due to limitations with some of the existing tools. If you’re unaffected by those limitations and are happily building Haskell code, you may not need stack. If you’re suffering from some of the common problems in other tools, give stack a try instead.

If you’re a new user who has no experience with other tools, we recommend going with stack. The defaults match modern best practices in Haskell development, and there are fewer corner cases you need to be aware of. You can develop Haskell code with other tools, but you probably want to spend your time writing code, not convincing a tool to do what you want.

Before jumping into the differences, let me clarify an important similarity:

Same package format. stack, cabal-install, and presumably all other tools share the same underlying Cabal package format, consisting of a .cabal file, modules, etc. This is a Good Thing: we can share the same set of upstream libraries, and collaboratively work on the same project with stack, cabal-install, and NixOS. In that sense, we’re sharing the same ecosystem.

Now the differences:

  • Curation vs dependency solving as a default.
    • stack defaults to using curation (Stackage snapshots, LTS Haskell, Nightly, etc.) instead of dependency solving, which is cabal-install’s default. This is just a default: as described above, stack can use dependency solving if desired, and cabal-install can use curation. However, most users will stick to the defaults. The stack team firmly believes that the majority of users want to simply ignore dependency resolution nightmares and get a valid build plan from day 1, which is why we’ve made this selection of default behavior.
  • Reproducible.
    • stack goes to great lengths to ensure that stack build today does the same thing tomorrow. cabal-install does not: build plans can be affected by the presence of preinstalled packages, and running cabal update can cause a previously successful build to fail. With stack, changing the build plan is always an explicit decision.
  • Automatically building dependencies.
    • In cabal-install, you need to use cabal install to trigger dependency building. This is somewhat necessary due to the previous point, since building dependencies can, in some cases, break existing installed packages. So for example, in stack, stack test does the same job as cabal install --run-tests, though the latter additionally performs an installation that you may not want. The closest command equivalent is cabal install --enable-tests --only-dependencies && cabal configure --enable-tests && cabal build && cabal test (newer versions of cabal-install may make this command shorter).
  • Isolated by default.
    • This has been a pain point for new stack users. In cabal, the default behavior is a non-isolated build where working on two projects can cause the user package database to become corrupted. The cabal solution to this is sandboxes. stack, however, provides this behavior by default via its databases. In other words: when you use stack, there’s no need for sandboxes, everything is (essentially) sandboxed by default.

Other tools for comparison (including active and historical)

  • cabal-dev (deprecated in favor of cabal-install)
  • cabal-meta inspired a lot of the multi-package functionality of stack. If you’re still using cabal-install, cabal-meta is relevant. For stack work, the feature set is fully subsumed by stack.
  • cabal-src is mostly irrelevant in the presence of both stack and cabal sandboxes, both of which make it easy to add additional package sources. The mega-sdist executable that ships with cabal-src is, however, still relevant. Its functionality may some day be folded into stack.
  • stackage-cli was an initial attempt to make cabal-install work more easily with curated snapshots, but due to a slight impedance mismatch between cabal.config constraints and snapshots, it did not work as well as hoped. It is deprecated in favor of stack.

More resources

There are lots of resources available for learning more about stack:

Fun features

This is just a quick collection of fun and useful features stack supports.

Templates

We started off using the new command to create a project. stack provides multiple templates to start a new project from:

michael@d30748af6d3d:~$ stack templates
chrisdone
hakyll-template
new-template
simple
yesod-minimal
yesod-mongo
yesod-mysql
yesod-postgres
yesod-postgres-fay
yesod-simple
yesod-sqlite
michael@d30748af6d3d:~$ stack new my-yesod-project yesod-simple
Downloading template "yesod-simple" to create project "my-yesod-project" in my-yesod-project/ ...
Using the following authorship configuration:
author-email: example@example.com
author-name: Example Author Name
Copy these to /home/michael/.stack/config.yaml and edit to use different values.
Writing default config file to: /home/michael/my-yesod-project/stack.yaml
Basing on cabal files:
- /home/michael/my-yesod-project/my-yesod-project.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/my-yesod-project/stack.yaml

To add more templates, see the stack-templates repository.

IDE

stack has a work-in-progress suite of editor integrations, to do things like getting type information in Emacs. For more information, see stack-ide.

Visualizing dependencies

If you’d like to get some insight into the dependency tree of your packages, you can use the stack dot command and Graphviz. More information is available in the Dependency visualization documentation.

Travis with caching

Many people use Travis CI to test out a project for every Git push. We have a Wiki page devoted to Travis. However, for most people, the following example will be sufficient to get started:

# Use new container infrastructure to enable caching
sudo: false

# Choose a lightweight base image; we provide our own build tools.
language: c

# GHC depends on GMP. You can add other dependencies here as well.
addons:
  apt:
    packages:
    - libgmp-dev

# The different configurations we want to test. You could also do things like
# change flags or use --stack-yaml to point to a different file.
env:
- ARGS=""
- ARGS="--resolver lts-2"
- ARGS="--resolver lts-3"
- ARGS="--resolver lts"
- ARGS="--resolver nightly"

before_install:
# Download and unpack the stack executable
- mkdir -p ~/.local/bin
- export PATH=$HOME/.local/bin:$PATH
- travis_retry curl -L https://www.stackage.org/stack/linux-x86_64 | tar xz --wildcards --strip-components=1 -C ~/.local/bin '*/stack'

# This line does all of the work: installs GHC if necessary, builds the library,
# executables, and test suites, and runs the test suites. --no-terminal works
# around some quirks in Travis's terminal implementation.
script: stack $ARGS --no-terminal --install-ghc test --haddock

# Caching so the next build will be fast too.
cache:
  directories:
  - $HOME/.stack

Not only will this build and test your project against multiple GHC versions and snapshots, but it will cache your snapshot built packages, meaning that subsequent builds will be much faster.

Once Travis whitelists the stack .deb files, we’ll be able to simply include stack in the addons section, and automatically use the newest version of stack, avoiding that complicated before_install section. This is being tracked in the apt-source-whitelist and apt-package-whitelist issue trackers.

In case you’re wondering: we need --no-terminal because stack does some fancy sticky display on smart terminals to give nicer status and progress messages, and the terminal detection is broken on Travis.

Shell auto-completion

Love tab-completion of commands? You’re not alone. If you’re on bash, just run the following (or add it to .bashrc):

eval "$(stack --bash-completion-script stack)"

For more information and other shells, see the Shell auto-completion wiki page.

Docker

stack provides two built-in Docker integrations. Firstly, you can build your code inside a Docker image, which means:

  • even more reproducibility to your builds, since you and the rest of your team will always have the same system libraries
  • the Docker images ship with entire precompiled snapshots. That means you have a large initial download, but much faster builds

For more information, see the Docker-integration documentation.

stack can also generate Docker images for you containing your built executables. This feature is great for automating deployments from CI. This feature is not yet well-documented, but the basics are to add a section like the following to stack.yaml:

image:
  # YOU NEED A `container` YAML SECTION FOR `stack image container`
  container:
    # YOU NEED A BASE IMAGE NAME. STACK LAYERS EXES ON TOP OF
    # THE BASE IMAGE. PREPARE YOUR PROJECT IMAGE IN ADVANCE. PUT
    # ALL YOUR RUNTIME DEPENDENCIES IN THE IMAGE.
    base: "fpco/ubuntu-with-libgmp:14.04"
    # YOU CAN OPTIONALLY NAME THE IMAGE. STACK WILL USE THE PROJECT
    # DIRECTORY NAME IF YOU LEAVE OUT THIS OPTION.
    name: "fpco/hello-world"
    # OPTIONALLY ADD A HASH OF LOCAL PROJECT DIRECTORIES AND THEIR
    # DESTINATIONS INSIDE THE DOCKER IMAGE.
    add:
      man/: /usr/local/share/man/
    # OPTIONALLY SPECIFY A LIST OF EXECUTABLES. STACK WILL CREATE
    # A TAGGED IMAGE FOR EACH IN THE LIST. THESE IMAGES WILL HAVE
    # THEIR RESPECTIVE "ENTRYPOINT" SET.
    entrypoints:
      - stack

and then run stack image container, followed by docker images to list the resulting images.

Nix

stack provides an integration with Nix, providing you with the same two benefits as the first Docker integration discussed above:

  • more reproducible builds, since fixed versions of any system libraries and commands required to build the project are automatically built using Nix and managed locally per-project. These system packages never conflict with any existing versions of these libraries on your system. That they are managed locally to the project means that you don’t need to alter your system in any way to build any odd project pulled from the Internet.
  • implicit sharing of system packages between projects, so you don’t have more copies on-disk than you need to.

Both Docker and Nix are methods to isolate builds and thereby make them more reproducible. They just differ in the means of achieving this isolation. Nix provides slightly weaker isolation guarantees than Docker, but is more lightweight and more portable (Linux and OS X mainly, but also Windows). For more on Nix, its command-line interface and its package description language, read the Nix manual. But keep in mind that the point of stack’s support is to obviate the need to write any Nix code in the common case or even to learn how to use the Nix tools (they’re called under the hood).

For more information, see the Nix-integration documentation.

Power user commands

The following commands are a little more powerful, and won’t be needed by all users. Here’s a quick rundown:

  • stack update will download the most recent set of packages from your package indices (e.g. Hackage). Generally, stack runs this for you automatically when necessary, but it can be useful to do this manually sometimes (e.g., before running stack solver, to guarantee you have the most recent upstream packages available).
  • stack unpack is a command we’ve already used quite a bit for examples, but most users won’t use it regularly. It does what you’d expect: downloads a tarball and unpacks it.
  • stack sdist generates an uploadable tarball (an sdist) containing your package code
  • stack upload uploads an sdist to Hackage. In the future, it will also perform automatic GPG signing of your packages for additional security, when configured.
    • --sign provides a way to GPG sign your package & submit the result to sig.commercialhaskell.org for storage in the sig-archive git repo. (Signatures will be used later to verify package integrity.)
  • stack upgrade will build a new version of stack from source.
    • --git is a convenient way to get the most recent version from master for those testing and living on the bleeding edge.
  • stack setup --upgrade-cabal can install a newer version of the Cabal library, used for performing actual builds. You shouldn’t generally do this, since new Cabal versions may introduce incompatibilities with package sets, but it can be useful if you’re trying to test a specific bugfix.
  • stack list-dependencies lists all of the packages and versions used for a project
  • The stack sig subcommand can help you with GPG signing & verification
    • sign will sign an sdist tarball and submit the signature to sig.commercialhaskell.org for storage in the sig-archive git repo. (Signatures will be used later to verify package integrity.)

Debugging

The following command installs with profiling enabled:

stack install --enable-executable-profiling --enable-library-profiling --ghc-options="-rtsopts"

This command will allow you to use various tools to profile the time, allocation, heap, and more of a program. The -prof GHC option is unnecessary and will result in a warning. Additional compilation options can be added to --ghc-options if needed. To see a general overview of the time and allocation of a program called main compiled with the above command, you can run

./main +RTS -p

to generate a main.prof file containing the requested profiling information. For more commands and uses, see the official GHC chapter on profiling, the Haskell wiki, and the chapter on profiling in Real World Haskell.

YAML Configuration

This page is intended to fully document all configuration options available in the stack.yaml file. Note that this page is likely to be both incomplete and sometimes inaccurate. If you see such cases, please update the page, and if you’re not sure how, open an issue labeled “question”.

The stack.yaml configuration options break down into project specific options in:

  • <project dir>/stack.yaml

and non-project specific options in:

  • /etc/stack/config.yaml – for system global non-project default options
  • ~/.stack/config.yaml – for user non-project default options
  • The project file itself may also contain non-project specific options

Note: When stack is invoked outside a stack project it will source project specific options from ~/.stack/global/stack.yaml. Options in this file will be ignored for a project with its own <project dir>/stack.yaml.

Project config

Project specific options are only valid in the stack.yaml file local to a project, not in the user or global config files.

packages

(Mercurial support since 0.1.10.0)

This lists all local packages. In the simplest usage, it will be a list of directories, e.g.:

packages:
- dir1
- dir2
- dir3

However, it supports three other location types: an HTTP URL referring to a tarball that can be downloaded, and information on a Git or Mercurial repo to clone, together with the SHA1 of the commit to use. For example:

packages:
- some-directory
- https://example.com/foo/bar/baz-0.0.2.tar.gz
- location:
    git: git@github.com:commercialhaskell/stack.git
    commit: 6a86ee32e5b869a877151f74064572225e1a0398
- location:
    hg: https://example.com/hg/repo
    commit: da39a3ee5e6b4b0d3255bfef95601890afd80709

Note: it is highly recommended that you only use SHA1 values for a Git or Mercurial commit. Other values may work, but they are not officially supported, and may result in unexpected behavior (namely, stack will not automatically pull to update to new versions).

stack further allows you to tweak your packages by specifying two additional settings:

  • A list of subdirectories to build (useful for mega-repos like wai or digestive-functors)
  • Whether a package should be treated as a dependency: a package marked extra-dep: true will only be built if demanded by a non-dependency, and its test suites and benchmarks will not be run. This is useful for tweaking upstream packages.

To tie this all together, here’s an example of the different settings:

packages:
- local-package
- location: vendor/binary
  extra-dep: true
- location:
    git: git@github.com:yesodweb/wai
    commit: 2f8a8e1b771829f4a8a77c0111352ce45a14c30f
  subdirs:
  - auto-update
  - wai

extra-deps

This is a list of package identifiers for additional packages from upstream to be included. This is usually used to augment an LTS Haskell or Stackage Nightly snapshot with a package that is not present or is at an older version than you wish to use.

extra-deps:
- acme-missiles-0.3

resolver

Specifies how dependencies are resolved. There are currently four resolver types:

  • LTS Haskell snapshots, e.g. resolver: lts-2.14
  • Stackage Nightly snapshot, e.g. resolver: nightly-2015-06-16
  • No snapshot, just use packages shipped with the compiler
    • For GHC this looks like resolver: ghc-7.10.2
    • For GHCJS this looks like resolver: ghcjs-0.1.0_ghc-7.10.2.
  • Custom snapshot

Each of these resolvers will also determine what constraints are placed on the compiler version. See the compiler-check option for some additional control over compiler version.
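
To tie the project-specific options together, a minimal stack.yaml using an LTS resolver, the current directory as the only local package, and one extra dependency could look like this (a sketch reusing the example packages from above):

resolver: lts-2.14
packages:
- .
extra-deps:
- acme-missiles-0.3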

flags

Flags can be set for each package separately, e.g.

flags:
  package-name:
    flag-name: true

Flags will only affect packages in your packages and extra-deps settings. Packages that come from the snapshot or the global database are not affected.

image

The image settings are used for the creation of container images using stack image container, e.g.

image:
  container:
    base: "fpco/stack-build"
    add:
      static: /data/static

base is the Docker image that will be built upon. The add lines allow you to add additional directories to your image. You can also specify entrypoints. Your executables are placed in /usr/local/bin.
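
For example, to also set an entrypoint for the image (a sketch; my-project-exe is a hypothetical executable name):

image:
  container:
    base: "fpco/stack-build"
    add:
      static: /data/static
    entrypoints:
    - my-project-exe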

Non-project config

Non-project config options may go in the global config (/etc/stack/config.yaml) or the user config (~/.stack/config.yaml).

docker

See Docker integration.

nix

(since 0.1.10.0)

See Nix integration.

connection-count

Integer indicating how many simultaneous downloads are allowed to happen

Default: 8

hide-th-loading

Strip out the “Loading ...” lines from GHC build output, produced when using Template Haskell

Default: true

latest-snapshot-url

URL providing a JSON with information on the latest LTS and Nightly snapshots, used for automatic project configuration.

Default: https://www.stackage.org/download/snapshots.json

local-bin-path

Target directory for stack install and stack build --copy-bins.

Default: ~/.local/bin
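
Putting a few of these simpler non-project options together, a ~/.stack/config.yaml might contain (the values shown are just the documented defaults):

connection-count: 8
hide-th-loading: true
latest-snapshot-url: https://www.stackage.org/download/snapshots.json
local-bin-path: ~/.local/bin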

package-indices

package-indices:
- name: Hackage
  download-prefix: https://s3.amazonaws.com/hackage.fpcomplete.com/package/

  # at least one of the following must be present
  git: https://github.com/commercialhaskell/all-cabal-hashes.git
  http: https://s3.amazonaws.com/hackage.fpcomplete.com/00-index.tar.gz

  # optional fields, both default to false
  gpg-verify: false
  require-hashes: false

One thing you should be aware of: if you change the contents of a package-version combination by setting a different package index, this can affect other projects, since packages are installed into your shared snapshot database.

system-ghc

Enables or disables using the GHC available on the PATH. Useful to disable if you want to force stack to use its own installed GHC (via stack setup), in cases where your system GHC may be incomplete for some reason. Default is true.

# Turn off system GHC
system-ghc: false

install-ghc

Whether or not to automatically install GHC when necessary. Default is false, which means stack will prompt you to run stack setup as needed.

skip-ghc-check

Should we skip the check to confirm that your system GHC version (on the PATH) matches what your project expects? Default is false.
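
For example, to have stack install GHC automatically while keeping the GHC version check enabled:

install-ghc: true
skip-ghc-check: false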

require-stack-version

Require a version of stack within the specified range (cabal-style) to be used for this project. Example: require-stack-version: "== 0.1.*"

Default: "-any"

arch/os

Set the architecture and operating system for GHC, build directories, etc. Values are those recognized by Cabal, e.g.:

arch: i386, x86_64
os: windows, linux

You likely only ever want to change the arch value. This can also be set via the command line.

extra-include-dirs/extra-lib-dirs

A list of extra paths to be searched for header files and libraries, respectively. Paths should be absolute.

extra-include-dirs:
- /opt/foo/include
extra-lib-dirs:
- /opt/foo/lib

compiler-check

(Since 0.1.4)

Specifies how the compiler version in the resolver is matched against concrete versions. Valid values:

  • match-minor: make sure that the first three components match, but allow patch-level differences. For example, 7.8.4.1 and 7.8.4.2 would both match 7.8.4. This is useful to allow for custom patch levels of a compiler. This is the default.
  • match-exact: the entire version number must match precisely.
  • newer-minor: the third component can be increased, e.g. if your resolver is ghc-7.10.1, then 7.10.2 will also be allowed. This was the default up through stack 0.1.3.

compiler

(Since 0.1.7)

Overrides the compiler version in the resolver. Note that the compiler-check flag also applies to the version numbers. This uses the same syntax as compiler resolvers, like ghc-7.10.2 or ghcjs-0.1.0.20150924_ghc-7.10.2 (the version used for the ‘old-base’ version of GHCJS). While it’s useful to override the compiler for a variety of reasons, the main use case is to use GHCJS with a Stackage snapshot, like this:

resolver: lts-3.10
compiler: ghcjs-0.1.0.20150924_ghc-7.10.2
compiler-check: match-exact

ghc-options

(Since 0.1.4)

Allows specifying per-package and global GHC options:

ghc-options:
    # All packages
    "*": -Wall
    some-package: -DSOME_CPP_FLAG

Caveat emptor: setting options like this will affect your snapshot packages, which can lead to unpredictable behavior versus official Stackage snapshots. This is in contrast to the ghc-options command line flag, which will only affect local packages.

ghc-variant

(Since 0.1.5)

Specify a variant binary distribution of GHC to use. Known values:

  • standard: This is the default, uses the standard GHC binary distribution
  • gmp4: Use the “centos6” GHC bindist, for Linux systems with libgmp4 (aka libgmp.so.3), such as CentOS 6. This variant will be used automatically on such systems; you should not need to specify it in the configuration
  • integersimple: Use a GHC bindist that uses integer-simple instead of GMP
  • any other value: Use a custom GHC bindist. You should specify setup-info so stack setup knows where to download it, or pass the stack setup --ghc-bindist argument on the command-line
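
For example, to use the integer-simple based bindist:

ghc-variant: integersimple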

setup-info

(Since 0.1.5)

Allows overriding from where tools like GHC and msys2 (on Windows) are downloaded. Most useful for specifying locations of custom GHC binary distributions (for use with the ghc-variant option):

setup-info:
  ghc:
    windows32-custom-foo:
      7.10.2:
        url: "https://example.com/ghc-7.10.2-i386-unknown-mingw32-foo.tar.xz"

pvp-bounds

(Since 0.1.5)

When using the sdist and upload commands, this setting determines whether the cabal file’s dependencies should be modified to reflect PVP lower and upper bounds. Values are none (unchanged), upper (add upper bounds), lower (add lower bounds), and both (add upper and lower bounds). The algorithm it follows is:

  • If an upper or lower bound already exists on a dependency, it’s left alone
  • When adding a lower bound, we look at the current version specified by stack.yaml, and set it as the lower bound (e.g., foo >= 1.2.3)
  • When adding an upper bound, we require less than the next major version (e.g., foo < 1.3)

pvp-bounds: none

For more information, see the announcement blog post.
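
As an illustration (using the hypothetical package foo from the bullets above, resolved to version 1.2.3), pvp-bounds: both would rewrite a bare dependency in the cabal file roughly as follows:

-- before sdist / upload:
build-depends: foo
-- after, with pvp-bounds: both:
build-depends: foo >= 1.2.3 && < 1.3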

modify-code-page

(Since 0.1.6)

Modify the code page for UTF-8 output when running on Windows. Default behavior is to modify.

modify-code-page: false

explicit-setup-deps

(Since 0.1.6)

Decide whether a custom Setup.hs script should be run with an explicit list of dependencies, based on the dependencies of the package itself. It associates the name of a local package with a boolean. When it’s true, the Setup.hs script is built with an explicit list of packages. When it’s false (default), the Setup.hs script is built without access to the local DB, but can access any package in the snapshot / global DB.

Note that in the future, this will be unnecessary, once Cabal provides full support for explicit Setup.hs dependencies.

explicit-setup-deps:
    "*": true # change the default
    entropy: false # override the new default for one package

rebuild-ghc-options

(Since 0.1.6)

Should we rebuild a package when its GHC options change? Before 0.1.6, this was a non-configurable true. However, in most cases, the flag is used to affect optimization levels and warning behavior, for which GHC itself doesn’t actually recompile the modules anyway. Therefore, the new behavior is to not recompile on an options change, but this behavior can be changed back with the following:

rebuild-ghc-options: true

apply-ghc-options

(Since 0.1.6)

Which packages do ghc-options on the command line get applied to? Before 0.1.6, the default value was targets.

apply-ghc-options: locals # all local packages, the default
# apply-ghc-options: targets # all local packages that are targets
# apply-ghc-options: everything # applied even to snapshot and extra-deps

Note that everything is a slightly dangerous value, as it can break invariants about your snapshot database.

allow-newer

(Since 0.1.7)

Ignore version bounds in .cabal files. Default is false.

allow-newer: true

Note that this also ignores lower bounds. The name “allow-newer” is chosen to match the commonly used cabal option.

templates

Templates used with stack new have a number of parameters that affect the generated code. These can be set for all new projects you create. Their effect can be observed in the generated LICENSE and cabal files.

The 5 parameters are: author-email, author-name, category, copyright and github-username.

  • author-email - sets the maintainer property in cabal
  • author-name - sets the author property in cabal and the name used in LICENSE
  • category - sets the category property in cabal. This is used in Hackage. For examples of categories see Packages by category. It makes sense for category to be set on a per project basis because it is uncommon for all projects a user creates to belong to the same category. The category can be set per project by passing -p "category:value" to the stack new command.
  • copyright - sets the copyright property in cabal. It is typically the name of the holder of the copyright on the package and the year(s) from which copyright is claimed. For example: Copyright: (c) 2006-2007 Joe Bloggs
  • github-username - used to generate homepage and source-repository in cabal. For instance, github-username: myusername and stack new my-project new-template would result in:
homepage: http://github.com/myusername/my-project#readme

source-repository head
  type: git
  location: https://github.com/myusername/my-project

These properties can be set in config.yaml as follows:

templates:
  params:
    author-name: Your Name
    author-email: youremail@example.com
    category: Your Projects Category
    copyright: 'Copyright: (c) 2015 Your Name'
    github-username: yourusername

Architecture

Terminology

  • Package identifier: a package name and version, e.g. text-1.2.1.0
  • GhcPkgId: a package identifier plus the unique hash for the generated binary, e.g. text-1.2.1.0-bb83023b42179dd898ebe815ada112c2
  • Package index: a collection of packages available for download. This is a combination of an index containing all of the .cabal files (either a tarball downloaded via HTTP(S) or a Git repository) and some way to download package tarballs.
    • By default, stack uses a single package index (the Github/S3 mirrors of Hackage), but supports customization and adding more than one index
  • Package database: a collection of metadata about built libraries
  • Install root: a destination for installing packages into. Contains a bin path (for generated executables), lib (for the compiled libraries), pkgdb (for the package database), and a few other things
  • Snapshot: an LTS Haskell or Stackage Nightly snapshot, which gives information on a complete set of packages. This contains a lot of metadata, but importantly it can be converted into a mini build plan
  • Mini build plan: a collection of package identifiers and their build flags that are known to build together
  • Resolver: the means by which stack resolves dependencies for your packages. The two currently supported options are snapshot (using LTS or Nightly), and GHC (which installs no extra dependencies). Others may be added in the future (such as a SAT-based dependency solver). These packages are always taken from a package index
  • extra-deps: additional packages to be taken from the package index for dependencies. This list will shadow packages provided by the resolver
  • Local packages: source code actually present on your file system, and referred to by the packages field in your stack.yaml file. Each local package has exactly one .cabal file
  • Project: a stack.yaml config file and all of the local packages it refers to.

Databases

Every build uses three distinct install roots, which means three separate package databases and bin paths. These are:

  • Global: the packages that ship with GHC. We never install anything into this database
  • Snapshot: a database shared by all projects using the same snapshot. Packages installed in this database must use the exact same dependencies and build flags as specified in the snapshot, and cannot be affected by user flags, ensuring that one project cannot corrupt another. There are two caveats to this:
    • If different projects use different package indices, then their definitions of what the package foo-1.2.3 is may differ, in which case they can corrupt each other’s shared databases. This is warned about in the FAQ
    • Turning on profiling may cause a package to be recompiled, which will result in a different GhcPkgId
  • Local: extra-deps, local packages, and snapshot packages which depend on them (more on that in shadowing)
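
You can inspect where these databases live for a project with stack path; for example (the flag names follow the keys printed by stack path and may differ between stack versions):

stack path --global-pkg-db      # GHC's global package database
stack path --snapshot-pkg-db    # the shared snapshot package database
stack path --local-pkg-db       # the project-local package database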

Building

Shadowing

Every project must have precisely one version of each package. If one of your local packages or extra dependencies conflicts with a package in the snapshot, the local/extra-dep shadows the snapshot version. The way this works is:

  • The package is removed from the list of packages in the snapshot
  • Any package that depends on that package (directly or indirectly) is moved from the snapshot to extra-deps, so that it is available to your packages as dependencies.
    • Note that there is no longer any guarantee that this package will build, since you’re using an untested dependency

After shadowing, you end up with what is called internally a SourceMap, which is Map PackageName PackageSource, where a PackageSource can be either a local package, or a package taken from a package index (specified as a version number and the build flags).
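
A rough Haskell sketch of that shape (the type names here are simplified and are not the exact definitions in the stack codebase):

module SourceMapSketch where

import Data.Map (Map)

newtype PackageName = PackageName String deriving (Eq, Ord, Show)
newtype FlagName    = FlagName String    deriving (Eq, Ord, Show)
newtype Version     = Version [Int]      deriving (Eq, Ord, Show)

-- A local package: source present on disk, referenced by the packages field.
newtype LocalPackage = LocalPackage FilePath deriving Show

-- After shadowing, every package is either local or comes from a package index.
data PackageSource
  = PSLocal LocalPackage                -- local package or extra-dep checked out on disk
  | PSIndex Version (Map FlagName Bool) -- a version plus build flags from a package index
  deriving Show

type SourceMap = Map PackageName PackageSource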

Installed packages

Once you have a SourceMap, you can inspect your three available databases and decide which of the installed packages you wish to use from them. We move from the global, to snapshot, and finally local, with the following rules:

  • If we require profiling, and the library does not provide profiling, do not use it
  • If the package is in the SourceMap, but belongs to a different database, or has a different version, do not use it
  • If after the above two steps, any of the dependencies are unavailable, do not use it
  • Otherwise: include the package in the list of installed packages

We do something similar for executables, but maintain our own database of installed executables, since GHC does not track them for us.

Plan construction

When running a build, we know which packages we want installed (inventively called “wanteds”), which packages are available to install, and which are already installed. In plan construction, we put this information together to decide which packages must be built. The code in Stack.Build.ConstructPlan is authoritative on this and should be consulted. The basic idea though is:

  • If any of the dependencies have changed, reconfigure and rebuild
  • If a local package has any files changed, rebuild (but don’t bother reconfiguring)
  • If a local package is wanted and we’re running tests or benchmarks, run the test or benchmark even if the code and dependencies haven’t changed

Plan execution

Once we have the plan, execution is a relatively simple process of calling runghc Setup.hs in the correct order with the correct parameters. See Stack.Build.Execute for more information.
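
For a standard package this boils down to the usual Setup.hs steps, heavily simplified here (real invocations pass many additional flags, package databases, and install paths):

runghc Setup.hs configure    # with the resolved flags and dependency databases
runghc Setup.hs build
runghc Setup.hs copy         # copy the results into the appropriate install root
runghc Setup.hs register     # register the library in the matching package database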

Configuration

stack has two layers of configuration: project and non-project. All of these are stored in stack.yaml files, but the former has extra fields (resolver, packages, extra-deps, and flags). The latter can be monoidally combined so that a system config file provides defaults, which a user can override with ~/.stack/config.yaml, and a project can further customize. In addition, environment variables STACK_ROOT and STACK_YAML can be used to tweak where stack gets its configuration from.

stack follows a simple algorithm for finding your project configuration file: start in the current directory, and keep going to the parent until it finds a stack.yaml. When using stack ghc or stack exec as mentioned above, you’ll sometimes want to override that behavior and point to a specific project in order to use its databases and bin directories. To do so, simply set the STACK_YAML environment variable to point to the relevant stack.yaml file.
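
For example (the path and executable name here are hypothetical):

export STACK_YAML=~/src/other-project/stack.yaml
stack exec -- other-project-exe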

Snapshot auto-detection

When you run stack build with no stack.yaml, it will create a basic configuration with a single package (the current directory) and an auto-detected snapshot. The algorithm it uses for selecting this snapshot is:

  • Try the latest two LTS major versions at their most recent minor version release, and the most recent Stackage Nightly. For example, at the time of writing, this would be lts-2.10, lts-1.15, and nightly-2015-05-26
  • For each of these, test the version bounds in the package’s .cabal file to see if they are compatible with the snapshot, choosing the first one that matches
  • If no snapshot matches, use the most recent LTS snapshot, even though it will not compile

If you end up in the no compatible snapshot case, you typically have three options to fix things:

  • Manually specify a different snapshot that you know to be compatible. If you can do that, great, but typically if the auto-detection fails, it means that there’s no compatible snapshot
  • Modify version bounds in your .cabal file to be compatible with the selected snapshot
  • Add extra-deps to your stack.yaml file to fix compatibility problems

Remember that running stack build will give you information on why your build cannot occur, which should help guide you through the steps necessary for the second and third option above. Also, note that those options can be mixed-and-matched, e.g. you may decide to relax some version bounds in your .cabal file, while also adding some extra-deps.

Explicit breakage

As mentioned above, updating your package indices will not cause stack to invalidate any existing package databases. That’s because stack is always explicit about build plans, via:

  1. the selected snapshot
  2. the extra-deps
  3. local packages

The only way to change a plan for packages to be installed is by modifying one of the above. This means that breakage of a set of installed packages is an explicit and contained activity. Specifically, you get the following guarantees:

  • Since snapshots are immutable, the snapshot package database will not be invalidated by any action. If you change the snapshot you’re using, however, you may need to build those packages from scratch.
  • If you modify your extra-deps, stack may need to unregister and reinstall them.
  • Any changes to your local packages trigger a rebuild of that package and its dependencies.