Nixpkgs CI evaluation

The code in this directory is used by the eval.yml GitHub Actions workflow to evaluate the majority of Nixpkgs for all PRs, effectively making sure that when the development branches are processed by Hydra, no evaluation failures are encountered.

Furthermore, it allows local evaluation using

nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 2 \
  --arg chunkSize 10000 \
  --arg evalSystems '["x86_64-linux" "aarch64-darwin"]'

  • --max-jobs: The maximum number of derivations to run at the same time. Each supported system gets its own derivation, so it doesn't make sense to set this higher than the number of systems being evaluated.
  • --cores: The number of cores to use for each job. It is recommended to set this to the number of cores on your system divided by --max-jobs.
  • chunkSize: The number of attributes that are evaluated simultaneously on a single core. Lowering this decreases memory usage at the cost of increased evaluation time. If it is set too high, there won't be enough chunks to process in parallel, which also increases evaluation time.
  • evalSystems: The set of systems for which nixpkgs should be evaluated. Defaults to the four official platforms (x86_64-linux, aarch64-linux, x86_64-darwin and aarch64-darwin). An example restricted to a single system is shown after this list.
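
For instance, to evaluate only for the machine you are working on, evalSystems can be narrowed to a single entry. This is an illustrative invocation, assuming an 8-core, 32GB x86_64-linux machine; adjust the system string and core count to your hardware:

nix-build ci -A eval.full \
  --max-jobs 1 \
  --cores 8 \
  --arg chunkSize 10000 \
  --arg evalSystems '["x86_64-linux"]'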

A good default is to set chunkSize to 10000, which leads to about 3.6GB of peak memory usage per core, making it suitable for fully utilising machines with 4 cores and 16GB memory, 8 cores and 32GB memory or 16 cores and 64GB memory.

Note that 16GB of memory is the recommended minimum; with less than 8GB, evaluation time suffers greatly.
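
As a worked example of this guidance (hypothetical sizing for a 16-core, 64GB machine): chunkSize 10000 at roughly 3.6GB per core peaks around 16 × 3.6GB ≈ 58GB, and with the four default systems --max-jobs 4 combined with --cores 4 keeps all 16 cores busy:

nix-build ci -A eval.full \
  --max-jobs 4 \
  --cores 4 \
  --arg chunkSize 10000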