Without `-n`, gzip stores the input file's modification timestamp in the gzip header, which leaks the build timestamp into the output and breaks reproducibility.
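A quick way to see the effect (illustrative; the file names are made up):

```shell
# With -n, the compressed bytes do not depend on the input file's mtime,
# so two builds of the same content produce identical output.
printf 'hello\n' > demo.txt
gzip -cn demo.txt > a.gz
touch -d '2001-01-01 00:00:00' demo.txt   # pretend a different build time
gzip -cn demo.txt > b.gz
cmp -s a.gz b.gz && echo reproducible
```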
This fixes #434930, a regression introduced in c5252e1 / #406922.
Otherwise systemd fails to build: autoPatchelfHook patches libraries against the debug objects, creating a circular dependency from out to debug (and debug, of course, already depends on out).
Example of the faulty behavior:

```
searching for dependencies of /nix/store/c8cd7b82py02f0rkags351nhg82wwjm6-systemd-minimal-257.5/bin/systemd-delta
  libsystemd-shared-257.so -> found: /nix/store/icpqrawjhsw4fbi4i2hp7cxvf3kbzq7m-systemd-minimal-257.5-debug/lib/debug
  libc.so.6 -> found: /nix/store/7lyblga0bzjk0sl93vp4aiwbvb57vp2x-glibc-2.40-66/lib
```
Debuginfod support must be able to map a build-id to:
- the debug symbols
- the original ELF file from which the debug symbols were separated
- the corresponding source files
Currently, hydra provides an index from build-id to the nar of the debug
output containing the debug symbols.
Add symlinks in these outputs so that we can recover the store path of
the source and original elf file. We can then fetch them by the normal
binary cache protocol.
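A minimal sketch of the idea (the link names and paths below are illustrative, not necessarily the exact layout the hook emits):

```shell
# Illustrative only: build-id, output and source paths are placeholders.
debug=$(mktemp -d)                       # stands in for the $debug output
out=/nix/store/eeee-example-1.0          # stands in for the $out output
src=/nix/store/ssss-example-1.0-source   # stands in for $src
id=abcdef1234567890
dir="$debug/lib/debug/.build-id/${id:0:2}"
mkdir -p "$dir"
: > "$dir/${id:2}.debug"                 # separated debug symbols (what hydra indexes)
# Symlinks from which the store paths of the original ELF file and the
# sources can be recovered, then fetched via the binary cache protocol:
ln -s "$out/bin/example" "$dir/${id:2}.executable"
ln -s "$src" "$dir/${id:2}.source"
readlink "$dir/${id:2}.executable" "$dir/${id:2}.source"
```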
About source files: to minimize storage demands, in the ideal case software would be built from the source store path $src, and the debuginfod server would just have to serve source files from this store path. In practice, source files are sometimes patched as part of the build. This commit stores the modified files in the debug output, in a so-called source overlay, so that the debuginfod server can serve the patched content of those files.
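The lookup a debuginfod server would then perform can be sketched like this (the `source-overlay` directory name and `resolve_source` helper are hypothetical):

```shell
# Hypothetical layout: patched sources live under $debug_out/source-overlay,
# mirroring their path relative to $src.
resolve_source() {
  local debug_out=$1 src=$2 rel=$3
  if [ -e "$debug_out/source-overlay/$rel" ]; then
    cat "$debug_out/source-overlay/$rel"   # file was patched during the build
  else
    cat "$src/$rel"                        # unmodified, served from $src directly
  fi
}

# Demo with throwaway directories:
dbg=$(mktemp -d); srcdir=$(mktemp -d)
mkdir -p "$dbg/source-overlay"
echo original > "$srcdir/main.c"
echo patched  > "$dbg/source-overlay/main.c"   # shadowed by the overlay
echo plain    > "$srcdir/util.c"               # served straight from the source
resolve_source "$dbg" "$srcdir" main.c
resolve_source "$dbg" "$srcdir" util.c
```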
The checksum algorithm was chosen as follows (`big` is 4 GB of zeros):
```
$ hyperfine -L s sysv,bsd,crc,sha1,sha224,sha256,sha384,sha512,blake2b,sm3 'cksum -a {s} big'
Benchmark 1: cksum -a sysv big
  Time (mean ± σ):     854.5 ms ± 270.5 ms    [User: 245.3 ms, System: 601.8 ms]
  Range (min … max):   760.5 ms … 1623.8 ms    10 runs

  Warning: The first benchmarking run for this command was significantly slower than the rest (1.624 s). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.

Benchmark 2: cksum -a bsd big
  Time (mean ± σ):      5.838 s ±  0.045 s    [User: 5.118 s, System: 0.693 s]
  Range (min … max):    5.767 s …  5.897 s    10 runs

Benchmark 3: cksum -a crc big
  Time (mean ± σ):     829.9 ms ±  28.6 ms    [User: 274.5 ms, System: 551.0 ms]
  Range (min … max):   803.2 ms … 904.8 ms    10 runs

Benchmark 4: cksum -a sha1 big
  Time (mean ± σ):      2.553 s ±  0.010 s    [User: 1.912 s, System: 0.631 s]
  Range (min … max):    2.543 s …  2.575 s    10 runs

Benchmark 5: cksum -a sha224 big
  Time (mean ± σ):      2.716 s ±  0.018 s    [User: 2.054 s, System: 0.645 s]
  Range (min … max):    2.695 s …  2.743 s    10 runs

Benchmark 6: cksum -a sha256 big
  Time (mean ± σ):      2.751 s ±  0.029 s    [User: 2.057 s, System: 0.674 s]
  Range (min … max):    2.712 s …  2.812 s    10 runs

Benchmark 7: cksum -a sha384 big
  Time (mean ± σ):      5.600 s ±  0.049 s    [User: 4.820 s, System: 0.753 s]
  Range (min … max):    5.515 s …  5.683 s    10 runs

Benchmark 8: cksum -a sha512 big
  Time (mean ± σ):      5.543 s ±  0.021 s    [User: 4.751 s, System: 0.768 s]
  Range (min … max):    5.523 s …  5.579 s    10 runs

Benchmark 9: cksum -a blake2b big
  Time (mean ± σ):      5.091 s ±  0.025 s    [User: 4.306 s, System: 0.764 s]
  Range (min … max):    5.048 s …  5.125 s    10 runs

Benchmark 10: cksum -a sm3 big
  Time (mean ± σ):     14.220 s ±  0.120 s    [User: 13.376 s, System: 0.783 s]
  Range (min … max):   14.077 s … 14.497 s    10 runs

Summary
  cksum -a crc big ran
    1.03 ± 0.33 times faster than cksum -a sysv big
    3.08 ± 0.11 times faster than cksum -a sha1 big
    3.27 ± 0.11 times faster than cksum -a sha224 big
    3.31 ± 0.12 times faster than cksum -a sha256 big
    6.13 ± 0.21 times faster than cksum -a blake2b big
    6.68 ± 0.23 times faster than cksum -a sha512 big
    6.75 ± 0.24 times faster than cksum -a sha384 big
    7.03 ± 0.25 times faster than cksum -a bsd big
   17.13 ± 0.61 times faster than cksum -a sm3 big
```
Unfortunately, crc (and sysv) are not supported by `--check`, so they are disqualified. sha1, sha224 and sha256 are essentially as fast as one another, so let's use a non-broken one, even though cryptographic strength is not needed here.
The no-broken-symlinks hook does not fail if bad links point outside the store, but /build is also special for Nix derivations: it is the build directory in the builder's mount namespace.
No link in an output derivation should point into /build, so also error on links into that directory (detected via $TMPDIR, which defaults to it).
Closes #410508
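A link like the following is what the stricter check now rejects (a sketch with invented paths; the real hook inspects $TMPDIR inside the sandbox):

```shell
# A symlink in an output that points into the build directory is dangling
# once the build sandbox is gone.
build_dir=$(mktemp -d)     # stands in for /build ($TMPDIR in the sandbox)
outdir=$(mktemp -d)        # stands in for an output path
ln -s "$build_dir/generated.conf" "$outdir/etc-link"
case "$(readlink "$outdir/etc-link")" in
  "$build_dir"/*) echo "ERROR: symlink points into the build directory" ;;
esac
```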
These bash helpers make it trivial to write shell-based setup hooks that utilize all cores.
This also makes it simpler to optimize existing hooks which do not yet utilize all cores.
Existing hooks that already use `xargs -P` to parallelize their work can be optimized further by replacing the xargs call with one of the functions added here, e.g. `parallelRun` or `parallelMap`.
The new shell-based functions `parallelRun` and `parallelMap` are superior to `xargs -P` because:
- They perform better: they launch $NIX_BUILD_CORES workers, each handling many jobs, whereas `xargs -P` usually launches a new process for each job (anything else is difficult to implement nicely with xargs).
- Workers can be defined as shell functions, which allows using all declared shell variables and functions inside the worker (e.g. isElf or isScript), whereas `xargs -P` forces the user to spawn a new shell process, which cannot reuse declared variables and functions.
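The underlying pattern can be sketched as follows (a simplified stand-in, not the actual `parallelRun` implementation):

```shell
# Each worker is a plain shell function, so it sees all declared variables
# and functions, and each worker handles many items instead of one process
# being spawned per item.
work() {
  while IFS= read -r item; do
    echo "processed $item"
  done
}
items=(alpha beta gamma delta epsilon zeta)
cores=${NIX_BUILD_CORES:-2}
for ((w = 0; w < cores; w++)); do
  # round-robin the items onto per-worker pipes, run the workers in parallel
  for ((i = w; i < ${#items[@]}; i += cores)); do
    printf '%s\n' "${items[i]}"
  done | work &
done
wait
```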
Improvement:
The checks calling patchelf and grep are now executed in parallel across $NIX_BUILD_CORES workers.
Tradeoff:
One more sub-shell is spawned per file, but only if that file is a script or an ELF file.
Decreases the time spent gzipping man pages.
Decreases the number of processes launched per file from 2 to 1.
Launches multiple processes in parallel via `xargs -P`.
The behavior of the hook is unchanged.
`gzip -f` is now needed to retain the behavior of compressing hardlinks; previously `-f` was not needed because gzip compressed to stdout.
This also removes the check for whether gzip failed, because there is no reason it should ever fail, and even if it does, we probably want to fix the issue instead of silently not gzipping.
That check was introduced in c06046e5ef with no explanation of why it would be necessary.
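The resulting pattern looks roughly like this (the directory, file names and batch size are illustrative, not the hook's exact code):

```shell
# Compress a tree of man pages in parallel: one gzip invocation per batch of
# files instead of two processes per file. -n keeps timestamps out of the
# output; -f is what allows compressing files that have multiple hard links.
mandir=$(mktemp -d)
echo page > "$mandir/foo.1"
ln "$mandir/foo.1" "$mandir/foo-alias.1"   # hardlink: this is why -f is needed
echo page > "$mandir/bar.5"
find "$mandir" -type f -print0 \
  | xargs -0 -r -n 64 -P "${NIX_BUILD_CORES:-4}" gzip -9 -n -f
ls "$mandir" | sort
```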
Without this change the build of the upcoming `sqlite-3.49.0` fails with:
> Error: Unknown option --oldincludedir
> Try: 'configure --help' for options
Looking at https://www.gnu.org/prep/standards/html_node/Directory-Variables.html
it appears to be something that predates gcc and should be an alias for
`--includedir=`.
Let's just drop the setting of `--oldincludedir=` (and the `cmake` equivalent).
If `patchShebangs` encounters an `env -S` interpreter line with only one
argument following, it duplicates that argument and most likely invalidates
the resulting interpreter line.
Reproducer:
```nix
(import <nixpkgs> {}).writeTextFile {
  name = "patch-shebangs-env-s";
  text = ''
    #!/bin/env -S bash
  '';
  executable = true;
  checkPhase = ''
    patchShebangs $out
  '';
}
```
The resulting file would contain
```
#!/nix/store/pw…fk-coreutils-9.5/bin/env -S /nix/store/4f…g60-bash-5.2p37/bin/bash bash
```
instead of the correct
```
#!/nix/store/pw…fk-coreutils-9.5/bin/env -S /nix/store/4f…g60-bash-5.2p37/bin/bash
```