cudaPackages: build redists from manifests and add CUDA 13 (#437723)
@@ -26,23 +26,9 @@ All CUDA package sets include common CUDA packages like `libcublas`, `cudnn`, `t
CUDA support is not enabled by default in Nixpkgs. To enable CUDA support, make sure Nixpkgs is imported with a configuration similar to the following:

```nix
{ pkgs }:
{
  allowUnfreePredicate = pkgs._cuda.lib.allowUnfreeCudaPredicate;
  cudaCapabilities = [ <target-architectures> ];
  cudaForwardCompat = true;
  cudaSupport = true;
}
```
@@ -63,45 +49,23 @@ The `cudaForwardCompat` boolean configuration option determines whether PTX supp

### Modifying CUDA package sets {#cuda-modifying-cuda-package-sets}

CUDA package sets are defined in `pkgs/top-level/cuda-packages.nix`. A CUDA package set is created by `callPackage`-ing `pkgs/development/cuda-modules/default.nix` with an attribute set `manifests`, containing NVIDIA manifests for each redistributable. The manifests for supported redistributables are available through `_cuda.manifests` and live in `pkgs/development/cuda-modules/_cuda/manifests`.

::: {.caution}
The `cudaMajorMinorVersion` and `_cuda` attributes are not part of the CUDA package set fixed-point, but are instead provided by `callPackage` from the top-level in the construction of the package set. As such, they must be modified via the package set's `override` attribute.
:::

The majority of the CUDA package set tooling is available through the top-level attribute set `_cuda`, a fixed-point defined outside the CUDA package sets. As a fixed-point, `_cuda` should be modified through its `extend` attribute.

::: {.caution}
As indicated by the underscore prefix, `_cuda` is an implementation detail and no guarantees are provided with respect to its stability or API. The `_cuda` attribute set is exposed only to ease creation or modification of CUDA package sets by expert, out-of-tree users.
:::

Out-of-tree modifications of packages should use `overrideAttrs` to make any necessary modifications to the package expression.

::: {.note}
The `_cuda` attribute set previously exposed `fixups`, an attribute set mapping package names (`pname`) to `callPackage`-compatible expressions which were provided to `overrideAttrs` on the result of a generic redistributable builder. This functionality has been removed in favor of including full package expressions for each redistributable package, ensuring consistent attribute set membership across supported CUDA releases, platforms, and configurations.
:::

### Extending CUDA package sets {#cuda-extending-cuda-package-sets}

CUDA package sets are scopes and provide the usual `overrideScope` attribute for overriding package attributes (see the note about `_cuda` in [Configuring CUDA package sets](#cuda-modifying-cuda-package-sets)).

Inspired by `pythonPackagesExtensions`, the `_cuda.extensions` attribute is a list of extensions applied to every version of the CUDA package set, allowing modification of all versions of the CUDA package set without needing to know their names or explicitly enumerate and modify them. As an example, disabling `cuda_compat` across all CUDA package sets can be accomplished with this overlay:

@@ -115,6 +79,8 @@ final: prev: {
}
```

Redistributable packages are constructed by the `buildRedist` helper; see `pkgs/development/cuda-modules/buildRedist/default.nix` for the implementation.

### Using `cudaPackages` {#cuda-using-cudapackages}

::: {.caution}

@@ -194,7 +160,7 @@ In `pkgsForCudaArch`, the `cudaForwardCompat` option is set to `false` because e

::: {.caution}
Not every version of CUDA supports every architecture!

To illustrate: support for Blackwell (e.g., `sm_100`) was added in CUDA 12.8. Assume our Nixpkgs' default CUDA package set is CUDA 12.6. Then the Nixpkgs variant available through `pkgsForCudaArch.sm_100` is useless, since packages like `pkgsForCudaArch.sm_100.opencv` and `pkgsForCudaArch.sm_100.python3Packages.torch` will try to generate code for `sm_100`, an architecture unknown to CUDA 12.6. In that case, you should use `pkgsForCudaArch.sm_100.cudaPackages_12_8.pkgs` instead (see [Using `cudaPackages.pkgs`](#cuda-using-cudapackages-pkgs) for more details).
:::

The `pkgsForCudaArch` attribute set makes it possible to access packages built for a specific architecture without needing to manually call `pkgs.extend` and supply a new `config`. As an example, `pkgsForCudaArch.sm_89.python3Packages.torch` provides PyTorch built for Ada Lovelace GPUs.
@@ -306,82 +272,41 @@ This section of the docs is still very much in progress. Feedback is welcome in

### Package set maintenance {#cuda-package-set-maintenance}

The CUDA Toolkit is a suite of CUDA libraries and software meant to provide a development environment for CUDA-accelerated applications. Until the release of CUDA 11.4, NVIDIA had only made the CUDA Toolkit available as a multi-gigabyte runfile installer. From CUDA 11.4 onwards, NVIDIA has also provided CUDA redistributables (“CUDA-redist”): individually packaged CUDA Toolkit components meant to facilitate redistribution and inclusion in downstream projects. These packages are available in the [`cudaPackages`](https://search.nixos.org/packages?channel=unstable&type=packages&query=cudaPackages) package set.

While the monolithic CUDA Toolkit runfile installer is no longer provided, [`cudaPackages.cudatoolkit`](https://search.nixos.org/packages?channel=unstable&type=packages&query=cudaPackages.cudatoolkit) provides a `symlinkJoin`-ed approximation containing common libraries. The use of [`cudaPackages.cudatoolkit`](https://search.nixos.org/packages?channel=unstable&type=packages&query=cudaPackages.cudatoolkit) is discouraged: all new projects should use the CUDA redistributables available in [`cudaPackages`](https://search.nixos.org/packages?channel=unstable&type=packages&query=cudaPackages) instead, as they are much easier to maintain and update.

#### Updating redistributables {#cuda-updating-redistributables}

Whenever a new version of a redistributable manifest is made available:

1. Check the corresponding README.md in `pkgs/development/cuda-modules/_cuda/manifests` for the URL to use when vendoring manifests.
2. Update the manifest version used in the construction of each CUDA package set in `pkgs/top-level/cuda-packages.nix`.
3. Update package expressions in `pkgs/development/cuda-modules/packages`.

Updating package expressions amounts to:

- adding fixes conditioned on newer releases, like added or removed dependencies
- adding package expressions for new packages
- updating `passthru.brokenConditions` and `passthru.badPlatformsConditions` with various constraints (e.g., new releases removing support for various architectures)

#### Updating supported compilers and GPUs {#cuda-updating-supported-compilers-and-gpus}

1. Update `nvccCompatibilities` in `pkgs/development/cuda-modules/_cuda/db/bootstrap/nvcc.nix` to include the newest release of NVCC, as well as any newly supported host compilers.
2. Update `cudaCapabilityToInfo` in `pkgs/development/cuda-modules/_cuda/db/bootstrap/cuda.nix` to include any new GPUs supported by the new release of CUDA.

#### Updating the CUDA package set {#cuda-updating-the-cuda-package-set}

1. Include a new `cudaPackages_<major>_<minor>` package set in `pkgs/top-level/cuda-packages.nix` and inherit it in `pkgs/top-level/all-packages.nix`.

::: {.note}
Changing the default CUDA package set should occur in a separate PR, allowing time for additional testing.
:::

::: {.warning}
As described in [Using `cudaPackages.pkgs`](#cuda-using-cudapackages-pkgs), the current fix for package set leakage involves creating a new Nixpkgs instance for each non-default CUDA package set. As such, we should limit the number of CUDA package sets which have `recurseForDerivations` set to `true`: `lib.recurseIntoAttrs` should only be applied to the default CUDA package set.
:::

2. Successfully build the closure of the new package set, updating expressions in `pkgs/development/cuda-modules/packages` as needed. Below are some common failures:

| Unable to ... | During ... | Reason | Solution | Note |
| -------------- | -------------------------------- | ------------------------------------------------ | -------------------------- | ------------------------------------------------------------ |

@@ -389,6 +314,13 @@ To ensure packages relying on the CUDA Toolkit runfile installer continue to bui

| Find libraries | `configurePhase` | Missing dependency on a `dev` output | Add the missing dependency | The `dev` output typically contains CMake configuration files |
| Find libraries | `buildPhase` or `patchelf` | Missing dependency on a `lib` or `static` output | Add the missing dependency | The `lib` or `static` output typically contains the libraries |

::: {.note}
Two utility derivations ease testing updates to the package set:

- `cudaPackages.tests.redists-unpacked`: the `src` of each redistributable package unpacked and `symlinkJoin`-ed
- `cudaPackages.tests.redists-installed`: each output of each redistributable package `symlinkJoin`-ed
:::

Failure to run the resulting binary is typically the most challenging to diagnose, as it may involve a combination of the aforementioned issues. This type of failure typically occurs when a library attempts to load or open a library it depends on that it does not declare in its `DT_NEEDED` section. Try the following debugging steps:

1. First ensure that dependencies are patched with [`autoAddDriverRunpath`](https://search.nixos.org/packages?channel=unstable&type=packages&query=autoAddDriverRunpath).
@@ -58,9 +58,8 @@
  ],
  "cuda-updating-redistributables": [
    "index.html#cuda-updating-redistributables",
    "index.html#updating-cuda-redistributables"
  ],
  "cuda-updating-cutensor": [
    "index.html#updating-cuda-redistributables",
    "index.html#updating-the-cuda-toolkit",
    "index.html#cuda-updating-cutensor",
    "index.html#updating-cutensor"
  ],

@@ -72,10 +71,6 @@

    "index.html#cuda-updating-the-cuda-package-set",
    "index.html#updating-the-cuda-package-set"
  ],
  "cuda-updating-the-cuda-toolkit": [
    "index.html#cuda-updating-the-cuda-toolkit",
    "index.html#updating-the-cuda-toolkit"
  ],
  "cuda-user-guide": [
    "index.html#cuda-user-guide"
  ],
@@ -44,7 +44,10 @@ stdenv.mkDerivation (finalAttrs: {
buildInputs = [
  bluez
  libnotify
  # NOTE: Specifically not using lib.getOutput here because it would select the out output of opencv, which changes
  # semantics since make-derivation uses lib.getDev on the dependency arrays, which won't touch derivations with
  # specified outputs.
  (opencv.cxxdev or opencv)
  qt6.qtbase
  qt6.qtmultimedia
  qt6.qttools
@@ -434,9 +434,7 @@ stdenv'.mkDerivation (finalAttrs: {
# They comment two licenses: GPLv2 and Blender License, but they
# say: "We've decided to cancel the BL offering for an indefinite period."
# OptiX, enabled with cudaSupport, is non-free.
license = with lib.licenses; [ gpl2Plus ] ++ lib.optional cudaSupport nvidiaCudaRedist;

platforms = [
  "aarch64-linux"
@@ -55,7 +55,7 @@ let
]
++ lib.optionals cudaSupport [
  cudatoolkit
  (lib.getOutput "static" cudaPackages.cuda_cudart)
]
++ lib.optional stdenv'.cc.isClang llvmPackages.openmp;
@@ -129,7 +129,7 @@ stdenv'.mkDerivation {
mainProgram = "colmap";
homepage = "https://colmap.github.io/index.html";
license = licenses.bsd3;
platforms = if cudaSupport then platforms.linux else platforms.unix;
maintainers = with maintainers; [
  lebastr
  usertam
@@ -60,7 +60,7 @@ let
in
[
  (lib.cmakeFeature "CUDA${version}_INCLUDE_DIR" "${headers}")
  (lib.cmakeFeature "CUDA${version}_LIBS" "${lib.getOutput "stubs" cudaPackages.cuda_cudart}/lib/stubs/libcuda.so")
  (lib.cmakeFeature "CUDA${version}_STATIC_LIBS" "${lib.getLib cudaPackages.cuda_cudart}/lib/libcudart.so")
  (lib.cmakeFeature "CUDA${version}_STATIC_CUBLAS_LIBS" (
    lib.concatStringsSep ";" [
@@ -6,7 +6,6 @@
  lib,
}:
let
  inherit (lib.lists) last map optionals;
  inherit (lib.trivial) boolToString;
  inherit (config) cudaSupport;
@@ -36,7 +35,11 @@ backendStdenv.mkDerivation {
substituteInPlace gpu_burn-drv.cpp \
  --replace-fail \
    '#define COMPARE_KERNEL "compare.ptx"' \
    '#define COMPARE_KERNEL "${placeholder "out"}/share/compare.ptx"'
substituteInPlace Makefile \
  --replace-fail \
    '${''''${CUDAPATH}/bin/nvcc''}' \
    '${lib.getExe cuda_nvcc}'
'';

nativeBuildInputs = [
@@ -52,7 +55,8 @@ backendStdenv.mkDerivation {
];

makeFlags = [
  # NOTE: CUDAPATH assumes cuda_cudart is a single output containing all of lib, dev, and stubs.
  "CUDAPATH=${cuda_cudart}"
  "COMPUTE=${last (map dropDots cudaCapabilities)}"
  "IS_JETSON=${boolToString isJetsonBuild}"
];
@@ -57,6 +57,14 @@ effectiveStdenv.mkDerivation (finalAttrs: {
  python3Packages.wrapPython
];

postPatch = ''
  nixLog "patching $PWD/Makefile to remove explicit linking against CUDA driver"
  substituteInPlace "$PWD/Makefile" \
    --replace-fail \
      'CUBLASLD_FLAGS = -lcuda ' \
      'CUBLASLD_FLAGS = '
'';

pythonInputs = builtins.attrValues { inherit (python3Packages) tkinter customtkinter packaging; };

buildInputs = [
@@ -23,7 +23,7 @@
  darwinMinVersionHook,
  pythonSupport ? true,
  cudaSupport ? config.cudaSupport,
  ncclSupport ? cudaSupport && cudaPackages.nccl.meta.available,
  withFullProtobuf ? false,
  cudaPackages ? { },
}@inputs:
@@ -154,12 +154,7 @@ effectiveStdenv.mkDerivation rec {
      cudnn # cudnn.h
      cuda_cudart
    ]
    ++ lib.optionals ncclSupport [ nccl ]
  )
  ++ lib.optionals effectiveStdenv.hostPlatform.isDarwin [
    (darwinMinVersionHook "13.3")
@@ -270,7 +265,7 @@ effectiveStdenv.mkDerivation rec {
'';

passthru = {
  inherit cudaSupport cudaPackages ncclSupport; # for the python module
  inherit protobuf;
  tests = lib.optionalAttrs pythonSupport {
    python = python3Packages.onnxruntime;
@@ -123,8 +123,9 @@ stdenv.mkDerivation (finalAttrs: {
# TODO: add UCX support, which is recommended to use with cuda for the most robust OpenMPI build
# https://github.com/openucx/ucx
# https://www.open-mpi.org/faq/?category=buildcuda
# NOTE: Open MPI requires the header files specifically, which are in the `include` output.
(lib.withFeatureAs cudaSupport "cuda" (lib.getOutput "include" cudaPackages.cuda_cudart))
(lib.withFeatureAs cudaSupport "cuda-libdir" "${lib.getLib cudaPackages.cuda_cudart}/lib")
(lib.enableFeature cudaSupport "dlopen")
(lib.withFeatureAs fabricSupport "psm2" (lib.getDev libpsm2))
(lib.withFeatureAs fabricSupport "ofi" (lib.getDev libfabric))
@@ -34,7 +34,7 @@
  # enable internal X11 support via libssh2
  enableX11 ? true,
  enableNVML ? config.cudaSupport,
  cudaPackages,
}:

stdenv.mkDerivation (finalAttrs: {
@@ -110,8 +110,8 @@ stdenv.mkDerivation (finalAttrs: {
++ lib.optionals enableNVML [
  (runCommand "collect-nvml" { } ''
    mkdir $out
    ln -s ${lib.getOutput "include" cudaPackages.cuda_nvml_dev}/include $out/include
    ln -s ${lib.getOutput "stubs" cudaPackages.cuda_nvml_dev}/lib/stubs $out/lib
  '')
];
@@ -15,7 +15,7 @@ inputs@{
  enableSse42 ? stdenv.hostPlatform.sse4_2Support,
}:
let
  inherit (lib.attrsets) getOutput;
  inherit (lib.lists) optionals;
  inherit (lib.strings) concatStringsSep;
@@ -88,8 +88,10 @@ effectiveStdenv.mkDerivation (finalAttrs: {
# referring to an array!
env.LDFLAGS = toString (
  optionals enableCuda [
    # Fake libcuda.so (the real one is deployed impurely)
    "-L${getOutput "stubs" cuda_cudart}/lib/stubs"
    # Fake libnvidia-ml.so (the real one is deployed impurely)
    "-L${getOutput "stubs" cuda_nvml_dev}/lib/stubs"
  ]
);
@@ -34,6 +34,13 @@ let
  };
in
stdenv.mkDerivation (finalAttrs: {
  __structuredAttrs = true;
  # TODO(@connorbaker):
  # When strictDeps is enabled, `cuda_nvcc` is required as the argument to `--with-cuda` in `configureFlags` or else
  # configurePhase fails with `checking for cuda_runtime.h... no`.
  # This is odd, especially given `cuda_runtime.h` is provided by `cuda_cudart.dev`, which is already in `buildInputs`.
  strictDeps = true;

  pname = "ucx";
  version = "1.19.0";
@@ -75,10 +82,17 @@ stdenv.mkDerivation (finalAttrs: {
]
++ lib.optionals enableRocm rocmList;

# NOTE: With `__structuredAttrs` enabled, `LDFLAGS` must be set under `env` so it is assured to be a string;
# otherwise, we might have forgotten to convert it to a string and Nix would make LDFLAGS a shell variable
# referring to an array!
env.LDFLAGS = toString (
  lib.optionals enableCuda [
    # Fake libcuda.so (the real one is deployed impurely)
    "-L${lib.getOutput "stubs" cudaPackages.cuda_cudart}/lib/stubs"
    # Fake libnvidia-ml.so (the real one is deployed impurely)
    "-L${lib.getOutput "stubs" cudaPackages.cuda_nvml_dev}/lib/stubs"
  ]
);

configureFlags = [
  "--with-rdmacm=${lib.getDev rdma-core}"
@@ -16,7 +16,7 @@
  rPackages,
}@inputs:

assert ncclSupport -> (cudaSupport && cudaPackages.nccl.meta.available);
# Disable regular tests when building the R package
# because 1) the R package runs its own tests and
# 2) the R package creates a different binary shared
@@ -10,39 +10,48 @@ package set by [cuda-packages.nix](../../top-level/cuda-packages.nix).

## Top-level directories

- `_cuda`: Fixed-point used to configure, construct, and extend the CUDA package
  set. This includes NVIDIA manifests.
- `buildRedist`: Contains the logic to build packages using NVIDIA's manifests.
- `packages`: Contains packages which exist in every instance of the CUDA
  package set. These packages are built in a `by-name` fashion.
- `setup-hooks`: Nixpkgs setup hooks for CUDA.
- `tests`: Contains tests which can be run against the CUDA package set.

Many redistributable packages are in the `packages` directory. Their presence
ensures that, even if a CUDA package set which no longer includes a given package
is being constructed, the attribute for that package will still exist (but refer
to a broken package). This prevents missing-attribute errors as the package set
evolves.

## Distinguished packages
|
||||
|
||||
Some packages are purposefully not in the `packages` directory. These are packages
|
||||
which do not make sense for Nixpkgs, require further investigation, or are otherwise
|
||||
not straightforward to include. These packages are:
|
||||
|
||||
- `cuda`:
|
||||
- `collectx_bringup`: missing `libssl.so.1.1` and `libcrypto.so.1.1`; not sure how
|
||||
to provide them or what the package does.
|
||||
- `cuda_sandbox_dev`: unclear on purpose.
|
||||
- `driver_assistant`: we don't use the drivers from the CUDA releases; irrelevant.
|
||||
- `mft_autocomplete`: unsure of purpose; contains FHS paths.
|
||||
- `mft_oem`: unsure of purpose; contains FHS paths.
|
||||
- `mft`: unsure of purpose; contains FHS paths.
|
||||
- `nvidia_driver`: we don't use the drivers from the CUDA releases; irrelevant.
|
||||
- `nvlsm`: contains FHS paths/NVSwitch and NVLINK software
|
||||
- `libnvidia_nscq`: NVSwitch software
|
||||
- `libnvsdm`: NVSwitch software
|
||||
- `cublasmp`:
|
||||
- `libcublasmp`: `nvshmem` isnt' packaged.
|
||||
- `cudnn`:
|
||||
- `cudnn_samples`: requires FreeImage, which is abandoned and not packaged.
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> When packaging redistributables, prefer `autoPatchelfIgnoreMissingDeps` to providing
|
||||
> paths to stubs with `extraAutoPatchelfLibs`; the stubs are meant to be used for
|
||||
> projects where linking against libraries available only at runtime is unavoidable.
|
||||
|
||||
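As an illustration of the note above (a minimal sketch in the style of the fixups in this tree; the library name is an assumption, not taken from a real fixup), a fixup declares driver-provided libraries as expected to be missing rather than pointing autoPatchelf at the stubs:

```nix
# Hedged sketch of a fixup: libraries supplied by the NVIDIA driver at
# runtime are listed for autoPatchelf to ignore at build time, instead of
# being resolved against stub libraries via extraAutoPatchelfLibs.
prevAttrs: {
  autoPatchelfIgnoreMissingDeps = prevAttrs.autoPatchelfIgnoreMissingDeps or [ ] ++ [
    "libcuda.so.1" # assumed example; provided by the driver at runtime
  ];
}
```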

### CUDA Compatibility

[CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/),
@@ -100,10 +100,27 @@
  }
)
{
  # Tesla K40
  "3.5" = {
    archName = "Kepler";
    minCudaMajorMinorVersion = "10.0";
    dontDefaultAfterCudaMajorMinorVersion = "11.0";
    maxCudaMajorMinorVersion = "11.8";
  };

  # Tesla K80
  "3.7" = {
    archName = "Kepler";
    minCudaMajorMinorVersion = "10.0";
    dontDefaultAfterCudaMajorMinorVersion = "11.0";
    maxCudaMajorMinorVersion = "11.8";
  };

  # Tesla/Quadro M series
  "5.0" = {
    archName = "Maxwell";
    minCudaMajorMinorVersion = "10.0";
    maxCudaMajorMinorVersion = "12.9";
    dontDefaultAfterCudaMajorMinorVersion = "11.0";
  };

@@ -111,6 +128,7 @@
  "5.2" = {
    archName = "Maxwell";
    minCudaMajorMinorVersion = "10.0";
    maxCudaMajorMinorVersion = "12.9";
    dontDefaultAfterCudaMajorMinorVersion = "11.0";
  };

@@ -118,6 +136,7 @@
  "6.0" = {
    archName = "Pascal";
    minCudaMajorMinorVersion = "10.0";
    maxCudaMajorMinorVersion = "12.9";
    # Removed from TensorRT 10.0, which corresponds to CUDA 12.4 release.
    # https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-1001/support-matrix/index.html
    dontDefaultAfterCudaMajorMinorVersion = "12.3";
@@ -128,6 +147,7 @@
  "6.1" = {
    archName = "Pascal";
    minCudaMajorMinorVersion = "10.0";
    maxCudaMajorMinorVersion = "12.9";
    # Removed from TensorRT 10.0, which corresponds to CUDA 12.4 release.
    # https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-1001/support-matrix/index.html
    dontDefaultAfterCudaMajorMinorVersion = "12.3";
@@ -137,11 +157,22 @@
  "7.0" = {
    archName = "Volta";
    minCudaMajorMinorVersion = "10.0";
    maxCudaMajorMinorVersion = "12.9";
    # Removed from TensorRT 10.5, which corresponds to CUDA 12.6 release.
    # https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-1050/support-matrix/index.html
    dontDefaultAfterCudaMajorMinorVersion = "12.5";
  };

  # Jetson AGX Xavier, Drive AGX Pegasus, Xavier NX
  "7.2" = {
    archName = "Volta";
    minCudaMajorMinorVersion = "10.0";
    # Note: without `cuda_compat`, maxCudaMajorMinorVersion is 11.8
    # https://docs.nvidia.com/cuda/cuda-for-tegra-appnote/index.html#deployment-considerations-for-cuda-upgrade-package
    maxCudaMajorMinorVersion = "12.2";
    isJetson = true;
  };

  # GTX/RTX Turing – GTX 1660 Ti, RTX 2060, RTX 2070, RTX 2080, Titan RTX, Quadro RTX 4000,
  # Quadro RTX 5000, Quadro RTX 6000, Quadro RTX 8000, Quadro T1000/T2000, Tesla T4
  "7.5" = {
@@ -163,20 +194,30 @@
    minCudaMajorMinorVersion = "11.2";
  };

  # Jetson AGX Orin and Drive AGX Orin only
  # Tegra T234 (Jetson Orin)
  "8.7" = {
    archName = "Ampere";
    minCudaMajorMinorVersion = "11.5";
    minCudaMajorMinorVersion = "11.4";
    isJetson = true;
  };

  # Tegra T239 (Switch 2?)
  # "8.8" = {
  #   archName = "Ampere";
  #   minCudaMajorMinorVersion = "13.0";
  #   # It's not a Jetson device, but it does use the same architecture.
  #   isJetson = true;
  #   # Should never be default.
  #   dontDefaultAfterCudaMajorMinorVersion = "13.0";
  # };

  # NVIDIA GeForce RTX 4090, RTX 4080, RTX 6000, Tesla L40
  "8.9" = {
    archName = "Ada";
    minCudaMajorMinorVersion = "11.8";
  };

  # NVIDIA H100 (GH100)
  # NVIDIA H100, H200, GH200
  "9.0" = {
    archName = "Hopper";
    minCudaMajorMinorVersion = "11.8";
@@ -187,7 +228,7 @@
    minCudaMajorMinorVersion = "12.0";
  };

  # NVIDIA B100
  # NVIDIA B200, GB200
  "10.0" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.7";
@@ -203,26 +244,33 @@
    minCudaMajorMinorVersion = "12.9";
  };

  # NVIDIA Jetson Thor Blackwell
  # NVIDIA Jetson Thor Blackwell, T4000, T5000 (CUDA 12.7-12.9)
  # Okay, so:
  # - Support for Thor was added in CUDA 12.7, which was never released but is referenced in docs
  # - NVIDIA changed the compute capability from 10.0 to 11.0 in CUDA 13.0
  # - From CUDA 13.0 and on, 10.1 is no longer a valid compute capability
  "10.1" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.7";
    maxCudaMajorMinorVersion = "12.9";
    isJetson = true;
  };

  "10.1a" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.7";
    maxCudaMajorMinorVersion = "12.9";
    isJetson = true;
  };

  "10.1f" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.9";
    maxCudaMajorMinorVersion = "12.9";
    isJetson = true;
  };

  # NVIDIA ???
  # NVIDIA B300
  "10.3" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.9";
@@ -238,6 +286,25 @@
    minCudaMajorMinorVersion = "12.9";
  };

  # NVIDIA Jetson Thor Blackwell, T4000, T5000 (CUDA 13.0+)
  "11.0" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "13.0";
    isJetson = true;
  };

  "11.0a" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "13.0";
    isJetson = true;
  };

  "11.0f" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "13.0";
    isJetson = true;
  };

  # NVIDIA GeForce RTX 5090 (GB202) etc.
  "12.0" = {
    archName = "Blackwell";
@@ -254,7 +321,7 @@
    minCudaMajorMinorVersion = "12.9";
  };

  # NVIDIA ???
  # NVIDIA DGX Spark
  "12.1" = {
    archName = "Blackwell";
    minCudaMajorMinorVersion = "12.9";
@@ -28,7 +28,204 @@
  ```
*/
nvccCompatibilities = {
  # Our baseline
  # https://docs.nvidia.com/cuda/archive/11.0/cuda-toolkit-release-notes/index.html#cuda-compiler-new-features
  "11.0" = {
    clang = {
      maxMajorVersion = "9";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "9";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 10 and GCC 10
  # https://docs.nvidia.com/cuda/archive/11.1.1/cuda-toolkit-release-notes/index.html#cuda-compiler-new-features
  "11.1" = {
    clang = {
      maxMajorVersion = "10";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "10";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 11
  # https://docs.nvidia.com/cuda/archive/11.2.2/cuda-installation-guide-linux/index.html#system-requirements
  "11.2" = {
    clang = {
      maxMajorVersion = "11";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "10";
      minMajorVersion = "6";
    };
  };

  # No changes from 11.2 to 11.3
  "11.3" = {
    clang = {
      maxMajorVersion = "11";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "10";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 12 and GCC 11
  # https://docs.nvidia.com/cuda/archive/11.4.4/cuda-toolkit-release-notes/index.html#cuda-general-new-features
  # NOTE: There is a bug in the version of GLIBC that GCC 11 uses which causes it to fail to compile some CUDA
  # code. As such, we skip it for this release, and do the bump in 11.6 (skipping 11.5).
  # https://forums.developer.nvidia.com/t/cuda-11-5-samples-throw-multiple-error-attribute-malloc-does-not-take-arguments/192750/15
  "11.4" = {
    clang = {
      maxMajorVersion = "12";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "10";
      minMajorVersion = "6";
    };
  };

  # No changes from 11.4 to 11.5
  "11.5" = {
    clang = {
      maxMajorVersion = "12";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "10";
      minMajorVersion = "6";
    };
  };

  # No changes from 11.5 to 11.6
  # However, as mentioned above, we add GCC 11 this release.
  "11.6" = {
    clang = {
      maxMajorVersion = "12";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "11";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 13
  # https://docs.nvidia.com/cuda/archive/11.7.1/cuda-toolkit-release-notes/index.html#cuda-compiler-new-features
  "11.7" = {
    clang = {
      maxMajorVersion = "13";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "11";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 14
  # https://docs.nvidia.com/cuda/archive/11.8.0/cuda-installation-guide-linux/index.html#system-requirements
  "11.8" = {
    clang = {
      maxMajorVersion = "14";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "11";
      minMajorVersion = "6";
    };
  };

  # Added support for GCC 12
  # https://docs.nvidia.com/cuda/archive/12.0.1/cuda-installation-guide-linux/index.html#system-requirements
  "12.0" = {
    clang = {
      maxMajorVersion = "14";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "12";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 15
  # https://docs.nvidia.com/cuda/archive/12.1.1/cuda-toolkit-release-notes/index.html#cuda-compilers-new-features
  "12.1" = {
    clang = {
      maxMajorVersion = "15";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "12";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 16
  # https://docs.nvidia.com/cuda/archive/12.2.2/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.2" = {
    clang = {
      maxMajorVersion = "16";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "12";
      minMajorVersion = "6";
    };
  };

  # No changes from 12.2 to 12.3
  # https://docs.nvidia.com/cuda/archive/12.3.2/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.3" = {
    clang = {
      maxMajorVersion = "16";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "12";
      minMajorVersion = "6";
    };
  };

  # Maximum Clang version is 17
  # Minimum GCC version is still 6, but all versions prior to GCC 7.3 are deprecated.
  # Maximum GCC version is 13.2
  # https://docs.nvidia.com/cuda/archive/12.4.1/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.4" = {
    clang = {
      maxMajorVersion = "17";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "13";
      minMajorVersion = "6";
    };
  };

  # No changes from 12.4 to 12.5
  # https://docs.nvidia.com/cuda/archive/12.5.1/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.5" = {
    clang = {
      maxMajorVersion = "17";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "13";
      minMajorVersion = "6";
    };
  };

  # Added support for Clang 18
  # https://docs.nvidia.com/cuda/archive/12.6.0/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.6" = {
    clang = {
@@ -55,7 +252,7 @@
  };

  # No changes from 12.8 to 12.9
  # https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  # https://docs.nvidia.com/cuda/archive/12.9.1/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "12.9" = {
    clang = {
      maxMajorVersion = "19";
@@ -66,5 +263,18 @@
      minMajorVersion = "6";
    };
  };

  # 12.9 to 13.0 adds support for GCC 15 and Clang 20
  # https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#host-compiler-support-policy
  "13.0" = {
    clang = {
      maxMajorVersion = "20";
      minMajorVersion = "7";
    };
    gcc = {
      maxMajorVersion = "15";
      minMajorVersion = "6";
    };
  };
};
}
@@ -1,9 +1,7 @@
# The _cuda attribute set is a fixed-point which contains the static functionality required to construct CUDA package
# sets. For example, `_cuda.bootstrapData` includes information about NVIDIA's redistributables (such as the names
# NVIDIA uses for different systems), `_cuda.lib` contains utility functions like `formatCapabilities` (which generate
# common arguments passed to NVCC and `cmakeFlags`), and `_cuda.fixups` contains `callPackage`-able functions which
# are provided to the corresponding package's `overrideAttrs` attribute to provide package-specific fixups
# out of scope of the generic redistributable builder.
# NVIDIA uses for different systems), and `_cuda.lib` contains utility functions like `formatCapabilities` (which generate
# common arguments passed to NVCC and `cmakeFlags`).
#
# Since this attribute set is used to construct the CUDA package sets, it must exist outside the fixed point of the
# package sets. Making these attributes available directly in the package set construction could cause confusion if
@@ -23,7 +21,7 @@ lib.fixedPoints.makeExtensible (final: {
    inherit lib;
  };
  extensions = [ ]; # Extensions applied to every CUDA package set.
  fixups = import ./fixups { inherit lib; };
  manifests = import ./manifests { inherit lib; };
  lib = import ./lib {
    _cuda = final;
    inherit lib;
@@ -1,12 +0,0 @@
{ flags, lib }:
prevAttrs: {
  autoPatchelfIgnoreMissingDeps = prevAttrs.autoPatchelfIgnoreMissingDeps or [ ] ++ [
    "libnvrm_gpu.so"
    "libnvrm_mem.so"
    "libnvdla_runtime.so"
  ];
  # `cuda_compat` only works on aarch64-linux, and only when building for Jetson devices.
  badPlatformsConditions = prevAttrs.badPlatformsConditions or { } // {
    "Trying to use cuda_compat on aarch64-linux targeting non-Jetson devices" = !flags.isJetsonBuild;
  };
}
@@ -1,37 +0,0 @@
# TODO(@connorbaker): cuda_cudart.dev depends on crt/host_config.h, which is from
# (getDev cuda_nvcc). It would be nice to be able to encode that.
{ addDriverRunpath, lib }:
prevAttrs: {
  # Remove once cuda-find-redist-features has a special case for libcuda
  outputs =
    prevAttrs.outputs or [ ]
    ++ lib.lists.optionals (!(builtins.elem "stubs" prevAttrs.outputs)) [ "stubs" ];

  allowFHSReferences = false;

  # The libcuda stub's pkg-config doesn't follow the general pattern:
  postPatch =
    prevAttrs.postPatch or ""
    + ''
      while IFS= read -r -d $'\0' path; do
        sed -i \
          -e "s|^libdir\s*=.*/lib\$|libdir=''${!outputLib}/lib/stubs|" \
          -e "s|^Libs\s*:\(.*\)\$|Libs: \1 -Wl,-rpath,${addDriverRunpath.driverLink}/lib|" \
          "$path"
      done < <(find -iname 'cuda-*.pc' -print0)
    ''
    # Namelink may not be enough, add a soname.
    # Cf. https://gitlab.kitware.com/cmake/cmake/-/issues/25536
    + ''
      if [[ -f lib/stubs/libcuda.so && ! -f lib/stubs/libcuda.so.1 ]]; then
        ln -s libcuda.so lib/stubs/libcuda.so.1
      fi
    '';

  postFixup = prevAttrs.postFixup or "" + ''
    mv "''${!outputDev}/share" "''${!outputDev}/lib"
    moveToOutput lib/stubs "$stubs"
    ln -s "$stubs"/lib/stubs/* "$stubs"/lib/
    ln -s "$stubs"/lib/stubs "''${!outputLib}/lib/stubs"
  '';
}
@@ -1,18 +0,0 @@
{
  libglut,
  libcufft,
  libcurand,
  libGLU,
  libglvnd,
  libgbm,
}:
prevAttrs: {
  buildInputs = prevAttrs.buildInputs or [ ] ++ [
    libglut
    libcufft
    libcurand
    libGLU
    libglvnd
    libgbm
  ];
}
@@ -1,35 +0,0 @@
{
  cudaAtLeast,
  gmp,
  expat,
  libxcrypt-legacy,
  ncurses6,
  python310,
  python311,
  python312,
  stdenv,
  lib,
}:
prevAttrs: {
  buildInputs =
    prevAttrs.buildInputs or [ ]
    ++ [
      gmp
      libxcrypt-legacy
      ncurses6
      python310
      python311
      python312
    ]
    # aarch64,sbsa needs expat
    ++ lib.lists.optionals (stdenv.hostPlatform.isAarch64) [ expat ];

  installPhase =
    prevAttrs.installPhase or ""
    # Python 3.8 is not in nixpkgs anymore, delete Python 3.8 cuda-gdb support
    # to avoid autopatchelf failing to find libpython3.8.so.
    + ''
      find $bin -name '*python3.8*' -delete
      find $bin -name '*python3.9*' -delete
    '';
}
@@ -1,62 +0,0 @@
{
  lib,
  backendStdenv,
  setupCudaHook,
}:
prevAttrs: {
  # Merge "bin" and "dev" into "out" to avoid circular references
  outputs = builtins.filter (
    x:
    !(builtins.elem x [
      "dev"
      "bin"
    ])
  ) prevAttrs.outputs or [ ];

  # Patch the nvcc.profile.
  # Syntax:
  # - `=` for assignment,
  # - `?=` for conditional assignment,
  # - `+=` to "prepend",
  # - `=+` to "append".

  # Cf. https://web.archive.org/web/20230308044351/https://arcb.csc.ncsu.edu/~mueller/cluster/nvidia/2.0/nvcc_2.0.pdf

  # We set all variables with the lowest priority (=+), but we do force
  # nvcc to use the fixed backend toolchain. Cf. comments in
  # backend-stdenv.nix

  postPatch =
    prevAttrs.postPatch or ""
    + ''
      substituteInPlace bin/nvcc.profile \
        --replace-fail \
          '$(TOP)/$(_TARGET_DIR_)/include' \
          "''${!outputDev}/include"
    ''
    + ''
      cat << EOF >> bin/nvcc.profile

      # Fix a compatible backend compiler
      PATH += "${backendStdenv.cc}/bin":

      # Expose the split-out nvvm
      LIBRARIES =+ "-L''${!outputBin}/nvvm/lib"
      INCLUDES =+ "-I''${!outputBin}/nvvm/include"
      EOF
    '';

  # Entries here will be in nativeBuildInputs when cuda_nvcc is in nativeBuildInputs.
  propagatedBuildInputs = prevAttrs.propagatedBuildInputs or [ ] ++ [ setupCudaHook ];

  postInstall = prevAttrs.postInstall or "" + ''
    moveToOutput "nvvm" "''${!outputBin}"
  '';

  # The nvcc and cicc binaries contain hard-coded references to /usr
  allowFHSReferences = true;

  meta = prevAttrs.meta or { } // {
    mainProgram = "nvcc";
  };
}
@@ -1 +0,0 @@
{ cuda_cupti }: prevAttrs: { buildInputs = prevAttrs.buildInputs or [ ] ++ [ cuda_cupti ]; }
@@ -1 +0,0 @@
_: _: { outputs = [ "out" ]; }
@@ -1,75 +0,0 @@
{
  cudaOlder,
  cudaMajorMinorVersion,
  fetchurl,
  lib,
  libcublas,
  patchelf,
  zlib,
}:
let
  inherit (lib)
    attrsets
    maintainers
    meta
    strings
    ;
in
finalAttrs: prevAttrs: {
  src = fetchurl { inherit (finalAttrs.passthru.redistribRelease) hash url; };

  # Useful for inspecting why something went wrong.
  badPlatformsConditions =
    let
      cudaTooOld = cudaOlder finalAttrs.passthru.featureRelease.minCudaVersion;
      cudaTooNew =
        (finalAttrs.passthru.featureRelease.maxCudaVersion != null)
        && strings.versionOlder finalAttrs.passthru.featureRelease.maxCudaVersion cudaMajorMinorVersion;
    in
    prevAttrs.badPlatformsConditions or { }
    // {
      "CUDA version is too old" = cudaTooOld;
      "CUDA version is too new" = cudaTooNew;
    };

  buildInputs = prevAttrs.buildInputs or [ ] ++ [
    zlib
    (attrsets.getLib libcublas)
  ];

  # Tell autoPatchelf about runtime dependencies. *_infer* libraries only
  # exist in CuDNN 8.
  # NOTE: Versions from CUDNN releases have four components.
  postFixup =
    prevAttrs.postFixup or ""
    +
      strings.optionalString
        (
          strings.versionAtLeast finalAttrs.version "8.0.5.0"
          && strings.versionOlder finalAttrs.version "9.0.0.0"
        )
        ''
          ${meta.getExe patchelf} $lib/lib/libcudnn.so --add-needed libcudnn_cnn_infer.so
          ${meta.getExe patchelf} $lib/lib/libcudnn_ops_infer.so --add-needed libcublas.so --add-needed libcublasLt.so
        '';

  meta = prevAttrs.meta or { } // {
    homepage = "https://developer.nvidia.com/cudnn";
    maintainers =
      prevAttrs.meta.maintainers or [ ]
      ++ (with maintainers; [
        mdaiter
        samuela
        connorbaker
      ]);
    # TODO(@connorbaker): Temporary workaround to avoid changing the derivation hash since introducing more
    # brokenConditions would change the derivation as they're top-level and __structuredAttrs is set.
    teams = prevAttrs.meta.teams or [ ];
    license = {
      shortName = "cuDNN EULA";
      fullName = "NVIDIA cuDNN Software License Agreement (EULA)";
      url = "https://docs.nvidia.com/deeplearning/sdk/cudnn-sla/index.html#supplement";
      free = false;
    };
  };
}
@@ -1,11 +0,0 @@
{ lib }:
lib.concatMapAttrs (
  fileName: _type:
  let
    # Fixup is in `./${attrName}.nix` or in `./${fileName}/default.nix`:
    attrName = lib.removeSuffix ".nix" fileName;
    fixup = import (./. + "/${fileName}");
    isFixup = fileName != "default.nix";
  in
  lib.optionalAttrs isFixup { ${attrName} = fixup; }
) (builtins.readDir ./.)
@@ -1,5 +0,0 @@
_: prevAttrs: {
  badPlatformsConditions = prevAttrs.badPlatformsConditions or { } // {
    "Package is not supported; use drivers from linuxPackages" = true;
  };
}
@@ -1 +0,0 @@
{ zlib }: prevAttrs: { buildInputs = prevAttrs.buildInputs or [ ] ++ [ zlib ]; }
@@ -1 +0,0 @@
{ zlib }: prevAttrs: { buildInputs = prevAttrs.buildInputs or [ ] ++ [ zlib ]; }
@@ -1,12 +0,0 @@
{
  libcublas,
  numactl,
  rdma-core,
}:
prevAttrs: {
  buildInputs = prevAttrs.buildInputs or [ ] ++ [
    libcublas
    numactl
    rdma-core
  ];
}
@@ -1,19 +0,0 @@
{
  cudaAtLeast,
  lib,
  libcublas,
  libcusparse ? null,
  libnvjitlink ? null,
}:
prevAttrs: {
  buildInputs = prevAttrs.buildInputs or [ ] ++ [
    libcublas
    libnvjitlink
    libcusparse
  ];

  brokenConditions = prevAttrs.brokenConditions or { } // {
    "libnvjitlink missing (CUDA >= 12.0)" = libnvjitlink == null;
    "libcusparse missing (CUDA >= 12.1)" = libcusparse == null;
  };
}
@@ -1,12 +0,0 @@
{
  cudaAtLeast,
  lib,
  libnvjitlink ? null,
}:
prevAttrs: {
  buildInputs = prevAttrs.buildInputs or [ ] ++ [ libnvjitlink ];

  brokenConditions = prevAttrs.brokenConditions or { } // {
    "libnvjitlink missing (CUDA >= 12.0)" = libnvjitlink == null;
  };
}
@@ -1,23 +0,0 @@
{
  cuda_cudart,
  lib,
  libcublas,
}:
finalAttrs: prevAttrs: {
  buildInputs =
    prevAttrs.buildInputs or [ ]
    ++ [ (lib.getLib libcublas) ]
    # For some reason, the 1.4.x release of cusparselt requires the cudart library.
    ++ lib.optionals (lib.hasPrefix "1.4" finalAttrs.version) [ (lib.getLib cuda_cudart) ];
  meta = prevAttrs.meta or { } // {
    description = "cuSPARSELt: A High-Performance CUDA Library for Sparse Matrix-Matrix Multiplication";
    homepage = "https://developer.nvidia.com/cusparselt-downloads";
    maintainers = prevAttrs.meta.maintainers or [ ] ++ [ lib.maintainers.sepiabrown ];
    teams = prevAttrs.meta.teams or [ ];
    license = lib.licenses.unfreeRedistributable // {
      shortName = "cuSPARSELt EULA";
      fullName = "cuSPARSELt SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
      url = "https://docs.nvidia.com/cuda/cusparselt/license.html";
    };
  };
}
@@ -1,23 +0,0 @@
{
  cuda_cudart,
  lib,
  libcublas,
}:
finalAttrs: prevAttrs: {
  buildInputs =
    prevAttrs.buildInputs or [ ]
    ++ [ (lib.getLib libcublas) ]
    # For some reason, the 1.4.x release of cuTENSOR requires the cudart library.
    ++ lib.optionals (lib.hasPrefix "1.4" finalAttrs.version) [ (lib.getLib cuda_cudart) ];
  meta = prevAttrs.meta or { } // {
    description = "cuTENSOR: A High-Performance CUDA Library For Tensor Primitives";
    homepage = "https://developer.nvidia.com/cutensor";
    maintainers = prevAttrs.meta.maintainers or [ ] ++ [ lib.maintainers.obsidian-systems-maintenance ];
    teams = prevAttrs.meta.teams;
    license = lib.licenses.unfreeRedistributable // {
      shortName = "cuTENSOR EULA";
      fullName = "cuTENSOR SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
      url = "https://docs.nvidia.com/cuda/cutensor/license.html";
    };
  };
}
@@ -1,86 +0,0 @@
{
  cudaAtLeast,
  cudaMajorMinorVersion,
  cudaOlder,
  e2fsprogs,
  elfutils,
  flags,
  gst_all_1,
  lib,
  libjpeg8,
  qt6,
  rdma-core,
  stdenv,
  ucx,
}:
prevAttrs:
let
  qtwayland = lib.getLib qt6.qtwayland;
  inherit (qt6) wrapQtAppsHook qtwebview;
  archDir =
    {
      aarch64-linux = "linux-" + (if flags.isJetsonBuild then "v4l_l4t" else "desktop") + "-t210-a64";
      x86_64-linux = "linux-desktop-glibc_2_11_3-x64";
    }
    .${stdenv.hostPlatform.system} or (throw "Unsupported system: ${stdenv.hostPlatform.system}");
in
{
  outputs = [ "out" ]; # NOTE(@connorbaker): Force a single output so relative lookups work.
  nativeBuildInputs = prevAttrs.nativeBuildInputs or [ ] ++ [ wrapQtAppsHook ];
  buildInputs =
    prevAttrs.buildInputs or [ ]
    ++ [
      qtwayland
      qtwebview
      qt6.qtwebengine
      rdma-core
    ]
    ++ lib.optionals (cudaOlder "12.7") [
      e2fsprogs
      ucx
    ]
    ++ lib.optionals (cudaMajorMinorVersion == "12.9") [
      elfutils
    ];
  dontWrapQtApps = true;
  preInstall = prevAttrs.preInstall or "" + ''
    if [[ -d nsight-compute ]]; then
      nixLog "Lifting components of Nsight Compute to the top level"
      mv -v nsight-compute/*/* .
      nixLog "Removing empty directories"
      rmdir -pv nsight-compute/*
    fi

    rm -rf host/${archDir}/Mesa/
  '';
  postInstall =
    prevAttrs.postInstall or ""
    + ''
      moveToOutput 'ncu' "''${!outputBin}/bin"
      moveToOutput 'ncu-ui' "''${!outputBin}/bin"
      moveToOutput 'host/${archDir}' "''${!outputBin}/bin"
      moveToOutput 'target/${archDir}' "''${!outputBin}/bin"
      wrapQtApp "''${!outputBin}/bin/host/${archDir}/ncu-ui.bin"
    ''
    # NOTE(@connorbaker): No idea what this platform is or how to patchelf for it.
    + lib.optionalString (flags.isJetsonBuild && cudaOlder "12.9") ''
      nixLog "Removing QNX 700 target directory for Jetson builds"
      rm -rfv "''${!outputBin}/target/qnx-700-t210-a64"
    ''
    + lib.optionalString (flags.isJetsonBuild && cudaAtLeast "12.8") ''
      nixLog "Removing QNX 800 target directory for Jetson builds"
      rm -rfv "''${!outputBin}/target/qnx-800-tegra-a64"
    '';
  # lib needs libtiff.so.5, but nixpkgs provides libtiff.so.6
  preFixup = prevAttrs.preFixup or "" + ''
    patchelf --replace-needed libtiff.so.5 libtiff.so "''${!outputBin}/bin/host/${archDir}/Plugins/imageformats/libqtiff.so"
  '';
  autoPatchelfIgnoreMissingDeps = prevAttrs.autoPatchelfIgnoreMissingDeps or [ ] ++ [
    "libnvidia-ml.so.1"
  ];
  # NOTE(@connorbaker): It might be a problem that when nsight_compute contains hosts and targets of different
  # architectures, that we patchelf just the binaries matching the builder's platform; autoPatchelfHook prints
  # messages like
  #   skipping [$out]/host/linux-desktop-glibc_2_11_3-x64/libQt6Core.so.6 because its architecture (x64) differs from
  #   target (AArch64)
}
@@ -1,136 +0,0 @@
{
  boost178,
  cuda_cudart,
  cudaAtLeast,
  e2fsprogs,
  gst_all_1,
  lib,
  nss,
  numactl,
  pulseaudio,
  qt6,
  rdma-core,
  stdenv,
  ucx,
  wayland,
  xorg,
}:
prevAttrs:
let
  qtwayland = lib.getLib qt6.qtwayland;
  qtWaylandPlugins = "${qtwayland}/${qt6.qtbase.qtPluginPrefix}";
  # NOTE(@connorbaker): nsight_systems doesn't support Jetson, so no need for case splitting on aarch64-linux.
  hostDir =
    {
      aarch64-linux = "host-linux-armv8";
      x86_64-linux = "host-linux-x64";
    }
    .${stdenv.hostPlatform.system} or (throw "Unsupported system: ${stdenv.hostPlatform.system}");
  targetDir =
    {
      aarch64-linux = "target-linux-sbsa-armv8";
      x86_64-linux = "target-linux-x64";
    }
    .${stdenv.hostPlatform.system} or (throw "Unsupported system: ${stdenv.hostPlatform.system}");
in
{
  outputs = [ "out" ]; # NOTE(@connorbaker): Force a single output so relative lookups work.

  # An ad hoc replacement for
  # https://github.com/ConnorBaker/cuda-redist-find-features/issues/11
  env = prevAttrs.env or { } // {
    rmPatterns =
      prevAttrs.env.rmPatterns or ""
      + toString [
        "${hostDir}/lib{arrow,jpeg}*"
        "${hostDir}/lib{ssl,ssh,crypto}*"
        "${hostDir}/libboost*"
        "${hostDir}/libexec"
        "${hostDir}/libstdc*"
        "${hostDir}/python/bin/python"
        "${hostDir}/Mesa"
      ];
  };

  # NOTE(@connorbaker): nsight-exporter and nsight-sys are deprecated scripts wrapping nsys, it's fine to remove them.
  prePatch = prevAttrs.prePatch or "" + ''
    if [[ -d bin ]]; then
      nixLog "Removing bin wrapper scripts"
      for knownWrapper in bin/{nsys{,-ui},nsight-{exporter,sys}}; do
        [[ -e $knownWrapper ]] && rm -v "$knownWrapper"
      done
      unset -v knownWrapper

      nixLog "Removing empty bin directory"
      rmdir -v bin
    fi

    if [[ -d nsight-systems ]]; then
      nixLog "Lifting components of Nsight System to the top level"
      mv -v nsight-systems/*/* .
      nixLog "Removing empty nsight-systems directory"
      rmdir -pv nsight-systems/*
    fi
  '';

  postPatch = prevAttrs.postPatch or "" + ''
    for path in $rmPatterns; do
      rm -r "$path"
    done
    patchShebangs nsight-systems
  '';

  nativeBuildInputs = prevAttrs.nativeBuildInputs or [ ] ++ [ qt6.wrapQtAppsHook ];

  dontWrapQtApps = true;

  buildInputs =
    prevAttrs.buildInputs or [ ]
    ++ [
      qt6.qtdeclarative
      qt6.qtsvg
      qt6.qtimageformats
      qt6.qtpositioning
      qt6.qtscxml
      qt6.qttools
      qt6.qtwebengine
      qt6.qtwayland
      boost178
      cuda_cudart.stubs
      e2fsprogs
      gst_all_1.gst-plugins-base
      gst_all_1.gstreamer
      nss
      numactl
      pulseaudio
      qt6.qtbase
      qtWaylandPlugins
      rdma-core
      ucx
      wayland
      xorg.libXcursor
      xorg.libXdamage
      xorg.libXrandr
      xorg.libXtst
    ]
    # NOTE(@connorbaker): Seems to be required only for aarch64-linux.
    ++ lib.optionals stdenv.hostPlatform.isAarch64 [
      gst_all_1.gst-plugins-bad
    ];

  postInstall = prevAttrs.postInstall or "" + ''
    moveToOutput '${hostDir}' "''${!outputBin}"
    moveToOutput '${targetDir}' "''${!outputBin}"
    moveToOutput 'bin' "''${!outputBin}"
    wrapQtApp "''${!outputBin}/${hostDir}/nsys-ui.bin"
  '';

  # lib needs libtiff.so.5, but nixpkgs provides libtiff.so.6
  preFixup = prevAttrs.preFixup or "" + ''
    patchelf --replace-needed libtiff.so.5 libtiff.so "''${!outputBin}/${hostDir}/Plugins/imageformats/libqtiff.so"
  '';

  autoPatchelfIgnoreMissingDeps = prevAttrs.autoPatchelfIgnoreMissingDeps or [ ] ++ [
|
||||
"libnvidia-ml.so.1"
|
||||
];
|
||||
}
|
||||
@@ -1,5 +0,0 @@
_: prevAttrs: {
  badPlatformsConditions = prevAttrs.badPlatformsConditions or { } // {
    "Package is not supported; use drivers from linuxPackages" = true;
  };
}
@@ -1,127 +0,0 @@
{
  _cuda,
  cudaOlder,
  cudaPackages,
  cudaMajorMinorVersion,
  lib,
  patchelf,
  requireFile,
  stdenv,
}:
let
  inherit (lib)
    attrsets
    maintainers
    meta
    strings
    versions
    ;
  inherit (stdenv) hostPlatform;
  # targetArch :: String
  targetArch = attrsets.attrByPath [ hostPlatform.system ] "unsupported" {
    x86_64-linux = "x86_64-linux-gnu";
    aarch64-linux = "aarch64-linux-gnu";
  };
in
finalAttrs: prevAttrs: {
  # Useful for inspecting why something went wrong.
  brokenConditions =
    let
      cudaTooOld = cudaOlder finalAttrs.passthru.featureRelease.minCudaVersion;
      cudaTooNew =
        (finalAttrs.passthru.featureRelease.maxCudaVersion != null)
        && strings.versionOlder finalAttrs.passthru.featureRelease.maxCudaVersion cudaMajorMinorVersion;
      cudnnVersionIsSpecified = finalAttrs.passthru.featureRelease.cudnnVersion != null;
      cudnnVersionSpecified = versions.majorMinor finalAttrs.passthru.featureRelease.cudnnVersion;
      cudnnVersionProvided = versions.majorMinor finalAttrs.passthru.cudnn.version;
      cudnnTooOld =
        cudnnVersionIsSpecified && (strings.versionOlder cudnnVersionProvided cudnnVersionSpecified);
      cudnnTooNew =
        cudnnVersionIsSpecified && (strings.versionOlder cudnnVersionSpecified cudnnVersionProvided);
    in
    prevAttrs.brokenConditions or { }
    // {
      "CUDA version is too old" = cudaTooOld;
      "CUDA version is too new" = cudaTooNew;
      "CUDNN version is too old" = cudnnTooOld;
      "CUDNN version is too new" = cudnnTooNew;
    };

  src = requireFile {
    name = finalAttrs.passthru.redistribRelease.filename;
    inherit (finalAttrs.passthru.redistribRelease) hash;
    message = ''
      To use the TensorRT derivation, you must join the NVIDIA Developer Program and
      download the ${finalAttrs.version} TAR package for CUDA ${cudaMajorMinorVersion} from
      ${finalAttrs.meta.homepage}.

      Once you have downloaded the file, add it to the store with the following
      command, and try building this derivation again.

      $ nix-store --add-fixed sha256 ${finalAttrs.passthru.redistribRelease.filename}
    '';
  };

  # We need to look inside the extracted output to get the files we need.
  sourceRoot = "TensorRT-${finalAttrs.version}";

  buildInputs = prevAttrs.buildInputs or [ ] ++ [ (finalAttrs.passthru.cudnn.lib or null) ];

  preInstall =
    prevAttrs.preInstall or ""
    + strings.optionalString (targetArch != "unsupported") ''
      # Replace symlinks to bin and lib with the actual directories from targets.
      for dir in bin lib; do
        rm "$dir"
        mv "targets/${targetArch}/$dir" "$dir"
      done

      # Remove broken symlinks
      for dir in include samples; do
        rm "targets/${targetArch}/$dir" || :
      done
    '';

  # Tell autoPatchelf about runtime dependencies.
  postFixup =
    let
      versionTriple = "${versions.majorMinor finalAttrs.version}.${versions.patch finalAttrs.version}";
    in
    prevAttrs.postFixup or ""
    + ''
      ${meta.getExe' patchelf "patchelf"} --add-needed libnvinfer.so \
        "$lib/lib/libnvinfer.so.${versionTriple}" \
        "$lib/lib/libnvinfer_plugin.so.${versionTriple}" \
        "$lib/lib/libnvinfer_builder_resource.so.${versionTriple}"
    '';

  passthru = prevAttrs.passthru or { } // {
    # The CUDNN used with TensorRT.
    # If null, the default cudnn derivation will be used.
    # If a version is specified, the cudnn derivation with that version will be used,
    # unless it is not available, in which case the default cudnn derivation will be used.
    cudnn =
      let
        desiredName = _cuda.lib.mkVersionedName "cudnn" (
          lib.versions.majorMinor finalAttrs.passthru.featureRelease.cudnnVersion
        );
      in
      if finalAttrs.passthru.featureRelease.cudnnVersion == null || (cudaPackages ? desiredName) then
        cudaPackages.cudnn
      else
        cudaPackages.${desiredName};
  };

  meta = prevAttrs.meta or { } // {
    badPlatforms =
      prevAttrs.meta.badPlatforms or [ ]
      ++ lib.optionals (targetArch == "unsupported") [ hostPlatform.system ];
    homepage = "https://developer.nvidia.com/tensorrt";
    teams = prevAttrs.meta.teams or [ ];

    # Building TensorRT on Hydra is impossible because of the non-redistributable
    # license and because the source needs to be manually downloaded from the
    # NVIDIA Developer Program (see requireFile above).
    hydraPlatforms = lib.platforms.none;
  };
}
@@ -1,4 +1,4 @@
{ lib }:
{ _cuda, lib }:
{
  /**
    Returns whether a capability should be built by default for a particular CUDA version.
@@ -114,16 +114,14 @@
    ```
  */
  allowUnfreeCudaPredicate =
    package:
    lib.all (
      license:
      license.free
      || lib.elem license.shortName [
        "CUDA EULA"
        "cuDNN EULA"
        "cuSPARSELt EULA"
        "cuTENSOR EULA"
        "NVidia OptiX EULA"
      ]
    ) (lib.toList package.meta.license);
    let
      cudaLicenseNames = [
        lib.licenses.nvidiaCuda.shortName
      ]
      ++ lib.map (license: license.shortName) (lib.attrValues _cuda.lib.licenses);
    in
    package:
    lib.all (license: license.free || lib.elem (license.shortName or null) cudaLicenseNames) (
      lib.toList package.meta.license
    );
}

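The rewritten predicate collects NVIDIA license short names from `_cuda.lib.licenses` rather than hard-coding them, so newly added redistributable licenses are allowed automatically. A minimal standalone sketch of the same check, assuming a hypothetical `cudaLicenseNames` list and `pkg` value in place of the actual Nixpkgs wiring:

```nix
let
  lib = import <nixpkgs/lib>;
  # Hypothetical stand-in for the names gathered from _cuda.lib.licenses.
  cudaLicenseNames = [ "CUDA EULA" "cuDNN EULA" ];
  allowUnfree =
    package:
    # Every license must be free, or its shortName must be on the allow list.
    lib.all (license: license.free || lib.elem (license.shortName or null) cudaLicenseNames) (
      lib.toList package.meta.license
    );
in
allowUnfree { meta.license = { shortName = "cuDNN EULA"; free = false; }; }
```

Note the `license.shortName or null` access: unlike the old predicate, it tolerates license attribute sets without a `shortName`.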
@@ -11,13 +11,16 @@
    ;

  # See ./cuda.nix for documentation.
  inherit (import ./cuda.nix { inherit lib; })
  inherit (import ./cuda.nix { inherit _cuda lib; })
    _cudaCapabilityIsDefault
    _cudaCapabilityIsSupported
    _mkCudaVariant
    allowUnfreeCudaPredicate
    ;

  # See ./licenses.nix for documentation.
  licenses = import ./licenses.nix;

  # See ./meta.nix for documentation.
  inherit (import ./meta.nix { inherit _cuda lib; })
    _mkMetaBadPlatforms
@@ -30,6 +33,7 @@
    getNixSystems
    getRedistSystem
    mkRedistUrl
    selectManifests
    ;

  # See ./strings.nix for documentation.

55
pkgs/development/cuda-modules/_cuda/lib/licenses.nix
Normal file
@@ -0,0 +1,55 @@
{
  cudnn = {
    shortName = "cuDNN EULA";
    fullName = "cuDNN SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
    url = "https://docs.nvidia.com/deeplearning/cudnn/backend/latest/reference/eula.html";
    free = false;
    redistributable = false;
  };

  cusparse_lt = {
    shortName = "cuSPARSELt EULA";
    fullName = "cuSPARSELt SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
    url = "https://docs.nvidia.com/cuda/cusparselt/license.html";
    free = false;
    redistributable = false;
  };

  cutensor = {
    shortName = "cuTENSOR EULA";
    fullName = "cuTENSOR SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
    url = "https://docs.nvidia.com/cuda/cutensor/latest/license.html";
    free = false;
    redistributable = false;
  };

  tensorrt = {
    shortName = "TensorRT EULA";
    fullName = "TensorRT SUPPLEMENT TO SOFTWARE LICENSE AGREEMENT FOR NVIDIA SOFTWARE DEVELOPMENT KITS";
    url = "https://docs.nvidia.com/deeplearning/tensorrt/latest/reference/sla.html";
    free = false;
    redistributable = false;
  };

  math_sdk_sla = {
    shortName = "NVIDIA Math SDK SLA";
    fullName = "LICENSE AGREEMENT FOR NVIDIA MATH LIBRARIES SOFTWARE DEVELOPMENT KITS";
    url = "https://developer.download.nvidia.com/compute/mathdx/License.txt";
    free = false;
    redistributable = false;
  };

  # "license": "CUDA Toolkit",
  # "license": "NVIDIA Driver",
  # "license": "NVIDIA Proprietary",
  # "license": "NVIDIA",
  # "license": "NVIDIA SLA",
  # "license": "cuDSS library",
  # "license": "cuQuantum",
  # "license": "libcusolvermp library",
  # "license": "NPP PLUS library",
  # "license": "nvCOMP library",
  # "license": "nvJPEG 2K",
  # "license": "NVPL",
}
@@ -2,7 +2,7 @@
{
  /**
    Returns a list of bad platforms for a given package if assertions in `finalAttrs.passthru.platformAssertions`
    fail, optionally logging evaluation warnings for each reason.
    fail, optionally logging evaluation warnings with `builtins.traceVerbose` for each reason.

    NOTE: No guarantees are made about this function's stability. You may use it at your own risk.

@@ -12,31 +12,39 @@
    # Type

    ```
    _mkMetaBadPlatforms :: (warn :: Bool) -> (finalAttrs :: AttrSet) -> List String
    _mkMetaBadPlatforms :: (finalAttrs :: AttrSet) -> List String
    ```

    # Inputs

    `finalAttrs`

    : The final attributes of the package
  */
  _mkMetaBadPlatforms =
    warn: finalAttrs:
    finalAttrs:
    let
      failedAssertionsString = _cuda.lib._mkFailedAssertionsString finalAttrs.passthru.platformAssertions;
      hasFailedAssertions = failedAssertionsString != "";
      finalStdenv = finalAttrs.finalPackage.stdenv;
    in
    lib.warnIf (warn && hasFailedAssertions)
      "Package ${finalAttrs.finalPackage.name} is unsupported on this platform due to the following failed assertions:${failedAssertionsString}"
      (
        lib.optionals hasFailedAssertions (
          lib.unique [
            finalStdenv.buildPlatform.system
            finalStdenv.hostPlatform.system
            finalStdenv.targetPlatform.system
          ]
        )
      )
      badPlatforms = lib.optionals hasFailedAssertions (
        lib.unique [
          finalStdenv.buildPlatform.system
          finalStdenv.hostPlatform.system
          finalStdenv.targetPlatform.system
        ]
      );
      handle =
        if hasFailedAssertions then
          builtins.traceVerbose "Package ${finalAttrs.finalPackage.name} is unsupported on this platform due to the following failed assertions:${failedAssertionsString}"
        else
          lib.id;
    in
    handle badPlatforms;

  /**
    Returns a boolean indicating whether the package is broken as a result of `finalAttrs.passthru.brokenAssertions`,
    optionally logging evaluation warnings for each reason.
    optionally logging evaluation warnings with `builtins.traceVerbose` for each reason.

    NOTE: No guarantees are made about this function's stability. You may use it at your own risk.

@@ -46,26 +54,25 @@
    # Type

    ```
    _mkMetaBroken :: (warn :: Bool) -> (finalAttrs :: AttrSet) -> Bool
    _mkMetaBroken :: (finalAttrs :: AttrSet) -> Bool
    ```

    # Inputs

    `warn`

    : A boolean indicating whether to log warnings

    `finalAttrs`

    : The final attributes of the package
  */
  _mkMetaBroken =
    warn: finalAttrs:
    finalAttrs:
    let
      failedAssertionsString = _cuda.lib._mkFailedAssertionsString finalAttrs.passthru.brokenAssertions;
      hasFailedAssertions = failedAssertionsString != "";
      handle =
        if hasFailedAssertions then
          builtins.traceVerbose "Package ${finalAttrs.finalPackage.name} is marked as broken due to the following failed assertions:${failedAssertionsString}"
        else
          lib.id;
    in
    lib.warnIf (warn && hasFailedAssertions)
      "Package ${finalAttrs.finalPackage.name} is marked as broken due to the following failed assertions:${failedAssertionsString}"
      hasFailedAssertions;
    handle hasFailedAssertions;
}

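The `meta.nix` change above swaps eager `lib.warnIf` logging for `builtins.traceVerbose`, so the diagnostics only print when Nix is invoked with `--trace-verbose`. The pattern in isolation, as a hedged sketch assuming a precomputed failed-assertions string:

```nix
let
  # Assumed example input; in meta.nix this comes from _mkFailedAssertionsString.
  failedAssertionsString = "\n- example assertion failed";
  hasFailedAssertions = failedAssertionsString != "";
  # builtins.traceVerbose message value: returns value, printing message only
  # under --trace-verbose; otherwise the handler is the identity function.
  handle =
    if hasFailedAssertions then
      builtins.traceVerbose "marked as broken due to:${failedAssertionsString}"
    else
      (x: x);
in
handle hasFailedAssertions
```

Because `traceVerbose` is applied as a function to the result, the value returned (`hasFailedAssertions` here, or `badPlatforms` in `_mkMetaBadPlatforms`) is unchanged either way; only the logging behavior differs.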
@@ -111,26 +111,39 @@
  /**
    Maps a Nix system to a NVIDIA redistributable system.

    NOTE: We swap out the default `linux-sbsa` redist (for server-grade ARM chips) with the `linux-aarch64` redist
    (which is for Jetson devices) if we're building any Jetson devices. Since both are based on aarch64, we can only
    have one or the other, otherwise there's an ambiguity as to which should be used.
    NOTE: Certain Nix systems can map to multiple NVIDIA redistributable systems. In particular, ARM systems can map to
    either `linux-sbsa` (for server-grade ARM chips) or `linux-aarch64` (for Jetson devices). Complicating matters
    further, as of CUDA 13.0, Jetson Thor devices use `linux-sbsa` instead of `linux-aarch64`. (It is unknown whether
    NVIDIA plans to make the Orin series use `linux-sbsa` as well for the CUDA 13.0 release.)

    NOTE: This function *will* be called by unsupported systems because `cudaPackages` is evaluated on all systems. As
    such, we need to handle unsupported systems gracefully.

    NOTE: This function does not check whether the provided CUDA capabilities are valid for the given CUDA version.
    The heavy validation work to ensure consistency of CUDA capabilities is performed by backendStdenv.

    # Type

    ```
    getRedistSystem :: (hasJetsonCudaCapability :: Bool) -> (nixSystem :: String) -> String
    getRedistSystem ::
      { cudaCapabilities :: List String
      , cudaMajorMinorVersion :: String
      , system :: String
      }
      -> String
    ```

    # Inputs

    `hasJetsonCudaCapability`
    `cudaCapabilities`

    : If configured for a Jetson device
    : The list of CUDA capabilities to build GPU code for

    `nixSystem`
    `cudaMajorMinorVersion`

    : The major and minor version of CUDA (e.g. "12.6")

    `system`

    : The Nix system

@@ -140,22 +153,53 @@
    ## `cudaLib.getRedistSystem` usage examples

    ```nix
    getRedistSystem true "aarch64-linux"
    getRedistSystem {
      cudaCapabilities = [ "8.7" ];
      cudaMajorMinorVersion = "12.6";
      system = "aarch64-linux";
    }
    => "linux-aarch64"
    ```

    ```nix
    getRedistSystem false "aarch64-linux"
    getRedistSystem {
      cudaCapabilities = [ "11.0" ];
      cudaMajorMinorVersion = "13.0";
      system = "aarch64-linux";
    }
    => "linux-sbsa"
    ```

    ```nix
    getRedistSystem {
      cudaCapabilities = [ "8.0" "8.9" ];
      cudaMajorMinorVersion = "12.6";
      system = "aarch64-linux";
    }
    => "linux-sbsa"
    ```
    :::
  */
  getRedistSystem =
    hasJetsonCudaCapability: nixSystem:
    if nixSystem == "x86_64-linux" then
    {
      cudaCapabilities,
      cudaMajorMinorVersion,
      system,
    }:
    if system == "x86_64-linux" then
      "linux-x86_64"
    else if nixSystem == "aarch64-linux" then
      if hasJetsonCudaCapability then "linux-aarch64" else "linux-sbsa"
    else if system == "aarch64-linux" then
      # If all the Jetson devices are at least 10.1 (Thor, CUDA 12.9; CUDA 13.0 and later use 11.0 for Thor), then
      # we've got SBSA.
      if
        lib.all (
          cap: _cuda.db.cudaCapabilityToInfo.${cap}.isJetson -> lib.versionAtLeast cap "10.1"
        ) cudaCapabilities
      then
        "linux-sbsa"
      # Otherwise we've got some Jetson devices older than Thor and need to use linux-aarch64.
      else
        "linux-aarch64"
    else
      "unsupported";

@@ -193,4 +237,34 @@
      )
      ++ [ relativePath ]
    );

  /**
    Function which accepts an attribute set mapping redistributable name to version and retrieves the corresponding
    collection of manifests from `_cuda.manifests`. Additionally, the version provided is used to populate the
    `release_label` field in the corresponding manifest if it is missing.

    It is an error to provide a redistributable name and version for which there is no corresponding manifest.

    # Type

    ```
    selectManifests :: (versions :: AttrSet RedistName Version) -> AttrSet RedistName Manifest
    ```

    # Inputs

    `versions`

    : An attribute set mapping redistributable name to manifest version
  */
  selectManifests = lib.mapAttrs (
    name: version:
    let
      manifest = _cuda.manifests.${name}.${version};
    in
    manifest
    // {
      release_label = manifest.release_label or version;
    }
  );
}

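The new `selectManifests` simply indexes `_cuda.manifests` by name and version, backfilling `release_label` when the manifest omits it. Its behavior can be sketched against a toy manifests set (the `manifests` value below is a hypothetical stand-in, not the real `_cuda.manifests`):

```nix
let
  lib = import <nixpkgs/lib>;
  # Hypothetical stand-in for _cuda.manifests; note the missing release_label.
  manifests.cudnn."9.13.0" = { release_product = "cudnn"; };
  selectManifests = lib.mapAttrs (
    name: version:
    let
      manifest = manifests.${name}.${version};
    in
    manifest // { release_label = manifest.release_label or version; }
  );
in
selectManifests { cudnn = "9.13.0"; }
# => { cudnn = { release_product = "cudnn"; release_label = "9.13.0"; }; }
```

Requesting a name/version pair without a manifest fails with the usual missing-attribute evaluation error, which matches the documented "it is an error" contract.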
@@ -0,0 +1,5 @@
# cublasmp

Link: <https://developer.download.nvidia.com/compute/cublasmp/redist/>

Requirements: <https://docs.nvidia.com/cuda/cublasmp/getting_started/index.html#hardware-and-software-requirements>
@@ -0,0 +1,43 @@
{
  "release_date": "2025-09-09",
  "release_label": "0.6.0",
  "release_product": "cublasmp",
  "libcublasmp": {
    "name": "NVIDIA cuBLASMp library",
    "license": "cuBLASMp library",
    "license_path": "libcublasmp/LICENSE.txt",
    "version": "0.6.0.84",
    "linux-x86_64": {
      "cuda12": {
        "relative_path": "libcublasmp/linux-x86_64/libcublasmp-linux-x86_64-0.6.0.84_cuda12-archive.tar.xz",
        "sha256": "214a439031cc53be7d02961651e5e6ee520d80ab09b772d5a470e678477a6c57",
        "md5": "2a6a91fd58b90a16a1c2b3c3e4d2bdce",
        "size": "4324732"
      },
      "cuda13": {
        "relative_path": "libcublasmp/linux-x86_64/libcublasmp-linux-x86_64-0.6.0.84_cuda13-archive.tar.xz",
        "sha256": "f3892486ac72649ab5e140fd1466421e5638ce23a56a5360a42f32450fcfbf83",
        "md5": "b3f96dce5e52f432a36e0a6a006f6b27",
        "size": "4815848"
      }
    },
    "cuda_variant": [
      "12",
      "13"
    ],
    "linux-sbsa": {
      "cuda12": {
        "relative_path": "libcublasmp/linux-sbsa/libcublasmp-linux-sbsa-0.6.0.84_cuda12-archive.tar.xz",
        "sha256": "6af07f02ed01eee761509ad5c733a7196520f09ce036d5d047f38a1768287080",
        "md5": "f6efeba7b2e1ae8b164a69f208c5a53b",
        "size": "4273376"
      },
      "cuda13": {
        "relative_path": "libcublasmp/linux-sbsa/libcublasmp-linux-sbsa-0.6.0.84_cuda13-archive.tar.xz",
        "sha256": "9c3ea75c2a2705cb415d37316a6f540dbeb021ac3dc7bf0404dac314eb098aa0",
        "md5": "35bac35e00eb29a86e54bb4fb703d258",
        "size": "4751836"
      }
    }
  }
}
@@ -0,0 +1,5 @@
# cuda

Link: <https://developer.download.nvidia.com/compute/cuda/redist/>

Requirements: <https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-toolkit-major-component-versions>
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,8 @@
# cudnn

Link: <https://developer.download.nvidia.com/compute/cudnn/redist/>

Requirements: <https://docs.nvidia.com/deeplearning/cudnn/backend/v9.8.0/reference/support-matrix.html#gpu-cuda-toolkit-and-cuda-driver-requirements>

8.9.7 is the latest release from the 8.x series and supports everything but Jetson.
8.9.5 is the latest release from the 8.x series that supports Jetson.
@@ -0,0 +1,139 @@
{
  "release_date": "2024-03-15",
  "release_label": "8.9.5",
  "release_product": "cudnn",
  "cudnn": {
    "name": "NVIDIA CUDA Deep Neural Network library",
    "license": "cudnn",
    "license_path": "cudnn/LICENSE.txt",
    "version": "8.9.5.30",
    "cuda_variant": [
      "11",
      "12"
    ],
    "linux-x86_64": {
      "cuda11": {
        "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "bbe10e3c08cd7e4aea1012213781e4fe270e1c908263444f567cafefb2cc6525",
        "md5": "300aaaa05ca6d12b3ac058fd0bd70c6b",
        "size": "857471712"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "2a2eb89a2ab51071151c6082f1e816c702167a711a9372f9f73a7b5c4b06e01a",
        "md5": "afb13f2d7377f4a16b54a6acc373bbd9",
        "size": "861488496"
      }
    },
    "linux-ppc64le": {
      "cuda11": {
        "relative_path": "cudnn/linux-ppc64le/cudnn-linux-ppc64le-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "d678f8b2903b95de7eeaef38890c5674705864ea049b2b63e90565f2c0ea682f",
        "md5": "daed75ed0c9f4dcc5b9521d2a833be3d",
        "size": "860245008"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-ppc64le/cudnn-linux-ppc64le-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "38388ec3c99c6646aaf5c707985cd35e25c67f653d780c4081c2df5557ab665f",
        "md5": "8893605a415202937ad9f2587e7a16ce",
        "size": "862346664"
      }
    },
    "linux-sbsa": {
      "cuda11": {
        "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "50e3d38cb70a53bb059da0aefc60e1460729c6988e2697200c43b80d218e556c",
        "md5": "3479f3fdbda83cd6df104851dc1f940a",
        "size": "857816268"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "107d3dbec6345e1a3879a151cf3cbf6a2d96162c7b8eeb2ff85b84a67e79e2d1",
        "md5": "90715ef0e48f6f153587ee59df7c1a87",
        "size": "859978180"
      }
    },
    "windows-x86_64": {
      "cuda11": {
        "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-8.9.5.30_cuda11-archive.zip",
        "sha256": "e42aaa92203cc101a1619656ae50852a0d818a06ca99684c5f51ba95bd7a7cf9",
        "md5": "d2f4fbc710da61253570306ed2e63ac4",
        "size": "701179425"
      },
      "cuda12": {
        "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-8.9.5.30_cuda12-archive.zip",
        "sha256": "be76d407ce0e609f94688aa45bfd5648fd21a4d9f84a588fad10aa4802ca1301",
        "md5": "54146d8da6df9da3ef125171da959dcf",
        "size": "705347747"
      }
    },
    "linux-aarch64": {
      "cuda12": {
        "relative_path": "cudnn/linux-aarch64/cudnn-linux-aarch64-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "0491f7b02f55c22077eb678bf314c1f917524bd507cf5b658239bf98a47233a1",
        "md5": "fffd4a177c3e2ebaaceb83131d69e4e3",
        "size": "891432124"
      }
    }
  },
  "cudnn_samples": {
    "name": "NVIDIA cuDNN samples",
    "license": "cudnn",
    "license_path": "cudnn_samples/LICENSE.txt",
    "version": "8.9.5.30",
    "cuda_variant": [
      "11",
      "12"
    ],
    "linux-x86_64": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-x86_64/cudnn_samples-linux-x86_64-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "9c0d951788461f6e9e000209cf4b100839effd1fd300371dfa6929552c8b1d4e",
        "md5": "dcbdaaa0171aa6b8331fcd6218558953",
        "size": "1665468"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-x86_64/cudnn_samples-linux-x86_64-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "441d262d82888c6ca5a02c8ad0f07c3a876be7b473bc2ec2638d86ea2e80a884",
        "md5": "a2ca6bf77b610024aff5c1a7ee53ea01",
        "size": "1664272"
      }
    },
    "linux-ppc64le": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-ppc64le/cudnn_samples-linux-ppc64le-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "ded84be373031ff843c0b7118e9fdb48b06ec763eae3c76cb9c57e121b47c228",
        "md5": "d4d76362cf7ba0a0711088c38a3e17a7",
        "size": "1666372"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-ppc64le/cudnn_samples-linux-ppc64le-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "275d6a6671c210d4c4a92240de24cba0c5ca17e9007f91656b18bbff81621f81",
        "md5": "b13c3befd24473ad536ef6ea3f4dc939",
        "size": "1666788"
      }
    },
    "linux-sbsa": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-sbsa/cudnn_samples-linux-sbsa-8.9.5.30_cuda11-archive.tar.xz",
        "sha256": "fa2150dff6f574fb2927bfd2d10b5c2e2e90603f59d3d3371eaa41f2e9528c74",
        "md5": "80783b38089b6573943959e873693f0a",
        "size": "1665660"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-sbsa/cudnn_samples-linux-sbsa-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "af98dec9cf613cb7f67e27f5a5da24fc183996fc25a875aa0a7dc2914c986fe3",
        "md5": "5f5b67f5d2862190ae9440ca7041b7a5",
        "size": "1668336"
      }
    },
    "linux-aarch64": {
      "cuda12": {
        "relative_path": "cudnn_samples/linux-aarch64/cudnn_samples-linux-aarch64-8.9.5.30_cuda12-archive.tar.xz",
        "sha256": "044c0d4436e1ecff6785a8bacf45cf2b5d504eb1c04bb9617aed86bfea77e45f",
        "md5": "ad2c201cf63561b5f0ddf505706eed97",
        "size": "1663868"
      }
    }
  }
}
@@ -0,0 +1,123 @@
{
  "release_date": "2024-03-15",
  "release_label": "8.9.7",
  "release_product": "cudnn",
  "cudnn": {
    "name": "NVIDIA CUDA Deep Neural Network library",
    "license": "cudnn",
    "license_path": "cudnn/LICENSE.txt",
    "version": "8.9.7.29",
    "cuda_variant": [
      "11",
      "12"
    ],
    "linux-x86_64": {
      "cuda11": {
        "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "a3e2509028cecda0117ce5a0f42106346e82e86d390f4bb9475afc976c77402e",
        "md5": "9ee28df53dc5f83d97f5406f880d3953",
        "size": "860967256"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "475333625c7e42a7af3ca0b2f7506a106e30c93b1aa0081cd9c13efb6e21e3bb",
        "md5": "046e32d5ab0fdc56878e9b33f3a6883d",
        "size": "864984964"
      }
    },
    "linux-ppc64le": {
      "cuda11": {
        "relative_path": "cudnn/linux-ppc64le/cudnn-linux-ppc64le-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "f23fd7d59f9d4f743fa926f317dab0d37f6ea21edb2726ceb607bea45b0f9f36",
        "md5": "44d8f80a90b6ba44379727a49a75b1fc",
        "size": "863759764"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-ppc64le/cudnn-linux-ppc64le-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "8574d291b299f9cc0134304473c9933bd098cc717e8d0876f4aba9f9eebe1b76",
        "md5": "7acbeb71d48373ea343c13028172c783",
        "size": "865846096"
      }
    },
    "linux-sbsa": {
      "cuda11": {
        "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "91c37cfb458f541419e98510f13aaf5975c0232c613e18b776385490074eea17",
        "md5": "b4ae46fb80f2f8ef283d038585bbb122",
        "size": "861355724"
      },
      "cuda12": {
        "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "e98b7c80010785e5d5ca01ee4ce9b5b0c8c73587ea6f8648be34d3f8d1d47bd1",
        "md5": "52a436f378d20b8e1e1a8a173a8bdeda",
        "size": "863497272"
      }
    },
    "windows-x86_64": {
      "cuda11": {
        "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-8.9.7.29_cuda11-archive.zip",
        "sha256": "5e45478efe71a96329e6c0d2a3a2f79c747c15b2a51fead4b84c89b02cbf1671",
        "md5": "7dddb764c0a608ac23e72761be4c92c0",
        "size": "704240064"
      },
      "cuda12": {
        "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-8.9.7.29_cuda12-archive.zip",
        "sha256": "94fc17af8e83a26cc5d231ed23981b28c29c3fc2e87b1844ea3f46486f481df5",
        "md5": "30f8a180be36451511306f7837270214",
        "size": "708408517"
      }
    }
  },
  "cudnn_samples": {
    "name": "NVIDIA cuDNN samples",
    "license": "cudnn",
    "license_path": "cudnn_samples/LICENSE.txt",
    "version": "8.9.7.29",
    "cuda_variant": [
      "11",
      "12"
    ],
    "linux-x86_64": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-x86_64/cudnn_samples-linux-x86_64-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "8b17f56e9d654d9af3d7711645811fb6f240f53bc2d62c00c063a6d452d80091",
        "md5": "b5410e97c73ea206b3d8939ce6ff8832",
        "size": "1664448"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-x86_64/cudnn_samples-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "d3a9a4f3f74b04c393bb9152fe3a53ac1514da679ca57858d69f64243debb905",
        "md5": "348306c65eb4c865fba72332fa7a5f33",
        "size": "1665932"
      }
    },
    "linux-ppc64le": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-ppc64le/cudnn_samples-linux-ppc64le-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "29a18538f13a63ee54cd795c78f64a1ca45df2de0b140cf095281a16d1d4d4e3",
        "md5": "9f398a26a5c7913faf58e8ee3bd9c6ff",
        "size": "1665244"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-ppc64le/cudnn_samples-linux-ppc64le-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "80664b7a6abed08633e0dc238f47f26aaaa0add5571bf6f4f4e475686a702c8d",
        "md5": "6aa5e8e801b5f730a103aaf52c66485e",
        "size": "1668400"
      }
    },
    "linux-sbsa": {
      "cuda11": {
        "relative_path": "cudnn_samples/linux-sbsa/cudnn_samples-linux-sbsa-8.9.7.29_cuda11-archive.tar.xz",
        "sha256": "dd7b618f4af89fff9cdad9cd87dbc4380c7f6120460c174bd10fef6342099915",
        "md5": "841a6dde4037a39f7ddd0fb92f245c9d",
        "size": "1666176"
      },
      "cuda12": {
        "relative_path": "cudnn_samples/linux-sbsa/cudnn_samples-linux-sbsa-8.9.7.29_cuda12-archive.tar.xz",
        "sha256": "4d84211d62e636ad3728674e55e9ce91e29c78d071fcb78453f8a71902758836",
        "md5": "0e80992ee19918efd714199f41cbe24b",
        "size": "1664288"
      }
    }
  }
}
@@ -0,0 +1,121 @@
{
    "release_date": "2025-09-04",
    "release_label": "9.13.0",
    "release_product": "cudnn",
    "cudnn": {
        "name": "NVIDIA CUDA Deep Neural Network library",
        "license": "cudnn",
        "license_path": "cudnn/LICENSE.txt",
        "version": "9.13.0.50",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-9.13.0.50_cuda12-archive.tar.xz",
                "sha256": "28c5c59316464434eb7bafe75fc36285160c28d559a50056ded13394955d1f7d",
                "md5": "55633b6a710506c4e4d704aef42a5fdd",
                "size": "873944756"
            },
            "cuda13": {
                "relative_path": "cudnn/linux-x86_64/cudnn-linux-x86_64-9.13.0.50_cuda13-archive.tar.xz",
                "sha256": "02f47d9456773c80d97ed245efd9eb22bb985dcfdb74559213536035291e7a01",
                "md5": "ff33c2783f44d10874f93d32d13764ab",
                "size": "641302680"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-9.13.0.50_cuda12-archive.tar.xz",
                "sha256": "28f3f86aa102870c5d6804bca1bb1a0dcc1df69d0235c0a4120ae0aa6d14ffc7",
                "md5": "9bd5d15c46ccc94e7191e47446f22e62",
                "size": "873177712"
            },
            "cuda13": {
                "relative_path": "cudnn/linux-sbsa/cudnn-linux-sbsa-9.13.0.50_cuda13-archive.tar.xz",
                "sha256": "78931057322ab87b72b0a988462568412edfed5bdef1eaf717961351b53cb3d0",
                "md5": "aadbfef24b136ce2385170c19213c4fa",
                "size": "766735516"
            }
        },
        "windows-x86_64": {
            "cuda12": {
                "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-9.13.0.50_cuda12-archive.zip",
                "sha256": "827e294e13e352586a5bcf5b2188025fe5cba590df69b8010e97dd30b9a0266f",
                "md5": "e08c945fd8f570e6779ce9cb216bcf20",
                "size": "621840857"
            },
            "cuda13": {
                "relative_path": "cudnn/windows-x86_64/cudnn-windows-x86_64-9.13.0.50_cuda13-archive.zip",
                "sha256": "6314aa4ca21e727bc012fecf2bf7192276ac7b648e1f709f52947496d70808dd",
                "md5": "d5fa2f7d1596bc6bbabb6888f9139fd4",
                "size": "337906764"
            }
        },
        "linux-aarch64": {
            "cuda12": {
                "relative_path": "cudnn/linux-aarch64/cudnn-linux-aarch64-9.13.0.50_cuda12-archive.tar.xz",
                "sha256": "c4b47fe8b1f936aa5cbc312f2d0707990fe5f55693fb5640c7141d301aa7db4c",
                "md5": "4a00c0ae53ad6fdb761d7ab56993863c",
                "size": "940699120"
            },
            "cuda13": {
                "relative_path": "cudnn/linux-aarch64/cudnn-linux-aarch64-9.13.0.50_cuda13-archive.tar.xz",
                "sha256": "f18ee7d3fd7b1d136602eb2f5d5d59abe5445db384ad308d22fbeeee11ef151e",
                "md5": "365f06153492370374e5e7b20d4646eb",
                "size": "690215112"
            }
        }
    },
    "cudnn_jit": {
        "name": "NVIDIA CUDA Deep Neural Network Graph JIT library",
        "license": "cudnn",
        "license_path": "cudnn_jit/LICENSE.txt",
        "version": "9.13.0.50",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "cudnn_jit/linux-x86_64/cudnn_jit-linux-x86_64-9.13.0.50_cuda12-archive.tar.xz",
                "sha256": "eb22533f125e4315de501112e2d0f0c001ba50b8f872f2bf7a12d545f609cb59",
                "md5": "07023563efe85b7aab07b2982e541872",
                "size": "13417344"
            },
            "cuda13": {
                "relative_path": "cudnn_jit/linux-x86_64/cudnn_jit-linux-x86_64-9.13.0.50_cuda13-archive.tar.xz",
                "sha256": "cd76ea09574f3a1c07728c15fc794dbeed181b57b25e02768c5918b0353e658d",
                "md5": "e9cc3e0e19d4c88158ef48193905b94b",
                "size": "12722420"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "cudnn_jit/linux-sbsa/cudnn_jit-linux-sbsa-9.13.0.50_cuda12-archive.tar.xz",
                "sha256": "f8c91dca056c0128f0ed15296de4b9fcf1cd503241cec02f6e4b3e2965e1be9e",
                "md5": "adf011a7b7bcbd0c9be0abd546a49e22",
                "size": "13047884"
            },
            "cuda13": {
                "relative_path": "cudnn_jit/linux-sbsa/cudnn_jit-linux-sbsa-9.13.0.50_cuda13-archive.tar.xz",
                "sha256": "d5ce2fb281457eddf6319e517a797d009a4f5db55a565a753f0ba53e541163b2",
                "md5": "d61024f0d19e5bcdad3e969c79f21d62",
                "size": "12812136"
            }
        }
    },
    "cudnn_samples": {
        "name": "NVIDIA cuDNN samples",
        "license": "cudnn",
        "license_path": "cudnn_samples/LICENSE.txt",
        "version": "9.13.0.50",
        "source": {
            "relative_path": "cudnn_samples/source/cudnn_samples-source-9.13.0.50-archive.tar.xz",
            "sha256": "34dd694b6a1de34fca31a89b0b41f1f5edbf2dddb5822dda193332be6a2f508d",
            "md5": "2750b23e2e468d8f5ff0b429d1e60d32",
            "size": "1666920"
        }
    }
}
@@ -0,0 +1,5 @@
# cudss

Link: <https://developer.download.nvidia.com/compute/cudss/redist/>

Requirements: <https://docs.nvidia.com/cuda/cudss/release_notes.html>
@@ -0,0 +1,46 @@
{
    "release_date": "2025-06-16",
    "release_label": "0.6.0",
    "release_product": "cudss",
    "libcudss": {
        "name": "NVIDIA cuDSS library",
        "license": "cuDSS library",
        "license_path": "libcudss/LICENSE.txt",
        "version": "0.6.0.5",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "libcudss/linux-x86_64/libcudss-linux-x86_64-0.6.0.5_cuda12-archive.tar.xz",
                "sha256": "159ce1d4e3e4bba13b0bd15cf943e44b869c53b7a94f9bac980768c927f02e75",
                "md5": "4ac17f5b35a4ecc550c4d7c479a5c5b5",
                "size": "68957176"
            }
        },
        "cuda_variant": [
            "12"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "libcudss/linux-sbsa/libcudss-linux-sbsa-0.6.0.5_cuda12-archive.tar.xz",
                "sha256": "b56cd0841c543bb81b2665063967f56cf3a3a22a445ddf1642c7f765f2059b42",
                "md5": "490582492aceea286eb6d961d1a55beb",
                "size": "68786208"
            }
        },
        "windows-x86_64": {
            "cuda12": {
                "relative_path": "libcudss/windows-x86_64/libcudss-windows-x86_64-0.6.0.5_cuda12-archive.zip",
                "sha256": "45319317d9f67fecc9af7e5cf162cb46111f5d35b06871c147fa8f030d7cecc5",
                "md5": "c1036a4cbadc7b201e08acaac13fcac6",
                "size": "50807624"
            }
        },
        "linux-aarch64": {
            "cuda12": {
                "relative_path": "libcudss/linux-aarch64/libcudss-linux-aarch64-0.6.0.5_cuda12-archive.tar.xz",
                "sha256": "e6f5d5122d735f9dbfd42c9eaba067a557a5613ee4a6001806935de11aff4b09",
                "md5": "34fd4b0843da02ebaa76f5711e1b63de",
                "size": "32347256"
            }
        }
    }
}
@@ -0,0 +1,5 @@
# cuquantum

Link: <https://developer.download.nvidia.com/compute/cuquantum/redist/>

Requirements: <https://docs.nvidia.com/cuda/cuquantum/latest/getting-started/index.html#dependencies>
@@ -0,0 +1,43 @@
{
    "release_date": "2025-09-08",
    "release_label": "25.09.0",
    "release_product": "cuquantum",
    "cuquantum": {
        "name": "NVIDIA cuQuantum",
        "license": "cuQuantum",
        "license_path": "cuquantum/LICENSE.txt",
        "version": "25.09.0.7",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "cuquantum/linux-x86_64/cuquantum-linux-x86_64-25.09.0.7_cuda12-archive.tar.xz",
                "sha256": "fec8fcceeb9b62f2dff37834a9cd44c6ab05486dec0ebc5ae3452dd8d6390ea0",
                "md5": "b57e63b14a2a83115fc11ebb0fa93f61",
                "size": "115548260"
            },
            "cuda13": {
                "relative_path": "cuquantum/linux-x86_64/cuquantum-linux-x86_64-25.09.0.7_cuda13-archive.tar.xz",
                "sha256": "3f1e706c0ee582341ec4f103d37c92d90ef16d1cfac42f502c44b2feb6861dd9",
                "md5": "9e11c71d25231c962b8df11adb4e570b",
                "size": "119378588"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "cuquantum/linux-sbsa/cuquantum-linux-sbsa-25.09.0.7_cuda12-archive.tar.xz",
                "sha256": "b63237e122a32f2576118297c291597815c9c3573daf5c9b4592ada7af13fc17",
                "md5": "5b7d6dcf44d2e80eb155a22574b71b3a",
                "size": "115171716"
            },
            "cuda13": {
                "relative_path": "cuquantum/linux-sbsa/cuquantum-linux-sbsa-25.09.0.7_cuda13-archive.tar.xz",
                "sha256": "629d3e6749ac49e96de4469477d3b0172581896c7273890bc355420b344fac87",
                "md5": "624ddcbf89311a43f2d7fb6671e37c6b",
                "size": "119179132"
            }
        }
    }
}
@@ -0,0 +1,5 @@
# cusolvermp

Link: <https://developer.download.nvidia.com/compute/cusolvermp/redist/>

Requirements: <https://docs.nvidia.com/cuda/cusolvermp/getting_started/index.html#hardware-and-software-requirements>
@@ -0,0 +1,43 @@
{
    "release_date": "2025-08-12",
    "release_label": "0.7.0",
    "release_product": "cusolvermp",
    "libcusolvermp": {
        "name": "NVIDIA libcusolvermp library",
        "license": "libcusolvermp library",
        "license_path": "libcusolvermp/LICENSE.txt",
        "version": "0.7.0.833",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "libcusolvermp/linux-x86_64/libcusolvermp-linux-x86_64-0.7.0.833_cuda12-archive.tar.xz",
                "sha256": "5383f35eefd45cc0a5cbd173a4a353941f02b912eb2f8d3a85c30345054df5e9",
                "md5": "f9cf72595e8ff6d72a68b4a23ccc9973",
                "size": "9293812"
            },
            "cuda13": {
                "relative_path": "libcusolvermp/linux-x86_64/libcusolvermp-linux-x86_64-0.7.0.833_cuda13-archive.tar.xz",
                "sha256": "4a4bf2d08dad3a276b33f9356f8cd8b5b2a70201257a277c83bb3cfdb7a7107a",
                "md5": "a95c2c6a6f8d9c07ee99ca1545a71967",
                "size": "8014464"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "libcusolvermp/linux-sbsa/libcusolvermp-linux-sbsa-0.7.0.833_cuda12-archive.tar.xz",
                "sha256": "a0012c5be7ac742a26cf8894bed3c703edea84eddf0d5dca42d35582622ffb9b",
                "md5": "626c1e35145fa495a7708c5fff007866",
                "size": "9214676"
            },
            "cuda13": {
                "relative_path": "libcusolvermp/linux-sbsa/libcusolvermp-linux-sbsa-0.7.0.833_cuda13-archive.tar.xz",
                "sha256": "51b80fc5cdeb197b3e9b1de393a8413943ccb2d0e7509c6a183816be83123260",
                "md5": "6338b4e581a076214581ec650f9eb92e",
                "size": "7605548"
            }
        }
    }
}
@@ -0,0 +1,7 @@
# cusparselt

Link: <https://developer.download.nvidia.com/compute/cusparselt/redist/>

Requirements: <https://docs.nvidia.com/cuda/cusparselt/getting_started.html#prerequisites>

NOTE: 0.7.1 only supports CUDA 12.8 and later.
@@ -0,0 +1,35 @@
{
    "release_date": "2024-10-15",
    "release_label": "0.6.3",
    "release_product": "cusparselt",
    "libcusparse_lt": {
        "name": "NVIDIA cuSPARSELt",
        "license": "cuSPARSELt",
        "license_path": "libcusparse_lt/LICENSE.txt",
        "version": "0.6.3.2",
        "linux-x86_64": {
            "relative_path": "libcusparse_lt/linux-x86_64/libcusparse_lt-linux-x86_64-0.6.3.2-archive.tar.xz",
            "sha256": "a2f856e78943f5c538bdef1c9edc64a5ed30bf8bb7d5fcb615c684ffe776cc31",
            "md5": "d074824e3dc6c382160873a8ef49c098",
            "size": "110698912"
        },
        "linux-sbsa": {
            "relative_path": "libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.6.3.2-archive.tar.xz",
            "sha256": "3e420ddbff4eb9ac603f57c7aa8b3d5271112816e244eb55ef9f30c4eb6a04b7",
            "md5": "dd6b0dd464bb8596950ab761890e1ae1",
            "size": "109919332"
        },
        "windows-x86_64": {
            "relative_path": "libcusparse_lt/windows-x86_64/libcusparse_lt-windows-x86_64-0.6.3.2-archive.zip",
            "sha256": "6d276e33a399008c22ffefd707eefe2f57ff2ff8f1dc1929d9e3e75d3c83562d",
            "md5": "95de6b57ceceb199f9b86bfbe5d2d394",
            "size": "328143559"
        },
        "linux-aarch64": {
            "relative_path": "libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.6.3.2-archive.tar.xz",
            "sha256": "91501d0c05d1ff0dd67399ecd7c1bf76a620e842dce54ae4c8a1f07cec0673e5",
            "md5": "7f00c8663678a97948bbd2e98b65a9fa",
            "size": "19000276"
        }
    }
}
@@ -0,0 +1,71 @@
{
    "release_date": "2025-09-04",
    "release_label": "0.8.1",
    "release_product": "cusparselt",
    "libcusparse_lt": {
        "name": "NVIDIA cuSPARSELt",
        "license": "cuSPARSELt",
        "license_path": "libcusparse_lt/LICENSE.txt",
        "version": "0.8.1.1",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "libcusparse_lt/linux-x86_64/libcusparse_lt-linux-x86_64-0.8.1.1_cuda12-archive.tar.xz",
                "sha256": "b34272e683e9f798435af05dc124657d1444cd0e13802c3d2f3152e31cd898a3",
                "md5": "8e6d6454a2ac514c592f18fe7e77f84c",
                "size": "311599728"
            },
            "cuda13": {
                "relative_path": "libcusparse_lt/linux-x86_64/libcusparse_lt-linux-x86_64-0.8.1.1_cuda13-archive.tar.xz",
                "sha256": "82dd3e5ebc199a27011f58857a80cd825e77bba634ab2286ba3d4e13115db89a",
                "md5": "90e40a8bffe304d14578eb8f2173dee1",
                "size": "317355908"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.8.1.1_cuda12-archive.tar.xz",
                "sha256": "e87c2e1a8615aa588864915a05c42309869c96d4046b07a50a7a729af2c1ff22",
                "md5": "504742ec72d48e75954d1a31e30aebcb",
                "size": "310432728"
            },
            "cuda13": {
                "relative_path": "libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.8.1.1_cuda13-archive.tar.xz",
                "sha256": "d3ce9fb25961540291c6dc6c7292a1ea7cb886590bf896fdc2564cb2a261a3de",
                "md5": "032ce0fc1decdca18652ce3fcf05b14e",
                "size": "425143780"
            }
        },
        "windows-x86_64": {
            "cuda12": {
                "relative_path": "libcusparse_lt/windows-x86_64/libcusparse_lt-windows-x86_64-0.8.1.1_cuda12-archive.zip",
                "sha256": "1a1a4be5c2da47e242d3fbab35b66077916aeb2b8175bc6a0a6691e11972951c",
                "md5": "78f5c274b42ff56b1b74427d89372a14",
                "size": "223735080"
            },
            "cuda13": {
                "relative_path": "libcusparse_lt/windows-x86_64/libcusparse_lt-windows-x86_64-0.8.1.1_cuda13-archive.zip",
                "sha256": "d83c3a9c34df98aa999e6a64278a36eb837b411af593d41fe74746a2915e379d",
                "md5": "ece8942a85c1706547da6c11ed4e48b2",
                "size": "156765306"
            }
        },
        "linux-aarch64": {
            "cuda12": {
                "relative_path": "libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.8.1.1_cuda12-archive.tar.xz",
                "sha256": "5426a897c73a9b98a83c4e132d15abc63dc4a00f7e38266e7b82c42cd58a01e1",
                "md5": "fffc61b32112a6c09046bfb3300c840f",
                "size": "127531168"
            },
            "cuda13": {
                "relative_path": "libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.8.1.1_cuda13-archive.tar.xz",
                "sha256": "0fcf5808f66c71f755b4a73af2e955292e4334fec6a851eea1ac2e20878602b7",
                "md5": "eb6eb4a96f82ff42e0be38f8486fb5d7",
                "size": "124707896"
            }
        }
    }
}
@@ -0,0 +1,5 @@
# cutensor

Link: <https://developer.download.nvidia.com/compute/cutensor/redist/>

Requirements: <https://docs.nvidia.com/cuda/cutensor/latest/index.html#support>
@@ -0,0 +1,57 @@
{
    "release_date": "2025-09-04",
    "release_label": "2.3.1",
    "release_product": "cutensor",
    "libcutensor": {
        "name": "NVIDIA cuTENSOR",
        "license": "cuTensor",
        "license_path": "libcutensor/LICENSE.txt",
        "version": "2.3.1.0",
        "linux-x86_64": {
            "cuda12": {
                "relative_path": "libcutensor/linux-x86_64/libcutensor-linux-x86_64-2.3.1.0_cuda12-archive.tar.xz",
                "sha256": "b1d7ad37b24cd66a446ae76ac33bd5125aa58007a604cb64fc9c014a8d685940",
                "md5": "061f0d50d4642431d284bdd8ad9c45a4",
                "size": "428900080"
            },
            "cuda13": {
                "relative_path": "libcutensor/linux-x86_64/libcutensor-linux-x86_64-2.3.1.0_cuda13-archive.tar.xz",
                "sha256": "9cb1125f7de01ca319b5c72edeb7169b679b72beacc90354fb18a14056e24372",
                "md5": "e94ea98ca6e88961a39d52da1c9470c7",
                "size": "386539432"
            }
        },
        "cuda_variant": [
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda12": {
                "relative_path": "libcutensor/linux-sbsa/libcutensor-linux-sbsa-2.3.1.0_cuda12-archive.tar.xz",
                "sha256": "f3763cdc7b03ca08e348efb6faa35d461537390ce7d059e279e415b33dad8291",
                "md5": "048b99ec5a968df2dd2a3f6bd26d6f63",
                "size": "427395404"
            },
            "cuda13": {
                "relative_path": "libcutensor/linux-sbsa/libcutensor-linux-sbsa-2.3.1.0_cuda13-archive.tar.xz",
                "sha256": "2e4c24bd1621dac7497ca9edf90bfc5dbdcc38490dafd35821066f96f2934aef",
                "md5": "05e13cda907130e2f77cf86bba05fa11",
                "size": "385157096"
            }
        },
        "windows-x86_64": {
            "cuda12": {
                "relative_path": "libcutensor/windows-x86_64/libcutensor-windows-x86_64-2.3.1.0_cuda12-archive.zip",
                "sha256": "8df4c1b856c40e72f41d5b92efee5729bf11f00a0e1e3afd546b0d35a360a6cb",
                "md5": "3aae5e991b780b9c484a4f77883c84f8",
                "size": "218109706"
            },
            "cuda13": {
                "relative_path": "libcutensor/windows-x86_64/libcutensor-windows-x86_64-2.3.1.0_cuda13-archive.zip",
                "sha256": "8f933694164e310183fffa9cf27d4db43b6edb0fff53b5aa0ab23e705807ac12",
                "md5": "d4a958abf9ba2f10234364a37302e1ee",
                "size": "150006958"
            }
        }
    }
}
19
pkgs/development/cuda-modules/_cuda/manifests/default.nix
Normal file
@@ -0,0 +1,19 @@
{ lib }:
lib.mapAttrs (
  redistName: _type:
  let
    redistManifestDir = ./. + "/${redistName}";
  in
  lib.concatMapAttrs (
    fileName: _type:
    let
      # Manifests all end in .json and are named "redistrib_<version>.json".
      version = lib.removePrefix "redistrib_" (lib.removeSuffix ".json" fileName);
    in
    # NOTE: We do not require that all files have this pattern, as manifest directories may contain documentation
    # and utility functions we should ignore.
    lib.optionalAttrs (version != fileName) {
      "${version}" = lib.importJSON (redistManifestDir + "/${fileName}");
    }
  ) (builtins.readDir redistManifestDir)
) (builtins.removeAttrs (builtins.readDir ./.) [ "default.nix" ])
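The discovery logic in `default.nix` above maps each redistributable's directory, keeps only files matching the `redistrib_<version>.json` naming convention, and keys the parsed manifest by version. As a rough analogue (a minimal Python sketch under the assumption of the same `manifests/<redistName>/redistrib_<version>.json` layout, not part of the commit):

```python
import json
from pathlib import Path


def load_manifests(manifests_dir):
    """Map redist name -> {version -> parsed manifest}.

    Mirrors the Nix expression: files whose name is unchanged after
    stripping the "redistrib_" prefix and ".json" suffix (e.g. README.md)
    are treated as documentation and ignored.
    """
    result = {}
    for redist_dir in Path(manifests_dir).iterdir():
        if not redist_dir.is_dir():
            continue  # analogous to removing default.nix from the listing
        versions = {}
        for entry in redist_dir.iterdir():
            name = entry.name
            version = name.removeprefix("redistrib_").removesuffix(".json")
            # In Nix: lib.optionalAttrs (version != fileName) { ... }
            if version != name:
                versions[version] = json.loads(entry.read_text())
        result[redist_dir.name] = versions
    return result
```

Like the Nix version, this deliberately does not require every file to match the pattern, since a manifest directory may also hold documentation.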
@@ -0,0 +1,5 @@
# nppplus

Link: <https://developer.download.nvidia.com/compute/nppplus/redist/>

Requirements: <https://docs.nvidia.com/cuda/nppplus/>
@@ -0,0 +1,71 @@
{
    "release_date": "2025-04-18",
    "release_label": "0.10.0",
    "release_product": "nppplus",
    "libnpp_plus": {
        "name": "NVIDIA NPP PLUS library",
        "license": "NPP PLUS library",
        "license_path": "libnpp_plus/LICENSE.txt",
        "version": "0.10.0.0",
        "linux-x86_64": {
            "cuda11": {
                "relative_path": "libnpp_plus/linux-x86_64/libnpp_plus-linux-x86_64-0.10.0.0_cuda11-archive.tar.xz",
                "sha256": "dfd0995068504ab9cd14767036680222f73d01a0e38ab9a53f9968d53f9745f7",
                "md5": "210b430b3b047956a43564f6102664a1",
                "size": "365025464"
            },
            "cuda12": {
                "relative_path": "libnpp_plus/linux-x86_64/libnpp_plus-linux-x86_64-0.10.0.0_cuda12-archive.tar.xz",
                "sha256": "0a2f1138b941160863eb1ec75a9f5072b330b234c287504bc5ca06130c5342b9",
                "md5": "71c7a351c31df634bb9c504ff8d3f9c1",
                "size": "365024708"
            }
        },
        "cuda_variant": [
            "11",
            "12"
        ],
        "linux-sbsa": {
            "cuda11": {
                "relative_path": "libnpp_plus/linux-sbsa/libnpp_plus-linux-sbsa-0.10.0.0_cuda11-archive.tar.xz",
                "sha256": "108a3126d07c7e4ce5ad0c85a9076bed6c2abeeec66b4c23a35d30d45ecf9110",
                "md5": "461dbe9d9dbdf1ed68b688634a3aaafa",
                "size": "363897792"
            },
            "cuda12": {
                "relative_path": "libnpp_plus/linux-sbsa/libnpp_plus-linux-sbsa-0.10.0.0_cuda12-archive.tar.xz",
                "sha256": "7aab2e7cab1fade883463bdb85a240f66d956395e3a90ca78b5bf413fa9a2fd9",
                "md5": "f0b042171c6c0290a9afaf1a5766994b",
                "size": "363891396"
            }
        },
        "windows-x86_64": {
            "cuda11": {
                "relative_path": "libnpp_plus/windows-x86_64/libnpp_plus-windows-x86_64-0.10.0.0_cuda11-archive.zip",
                "sha256": "333b9181526d8421b3445bc1c2b50ea8a0a8dd06412bf1c5dce3ed760659ec73",
                "md5": "13d9f5a50932e3c457fec8f38e0b914b",
                "size": "310918881"
            },
            "cuda12": {
                "relative_path": "libnpp_plus/windows-x86_64/libnpp_plus-windows-x86_64-0.10.0.0_cuda12-archive.zip",
                "sha256": "55f352478ce111187e2a1a2944f95eff009e156fc16793786f36ed6ed6e334d6",
                "md5": "5142a28d82bc8470d7a7eea698b10f56",
                "size": "310918881"
            }
        },
        "linux-aarch64": {
            "cuda11": {
                "relative_path": "libnpp_plus/linux-aarch64/libnpp_plus-linux-aarch64-0.10.0.0_cuda11-archive.tar.xz",
                "sha256": "fa09d1306eadd304913fa7e9904790e00c8acb05bdddd9832b2681d591449ecf",
                "md5": "0263b51a2cf5ca3b23464dfe5a688044",
                "size": "114297704"
            },
            "cuda12": {
                "relative_path": "libnpp_plus/linux-aarch64/libnpp_plus-linux-aarch64-0.10.0.0_cuda12-archive.tar.xz",
                "sha256": "570c4e0c871f9dd0a3e5959c1d144a53b232e8308b7d7f4d496df705f9aa2269",
                "md5": "44b1e1f7d5671aa08276bcabbe1cc458",
                "size": "114297660"
            }
        }
    }
}
@@ -0,0 +1,5 @@
# nvcomp

Link: <https://developer.download.nvidia.com/compute/nvcomp/redist/>

Requirements: <https://docs.nvidia.com/cuda/nvcomp/installation.html>
@@ -0,0 +1,76 @@
{
    "release_date": "2025-08-04",
    "release_label": "5.0.0.6",
    "release_product": "nvcomp",
    "nvcomp": {
        "name": "NVIDIA nvCOMP library",
        "license": "nvCOMP library",
        "license_path": "nvcomp/LICENSE.txt",
        "version": "5.0.0.6",
        "linux-x86_64": {
            "cuda11": {
                "relative_path": "nvcomp/linux-x86_64/nvcomp-linux-x86_64-5.0.0.6_cuda11-archive.tar.xz",
                "sha256": "64f5f7cc622f36006c503ee5a3f9d730b5c6cc49e4fab0fc0507c1272d5efa7b",
                "md5": "e7fcc75a1ed5c056211948c896dccf62",
                "size": "21128508"
            },
            "cuda12": {
                "relative_path": "nvcomp/linux-x86_64/nvcomp-linux-x86_64-5.0.0.6_cuda12-archive.tar.xz",
                "sha256": "40ac1d5f8c0a2719f11b21a4d31b6050343607dffd1401d1fe9a154800b56e46",
                "md5": "58436c6eb41b7a317ddf9131bfe7f92b",
                "size": "40211608"
            },
            "cuda13": {
                "relative_path": "nvcomp/linux-x86_64/nvcomp-linux-x86_64-5.0.0.6_cuda13-archive.tar.xz",
                "sha256": "4166e7c3825fa90139d50042154438ba06ea493985aeb7968fc1ad0d5fa5a22a",
                "md5": "abb06ec210645ce491d66fdf26f89a35",
                "size": "37072544"
            }
        },
        "cuda_variant": [
            "11",
            "12",
            "13"
        ],
        "linux-sbsa": {
            "cuda11": {
                "relative_path": "nvcomp/linux-sbsa/nvcomp-linux-sbsa-5.0.0.6_cuda11-archive.tar.xz",
                "sha256": "e98a20570e56ad94709c5014960c9f1fa9b4b5a7eb132dede85dd8ffd6c5f3f8",
                "md5": "d80c48f270668e50e61e718b13643080",
                "size": "21312128"
            },
            "cuda12": {
                "relative_path": "nvcomp/linux-sbsa/nvcomp-linux-sbsa-5.0.0.6_cuda12-archive.tar.xz",
                "sha256": "e57e658e35f6b266399ca2286e9439e5e9c9f3db907a718c55a07e6338b1c5bf",
                "md5": "a32f79f2cef263a7b304d681a3076af5",
                "size": "40157632"
            },
            "cuda13": {
                "relative_path": "nvcomp/linux-sbsa/nvcomp-linux-sbsa-5.0.0.6_cuda13-archive.tar.xz",
                "sha256": "26f6189f302affba7a3df726164a809aa7a5118560283627bbaa9aaa860cbc96",
                "md5": "57c04016e2506a63947e5ccedb3a0be3",
                "size": "37327100"
            }
        },
        "windows-x86_64": {
            "cuda11": {
                "relative_path": "nvcomp/windows-x86_64/nvcomp-windows-x86_64-5.0.0.6_cuda11-archive.zip",
                "sha256": "5c2e1ee55398f47d28806eb7c53aca33b9e22d6d5b3acec86bbc4253c7e6d1d3",
                "md5": "40e5c7250ad930aeb1e34d5e01f3ada1",
                "size": "38916070"
            },
            "cuda12": {
                "relative_path": "nvcomp/windows-x86_64/nvcomp-windows-x86_64-5.0.0.6_cuda12-archive.zip",
                "sha256": "d851077cf6ebaea21a548c4e55db45f0dd45271e35e7cffda7dd9917603ab8d4",
                "md5": "da94be1539cf0f709bf9cd2b6881ebd1",
                "size": "61092352"
            },
            "cuda13": {
                "relative_path": "nvcomp/windows-x86_64/nvcomp-windows-x86_64-5.0.0.6_cuda13-archive.zip",
                "sha256": "d7d1397a438f6d16dbe9989a72ffcf9e7935f17c836316dd98403929c09c585a",
                "md5": "b3195c7bbe9775a9ddcaea1c1f43664e",
                "size": "43017930"
            }
        }
    }
}
@@ -0,0 +1,5 @@
# nvjpeg2000

Link: <https://developer.download.nvidia.com/compute/nvjpeg2000/redist/>

Requirements: <https://docs.nvidia.com/cuda/nvjpeg2000/userguide.html#platforms-supported>
@@ -0,0 +1,35 @@
{
    "release_date": "2025-08-05",
    "release_label": "0.9.0",
    "release_product": "nvjpeg2000",
    "libnvjpeg_2k": {
        "name": "NVIDIA nvJPEG 2000",
        "license": "nvJPEG 2K",
        "license_path": "libnvjpeg_2k/LICENSE.txt",
        "version": "0.9.0.43",
        "linux-x86_64": {
            "relative_path": "libnvjpeg_2k/linux-x86_64/libnvjpeg_2k-linux-x86_64-0.9.0.43-archive.tar.xz",
            "sha256": "1d26f62a7141e81c604342a610deb8ad8d10e1c08cb59598881dc201e59f21a3",
            "md5": "88e231d23f60bfc0effc871ccf465b3a",
            "size": "15407304"
        },
        "linux-sbsa": {
            "relative_path": "libnvjpeg_2k/linux-sbsa/libnvjpeg_2k-linux-sbsa-0.9.0.43-archive.tar.xz",
            "sha256": "03e37b8e0f2d67a1ee376e6ac54fa9d62284bbdbf9edf90ea7d0a05b4c45bce1",
            "md5": "1b6b06ec71a67e1a70b15e379cd8c9ca",
            "size": "15432332"
        },
        "windows-x86_64": {
            "relative_path": "libnvjpeg_2k/windows-x86_64/libnvjpeg_2k-windows-x86_64-0.9.0.43-archive.zip",
            "sha256": "272d409945cd1c00beaa7f5d4c8197139aa32394358ea76d83f2cd2b51faf0c7",
            "md5": "6828d19ce79b943705e22d5cd4dbe6c4",
            "size": "14206962"
        },
        "linux-aarch64": {
            "relative_path": "libnvjpeg_2k/linux-aarch64/libnvjpeg_2k-linux-aarch64-0.9.0.43-archive.tar.xz",
            "sha256": "b2285c1da026bd189e837800da2a26cb75ce45ccb23ea7e6ee65114b3dcee66c",
            "md5": "608ef4eb15da8fda26b5b649c340855d",
            "size": "5384888"
        }
    }
}
@@ -0,0 +1,5 @@
# nvpl

Link: <https://developer.download.nvidia.com/compute/nvpl/redist/>

Requirements: <https://docs.nvidia.com/nvpl/latest/introduction.html>
@@ -0,0 +1,101 @@
{
    "release_date": "2025-05-28",
    "release_label": "25.5",
    "release_product": "nvpl",
    "nvpl_blas": {
        "name": "NVPL BLAS",
        "license": "NVPL",
        "license_path": "nvpl_blas/LICENSE.txt",
        "version": "0.4.1.1",
        "linux-sbsa": {
            "relative_path": "nvpl_blas/linux-sbsa/nvpl_blas-linux-sbsa-0.4.1.1-archive.tar.xz",
            "sha256": "57704e2e211999c899bca26346b946b881b609554914245131b390410f7b93e8",
            "md5": "9e2a95925dcb4bbe0ac337550f317272",
            "size": "773716"
        }
    },
    "nvpl_common": {
        "name": "NVPL Common",
        "license": "NVPL",
        "license_path": "nvpl_common/LICENSE.txt",
        "version": "0.3.3",
        "linux-sbsa": {
            "relative_path": "nvpl_common/linux-sbsa/nvpl_common-linux-sbsa-0.3.3-archive.tar.xz",
            "sha256": "fe87ccd63817427c6c9b9e292447a4e8f256b9c9157065fba1a338719fa433c8",
            "md5": "9ae9f3253461a0565bc3c01a07c50fe3",
            "size": "9444"
        }
    },
    "nvpl_fft": {
        "name": "NVPL FFT",
        "license": "NVPL",
        "license_path": "nvpl_fft/LICENSE.txt",
        "version": "0.4.2.1",
        "linux-sbsa": {
            "relative_path": "nvpl_fft/linux-sbsa/nvpl_fft-linux-sbsa-0.4.2.1-archive.tar.xz",
            "sha256": "ebb9d98abc23ddee5c492e0bbf2c534570a38d7df1863a0630da2c6d7f5cca3d",
            "md5": "211ec34eef6b023f7af8e1fc40ae4ad1",
            "size": "24974464"
        }
    },
    "nvpl_lapack": {
        "name": "NVPL LAPACK",
        "license": "NVPL",
        "license_path": "nvpl_lapack/LICENSE.txt",
        "version": "0.3.1.1",
        "linux-sbsa": {
            "relative_path": "nvpl_lapack/linux-sbsa/nvpl_lapack-linux-sbsa-0.3.1.1-archive.tar.xz",
            "sha256": "f5b916aa36a8549946fc2262acebb082fe8c463bd1523a3c0cc2c93527231653",
            "md5": "ec0616b0be4616ac7050807a872343a2",
            "size": "4781368"
        }
    },
    "nvpl_rand": {
        "name": "NVPL RAND",
        "license": "NVPL",
        "license_path": "nvpl_rand/LICENSE.txt",
        "version": "0.5.2",
        "linux-sbsa": {
            "relative_path": "nvpl_rand/linux-sbsa/nvpl_rand-linux-sbsa-0.5.2-archive.tar.xz",
            "sha256": "1eb5c2a5e98390b2bc76c3218837916df64d33cce220169811e14ecead36933f",
            "md5": "fe72dcf4600cbd85f7249e95ebbcd363",
            "size": "34025772"
        }
    },
    "nvpl_scalapack": {
        "name": "NVPL SCALAPACK",
        "license": "NVPL",
        "license_path": "nvpl_scalapack/LICENSE.txt",
        "version": "0.2.2",
        "linux-sbsa": {
            "relative_path": "nvpl_scalapack/linux-sbsa/nvpl_scalapack-linux-sbsa-0.2.2-archive.tar.xz",
            "sha256": "20cf6c54a0352f2fb0060e6f5ef6b892c5d07a242f8aab31cd9bbceb58a7bd11",
            "md5": "8e3e728e8587cc4698beb2ab7ce162d6",
            "size": "4647440"
        }
    },
    "nvpl_sparse": {
        "name": "NVPL SPARSE",
        "license": "NVPL",
        "license_path": "nvpl_sparse/LICENSE.txt",
        "version": "0.4.1",
        "linux-sbsa": {
            "relative_path": "nvpl_sparse/linux-sbsa/nvpl_sparse-linux-sbsa-0.4.1-archive.tar.xz",
            "sha256": "fda868fe6619e94463a93efed1958e67588c607c170a4c658103a24295855e19",
            "md5": "851080b3e56db9bf6fa37cea198bcb33",
            "size": "556932"
        }
    },
    "nvpl_tensor": {
        "name": "NVPL TENSOR",
        "license": "NVPL",
        "license_path": "nvpl_tensor/LICENSE.txt",
        "version": "0.3.1",
        "linux-sbsa": {
            "relative_path": "nvpl_tensor/linux-sbsa/nvpl_tensor-linux-sbsa-0.3.1-archive.tar.xz",
            "sha256": "12e9293609b3726cf9e92c648f117412a98a5e54700c877518ec2991e51ab50f",
            "md5": "613b9a05d867667deae31bfea26688e8",
            "size": "2338804"
        }
    }
}
@@ -0,0 +1,5 @@
# nvtiff

Link: <https://developer.download.nvidia.com/compute/nvtiff/redist/>

Requirements: <https://docs.nvidia.com/cuda/nvtiff/releasenotes.html>
@@ -0,0 +1,96 @@
{
  "release_date": "2025-08-05",
  "release_label": "0.5.1",
  "release_product": "nvtiff",
  "libnvtiff": {
    "name": "NVIDIA nvTIFF",
    "license": "nvTIFF",
    "license_path": "libnvtiff/LICENSE.txt",
    "version": "0.5.1.75",
    "linux-x86_64": {
      "cuda11": {
        "relative_path": "libnvtiff/linux-x86_64/libnvtiff-linux-x86_64-0.5.1.75_cuda11-archive.tar.xz",
        "sha256": "5adabfe9c4eac59916bfc464b7325866da99752ade30bbc3ddd3cd9c852f69e7",
        "md5": "93e04a669a8dd4ff696950bac5e77e7d",
        "size": "1332668"
      },
      "cuda12": {
        "relative_path": "libnvtiff/linux-x86_64/libnvtiff-linux-x86_64-0.5.1.75_cuda12-archive.tar.xz",
        "sha256": "1f97778f92c938f5174fda74a913370d4ff200d77809570cecdafcd8aaff84b6",
        "md5": "1c8508be2791abbd5d78f059bbc3a2be",
        "size": "1922764"
      },
      "cuda13": {
        "relative_path": "libnvtiff/linux-x86_64/libnvtiff-linux-x86_64-0.5.1.75_cuda13-archive.tar.xz",
        "sha256": "5d63be4128daf28676ae01a81d2e69f828d2e7eda332c039079ff57b42915d20",
        "md5": "0aa016e6e9d70866dae1367ee5d8731a",
        "size": "1498124"
      }
    },
    "cuda_variant": [
      "11",
      "12",
      "13"
    ],
    "linux-sbsa": {
      "cuda11": {
        "relative_path": "libnvtiff/linux-sbsa/libnvtiff-linux-sbsa-0.5.1.75_cuda11-archive.tar.xz",
        "sha256": "9274e74f58c2d85d13089ba3be3e3464c2cb34d2332c9f7a96ec42765bf2b034",
        "md5": "e174a596cf84862ca903f79a2db356c2",
        "size": "1236416"
      },
      "cuda12": {
        "relative_path": "libnvtiff/linux-sbsa/libnvtiff-linux-sbsa-0.5.1.75_cuda12-archive.tar.xz",
        "sha256": "d20309617df0bca6b373dfa33bac99703993a0e3759af70d2691d3b829df4d33",
        "md5": "1a25761b4bfeb2ff7016114701d132b6",
        "size": "1829060"
      },
      "cuda13": {
        "relative_path": "libnvtiff/linux-sbsa/libnvtiff-linux-sbsa-0.5.1.75_cuda13-archive.tar.xz",
        "sha256": "415c507443c026db501bd58d49428d6378f7d5e02e371f8f05d9cbe421565a90",
        "md5": "6fb9797d1cef94e17c0ed8c82e9b8bc8",
        "size": "1559296"
      }
    },
    "windows-x86_64": {
      "cuda11": {
        "relative_path": "libnvtiff/windows-x86_64/libnvtiff-windows-x86_64-0.5.1.75_cuda11-archive.zip",
        "sha256": "c140fb7c0cb40c8796cdc7f3cf604a7fbb85e5b6e0e3315d9c269cfa19caa46a",
        "md5": "1747419ee8df0434010a89014757065d",
        "size": "1157130"
      },
      "cuda12": {
        "relative_path": "libnvtiff/windows-x86_64/libnvtiff-windows-x86_64-0.5.1.75_cuda12-archive.zip",
        "sha256": "a3db5d37c61845d97aa5f1c1a93f9885239741c169a4c577f1f93293dd139a0d",
        "md5": "861f76739d7632b9ccf60f9bde2c2b36",
        "size": "1805927"
      },
      "cuda13": {
        "relative_path": "libnvtiff/windows-x86_64/libnvtiff-windows-x86_64-0.5.1.75_cuda13-archive.zip",
        "sha256": "0e75603c23eed4df4d04d8ddd08bc106df9a4423596f32d238fbf7bb623280b1",
        "md5": "83c07811a9ebcc9aaa3e8d296f018f4a",
        "size": "1178242"
      }
    },
    "linux-aarch64": {
      "cuda11": {
        "relative_path": "libnvtiff/linux-aarch64/libnvtiff-linux-aarch64-0.5.1.75_cuda11-archive.tar.xz",
        "sha256": "ef8f2f8472959be63e895997c0b13892db4e3c6bf3d06a4752c8e9292531c55a",
        "md5": "ee0034f50b9c348ce64c7358ebc16ea5",
        "size": "979744"
      },
      "cuda12": {
        "relative_path": "libnvtiff/linux-aarch64/libnvtiff-linux-aarch64-0.5.1.75_cuda12-archive.tar.xz",
        "sha256": "7d37821154aca7846695ccf12369eeb8c0f263d58b6dfb43e23bd12f4c114ef0",
        "md5": "0491eaec9a956a42a4450b546cc113d4",
        "size": "1279516"
      },
      "cuda13": {
        "relative_path": "libnvtiff/linux-aarch64/libnvtiff-linux-aarch64-0.5.1.75_cuda13-archive.tar.xz",
        "sha256": "8d1b07af6d8b68776d6a6533b4c33134af01d5cb6a0d9c5bcc7a866559de600a",
        "md5": "94e8fbcbeca5b26e177b3c9c71f18214",
        "size": "1104224"
      }
    }
  }
}
@@ -0,0 +1,30 @@
# tensorrt

Requirements: <https://docs.nvidia.com/deeplearning/tensorrt/latest/getting-started/support-matrix.html#support-matrix>

These redistributable manifests are made by hand to allow TensorRT to be packaged with the same functionality as the other NVIDIA redistributable libraries.

Manifests are only available from 10.0.0 onwards, which is when NVIDIA stopped putting the releases behind a login wall.

You can find them at: <https://github.com/NVIDIA/TensorRT?tab=readme-ov-file#optional---if-not-using-tensorrt-container-specify-the-tensorrt-ga-release-build-path>.

Construct entries using the provided `helper.bash` script.

As an example:

```console
$ ./tensorrt/helper.bash 12.5 10.2.0.19 windows-x86_64
main: storePath: /nix/store/l2hq83ihj3bcm4z836cz2dw3ilkhwrpy-TensorRT-10.2.0.19.Windows.win10.cuda-12.5.zip
{
  "windows-x86_64": {
    "cuda12": {
      "md5": "70282ec501c9e395a3ffdd0d2baf9d95",
      "relative_path": "tensorrt/10.2.0/zip/TensorRT-10.2.0.19.Windows.win10.cuda-12.5.zip",
      "sha256": "4a9c6e279fd36559551a6d88e37d835db5ebdc950246160257b0b538960e57fa",
      "size": "1281366141"
    }
  }
}
```

I set the `release_date` to the date of the corresponding release on their GitHub: <https://github.com/NVIDIA/TensorRT/releases>.
131 pkgs/development/cuda-modules/_cuda/manifests/tensorrt/helper.bash Executable file
@@ -0,0 +1,131 @@
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p nix jq

# shellcheck shell=bash

set -euo pipefail

mkRedistUrlRelativePath() {
  local -r cudaMajorMinorVersion=${1:?}
  local -r tensorrtMajorMinorPatchBuildVersion=${2:?}
  local -r redistSystem=${3:?}

  local -r tensorrtMajorMinorPatchVersion="$(echo "$tensorrtMajorMinorPatchBuildVersion" | cut -d. -f1-3)"
  local -r tensorrtMinorVersion="$(echo "$tensorrtMajorMinorPatchVersion" | cut -d. -f2)"

  local archiveDir=""
  local archiveExtension=""
  local osName=""
  local platformName=""
  case "$redistSystem" in
  linux-aarch64) archiveDir="tars" && archiveExtension="tar.gz" && osName="l4t" && platformName="aarch64-gnu" ;;
  linux-sbsa)
    archiveDir="tars" && archiveExtension="tar.gz" && platformName="aarch64-gnu"
    # 10.0-10.3 use Ubuntu 22.04
    # 10.4-10.6 use Ubuntu 24.04
    # 10.7+ use Linux
    case "$tensorrtMinorVersion" in
    0 | 1 | 2 | 3) osName="Ubuntu-22.04" ;;
    4 | 5 | 6) osName="Ubuntu-24.04" ;;
    *) osName="Linux" ;;
    esac
    ;;
  linux-x86_64) archiveDir="tars" && archiveExtension="tar.gz" && osName="Linux" && platformName="x86_64-gnu" ;;
  windows-x86_64)
    archiveExtension="zip" && platformName="win10"
    # Windows info is different for 10.0.*
    case "$tensorrtMinorVersion" in
    0) archiveDir="zips" && osName="Windows10" ;;
    *) archiveDir="zip" && osName="Windows" ;;
    esac
    ;;
  *)
    echo "mkRedistUrlRelativePath: Unsupported redistSystem: $redistSystem" >&2
    exit 1
    ;;
  esac

  local -r relativePath="tensorrt/$tensorrtMajorMinorPatchVersion/$archiveDir/TensorRT-${tensorrtMajorMinorPatchBuildVersion}.${osName}.${platformName}.cuda-${cudaMajorMinorVersion}.${archiveExtension}"
  echo "$relativePath"
}

getNixStorePath() {
  local -r relativePath=${1:?}
  local -r jsonBlob="$(nix store prefetch-file --json "https://developer.nvidia.com/downloads/compute/machine-learning/$relativePath")"
  if [[ -z $jsonBlob ]]; then
    echo "getNixStorePath: Failed to fetch jsonBlob for relativePath: $relativePath" >&2
    exit 1
  fi
  local -r storePath="$(echo "$jsonBlob" | jq -cr '.storePath')"
  echo "$storePath"
}

printOutput() {
  local -r cudaMajorMinorVersion=${1:?}
  local -r redistSystem=${2:?}
  local -r md5Hash=${3:?}
  local -r relativePath=${4:?}
  local -r sha256Hash=${5:?}
  local -r size=${6:?}

  local -r cudaVariant="cuda$(echo "$cudaMajorMinorVersion" | cut -d. -f1)"

  # Echo everything to stdout using JQ to format the output as JSON
  jq \
    --raw-output \
    --sort-keys \
    --null-input \
    --arg redistSystem "$redistSystem" \
    --arg cudaVariant "$cudaVariant" \
    --arg md5Hash "$md5Hash" \
    --arg relativePath "$relativePath" \
    --arg sha256Hash "$sha256Hash" \
    --arg size "$size" \
    '{
      $redistSystem: {
        $cudaVariant: {
          md5: $md5Hash,
          relative_path: $relativePath,
          sha256: $sha256Hash,
          size: $size
        }
      }
    }'
}

main() {
  local -r cudaMajorMinorVersion=${1:?}
  if [[ ! $cudaMajorMinorVersion =~ ^[0-9]+\.[0-9]+$ ]]; then
    echo "main: Invalid cudaMajorMinorVersion: $cudaMajorMinorVersion" >&2
    exit 1
  fi

  local -r tensorrtMajorMinorPatchBuildVersion=${2:?}
  if [[ ! $tensorrtMajorMinorPatchBuildVersion =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "main: Invalid tensorrtMajorMinorPatchBuildVersion: $tensorrtMajorMinorPatchBuildVersion" >&2
    exit 1
  fi

  local -r redistSystem=${3:?}
  case "$redistSystem" in
  linux-aarch64) ;;
  linux-sbsa) ;;
  linux-x86_64) ;;
  windows-x86_64) ;;
  *)
    echo "main: Unsupported redistSystem: $redistSystem" >&2
    exit 1
    ;;
  esac

  local -r relativePath="$(mkRedistUrlRelativePath "$cudaMajorMinorVersion" "$tensorrtMajorMinorPatchBuildVersion" "$redistSystem")"
  local -r storePath="$(getNixStorePath "$relativePath")"
  echo "main: storePath: $storePath" >&2
  local -r md5Hash="$(nix hash file --type md5 --base16 "$storePath")"
  local -r sha256Hash="$(nix hash file --type sha256 --base16 "$storePath")"
  local -r size="$(du -b "$storePath" | cut -f1)"

  printOutput "$cudaMajorMinorVersion" "$redistSystem" "$md5Hash" "$relativePath" "$sha256Hash" "$size"
}

main "$@"
@@ -0,0 +1,37 @@
{
  "release_date": "2024-12-02",
  "release_label": "10.7.0",
  "release_product": "tensorrt",
  "tensorrt": {
    "name": "NVIDIA TensorRT",
    "license": "TensorRT",
    "version": "10.7.0.23",
    "cuda_variant": [
      "12"
    ],
    "linux-aarch64": {
      "cuda12": {
        "md5": "ec486c783455bf31a2561f2b7874585e",
        "relative_path": "tensorrt/10.7.0/tars/TensorRT-10.7.0.23.l4t.aarch64-gnu.cuda-12.6.tar.gz",
        "sha256": "b3028a82818a9daf6296f43d0cdecfa51eaea4552ffb6fe6fad5e6e1aea44da6",
        "size": "655777784"
      }
    },
    "linux-sbsa": {
      "cuda12": {
        "md5": "5b49557b4dc47641242a2bfb29e1cff1",
        "relative_path": "tensorrt/10.7.0/tars/TensorRT-10.7.0.23.Linux.aarch64-gnu.cuda-12.6.tar.gz",
        "sha256": "6b304cf014f2977e845bd44fdb343f0e7af2d9cded997bc9cfea3949d9e84dcb",
        "size": "2469927296"
      }
    },
    "linux-x86_64": {
      "cuda12": {
        "md5": "925c98fbe9abe82058814159727732a2",
        "relative_path": "tensorrt/10.7.0/tars/TensorRT-10.7.0.23.Linux.x86_64-gnu.cuda-12.6.tar.gz",
        "sha256": "d7f16520457caaf97ad8a7e94d802f89d77aedf9f361a255f2c216e2a3a40a11",
        "size": "4446480887"
      }
    }
  }
}
@@ -0,0 +1,29 @@
{
  "release_date": "2025-03-11",
  "release_label": "10.9.0",
  "release_product": "tensorrt",
  "tensorrt": {
    "name": "NVIDIA TensorRT",
    "license": "TensorRT",
    "version": "10.9.0.34",
    "cuda_variant": [
      "12"
    ],
    "linux-sbsa": {
      "cuda12": {
        "md5": "e56a9f9d7327c65d9b95996d3008ed44",
        "relative_path": "tensorrt/10.9.0/tars/TensorRT-10.9.0.34.Linux.aarch64-gnu.cuda-12.8.tar.gz",
        "sha256": "b81ec2a067f67f082c13caec2dcef54385b42a9de6a4ecae6f318aa2d41964f2",
        "size": "4123115318"
      }
    },
    "linux-x86_64": {
      "cuda12": {
        "md5": "ee49e3e6e00b21274907956216b6769f",
        "relative_path": "tensorrt/10.9.0/tars/TensorRT-10.9.0.34.Linux.x86_64-gnu.cuda-12.8.tar.gz",
        "sha256": "33be0e61e3bf177bbbcabb4892bf013f0c8ac71d2be73f2803848a382cb14272",
        "size": "6917032417"
      }
    }
  }
}
@@ -9,20 +9,4 @@ in
final: _:
builtins.mapAttrs mkRenamed {
  # A comment to prevent empty { } from collapsing into a single line

  cudaFlags = {
    path = "cudaPackages.flags";
    package = final.flags;
  };

  cudaVersion = {
    path = "cudaPackages.cudaMajorMinorVersion";
    package = final.cudaMajorMinorVersion;
  };

  cudatoolkit-legacy-runfile = {
    path = "cudaPackages.cudatoolkit";
    package = final.cudatoolkit;
  };

}
174 pkgs/development/cuda-modules/buildRedist/buildRedistHook.bash Normal file
@@ -0,0 +1,174 @@
# shellcheck shell=bash

if [[ -n ${strictDeps:-} && ${hostOffset:-0} -ne -1 ]]; then
  nixLog "skipping sourcing buildRedistHook.bash (hostOffset=${hostOffset:-0}) (targetOffset=${targetOffset:-0})"
  return 0
fi
nixLog "sourcing buildRedistHook.bash (hostOffset=${hostOffset:-0}) (targetOffset=${targetOffset:-0})"

buildRedistHookRegistration() {
  postUnpackHooks+=(unpackCudaLibSubdir)
  nixLog "added unpackCudaLibSubdir to postUnpackHooks"

  postUnpackHooks+=(unpackCudaPkgConfigDirs)
  nixLog "added unpackCudaPkgConfigDirs to postUnpackHooks"

  prePatchHooks+=(patchCudaPkgConfig)
  nixLog "added patchCudaPkgConfig to prePatchHooks"

  if [[ -z ${allowFHSReferences-} ]]; then
    postInstallCheckHooks+=(checkCudaFhsRefs)
    nixLog "added checkCudaFhsRefs to postInstallCheckHooks"
  fi

  postInstallCheckHooks+=(checkCudaNonEmptyOutputs)
  nixLog "added checkCudaNonEmptyOutputs to postInstallCheckHooks"

  preFixupHooks+=(fixupPropagatedBuildOutputsForMultipleOutputs)
  nixLog "added fixupPropagatedBuildOutputsForMultipleOutputs to preFixupHooks"

  postFixupHooks+=(fixupCudaPropagatedBuildOutputsToOut)
  nixLog "added fixupCudaPropagatedBuildOutputsToOut to postFixupHooks"
}

buildRedistHookRegistration

unpackCudaLibSubdir() {
  local -r cudaLibDir="${NIX_BUILD_TOP:?}/${sourceRoot:?}/lib"
  local -r versionedCudaLibDir="$cudaLibDir/${cudaMajorVersion:?}"

  if [[ ! -d $versionedCudaLibDir ]]; then
    return 0
  fi

  nixLog "found versioned CUDA lib dir: $versionedCudaLibDir"

  mv \
    --verbose \
    --no-clobber \
    "$versionedCudaLibDir" \
    "${cudaLibDir}-new"
  rm --verbose --recursive "$cudaLibDir" || {
    nixErrorLog "could not delete $cudaLibDir: $(ls -laR "$cudaLibDir")"
    exit 1
  }
  mv \
    --verbose \
    --no-clobber \
    "${cudaLibDir}-new" \
    "$cudaLibDir"

  return 0
}

# Pkg-config's setup hook expects configuration files in $out/share/pkgconfig
unpackCudaPkgConfigDirs() {
  local path
  local -r pkgConfigDir="${NIX_BUILD_TOP:?}/${sourceRoot:?}/share/pkgconfig"

  for path in "${NIX_BUILD_TOP:?}/${sourceRoot:?}"/{pkg-config,pkgconfig}; do
    [[ -d $path ]] || continue
    mkdir -p "$pkgConfigDir"
    mv \
      --verbose \
      --no-clobber \
      --target-directory "$pkgConfigDir" \
      "$path"/*
    rm --recursive --dir "$path" || {
      nixErrorLog "$path contains non-empty directories: $(ls -laR "$path")"
      exit 1
    }
  done

  return 0
}

patchCudaPkgConfig() {
  local pc

  for pc in "${NIX_BUILD_TOP:?}/${sourceRoot:?}"/share/pkgconfig/*.pc; do
    nixLog "patching $pc"
    sed -i \
      -e "s|^cudaroot\s*=.*\$|cudaroot=${!outputDev:?}|" \
      -e "s|^libdir\s*=.*/lib\$|libdir=${!outputLib:?}/lib|" \
      -e "s|^includedir\s*=.*/include\$|includedir=${!outputInclude:?}/include|" \
      "$pc"
  done

  for pc in "${NIX_BUILD_TOP:?}/${sourceRoot:?}"/share/pkgconfig/*-"${cudaMajorMinorVersion:?}.pc"; do
    nixLog "creating unversioned symlink for $pc"
    ln -s "$(basename "$pc")" "${pc%-"${cudaMajorMinorVersion:?}".pc}".pc
  done

  return 0
}

checkCudaFhsRefs() {
  nixLog "checking for FHS references..."
  local -a outputPaths=()
  local firstMatches

  mapfile -t outputPaths < <(for o in $(getAllOutputNames); do echo "${!o}"; done)
  firstMatches="$(grep --max-count=5 --recursive --exclude=LICENSE /usr/ "${outputPaths[@]}")" || true
  if [[ -n $firstMatches ]]; then
    nixErrorLog "detected references to /usr: $firstMatches"
    exit 1
  fi

  return 0
}

checkCudaNonEmptyOutputs() {
  local output
  local dirs
  local -a failingOutputs=()

  for output in $(getAllOutputNames); do
    [[ ${!output:?} == "out" || ${!output:?} == "${!outputDev:?}" ]] && continue
    dirs="$(find "${!output:?}" -mindepth 1 -maxdepth 1)" || true
    if [[ -z $dirs || $dirs == "${!output:?}/nix-support" ]]; then
      failingOutputs+=("$output")
    fi
  done

  if ((${#failingOutputs[@]})); then
    nixErrorLog "detected empty (excluding nix-support) outputs: ${failingOutputs[*]}"
    nixErrorLog "this typically indicates a failure in packaging or moveToOutput ordering"
    exit 1
  fi

  return 0
}

# TODO(@connorbaker): https://github.com/NixOS/nixpkgs/issues/323126.
# _multioutPropagateDev() currently expects a space-separated string rather than an array.
# NOTE: Because _multioutPropagateDev is a postFixup hook, we correct it in preFixup.
fixupPropagatedBuildOutputsForMultipleOutputs() {
  nixLog "converting propagatedBuildOutputs to a space-separated string"
  # shellcheck disable=SC2124
  export propagatedBuildOutputs="${propagatedBuildOutputs[@]}"
  return 0
}

# The multiple outputs setup hook only propagates build outputs to dev.
# We want to propagate them to out as well, in case the user interpolates
# the package into a string -- in such a case, the dev output is not selected
# and no propagation occurs.
# NOTE: This must run in postFixup because fixupPhase nukes the propagated dependency files.
fixupCudaPropagatedBuildOutputsToOut() {
  local output

  # The `out` output should largely be empty save for nix-support/propagated-build-inputs.
  # In effect, this allows us to make `out` depend on all the other components.
  # NOTE: It may have been deleted if it was empty, which is why we must recreate it.
  mkdir -p "${out:?}/nix-support"

  # NOTE: We must use printWords to ensure the output is a single line.
  for output in $propagatedBuildOutputs; do
    # Propagate the other components to the out output
    nixLog "adding ${!output:?} to propagatedBuildInputs of ${out:?}"
    printWords "${!output:?}" >>"${out:?}/nix-support/propagated-build-inputs"
  done

  return 0
}
480 pkgs/development/cuda-modules/buildRedist/default.nix Normal file
@@ -0,0 +1,480 @@
# NOTE: buildRedist should never take manifests or fixups as callPackage-provided arguments,
# since we want to provide the flexibility to call it directly with a different fixup or manifest.
{
  _cuda,
  autoAddCudaCompatRunpath,
  autoAddDriverRunpath,
  autoPatchelfHook,
  backendStdenv,
  config,
  cudaMajorMinorVersion,
  cudaMajorVersion,
  cudaNamePrefix,
  fetchurl,
  lib,
  manifests,
  markForCudatoolkitRootHook,
  setupCudaHook,
  srcOnly,
  stdenv,
  stdenvNoCC,
}:
let
  inherit (backendStdenv) hostRedistSystem;
  inherit (_cuda.lib) getNixSystems _mkCudaVariant mkRedistUrl;
  inherit (lib.attrsets)
    foldlAttrs
    hasAttr
    isAttrs
    attrNames
    optionalAttrs
    ;
  inherit (lib.customisation) extendMkDerivation;
  inherit (lib.lists)
    naturalSort
    concatMap
    unique
    ;
  inherit (lib.trivial) mapNullable pipe;
  inherit (_cuda.lib) _mkMetaBadPlatforms _mkMetaBroken _redistSystemIsSupported;
  inherit (lib)
    licenses
    sourceTypes
    teams
    ;
  inherit (lib.asserts) assertMsg;
  inherit (lib.lists)
    elem
    findFirst
    findFirstIndex
    foldl'
    intersectLists
    map
    subtractLists
    tail
    ;
  inherit (lib.strings)
    concatMapStringsSep
    toUpper
    stringLength
    substring
    ;
  inherit (lib.trivial) flip;

  mkOutputNameVar =
    output:
    assert assertMsg (output != "") "mkOutputNameVar: output name variable must not be empty";
    "output" + toUpper (substring 0 1 output) + substring 1 (stringLength output - 1) output;

  getSupportedReleases =
    let
      desiredCudaVariant = _mkCudaVariant cudaMajorVersion;
    in
    release:
    # Always show preference to the "source", then "linux-all" redistSystem if they are available, as they are
    # the most general.
    if release ? source then
      {
        inherit (release) source;
      }
    else if release ? linux-all then
      {
        inherit (release) linux-all;
      }
    else
      let
        hasCudaVariants = release ? cuda_variant;
      in
      foldlAttrs (
        acc: name: value:
        acc
        # If the value is an attribute set, and when hasCudaVariants is true it has the relevant CUDA variant,
        # then add it to the set.
        // optionalAttrs (isAttrs value && (hasCudaVariants -> hasAttr desiredCudaVariant value)) {
          ${name} = value.${desiredCudaVariant} or value;
        }
      ) { } release;

  getPreferredRelease =
    supportedReleases:
    supportedReleases.source or supportedReleases.linux-all or supportedReleases.${hostRedistSystem}
      or null;
in
extendMkDerivation {
  constructDrv = backendStdenv.mkDerivation;
  # These attributes are moved to passthru to avoid changing derivation hashes.
  excludeDrvArgNames = [
    # Core
    "redistName"
    "release"

    # Misc
    "brokenAssertions"
    "platformAssertions"
    "expectedOutputs"
    "outputToPatterns"
    "outputNameVarFallbacks"
  ];
  extendDrvArgs =
    finalAttrs:
    {
      # Core
      redistName,
      pname,
      release ? manifests.${finalAttrs.passthru.redistName}.${finalAttrs.pname} or null,

      # Outputs
      outputs ? [ "out" ],
      propagatedBuildOutputs ? [ ],

      # Inputs
      nativeBuildInputs ? [ ],
      propagatedBuildInputs ? [ ],
      buildInputs ? [ ],

      # Checking
      doInstallCheck ? true,
      allowFHSReferences ? false,

      # Fixups
      appendRunpaths ? [ ],

      # Extra
      passthru ? { },
      meta ? { },

      # Misc
      brokenAssertions ? [ ],
      platformAssertions ? [ ],

      # Order is important here so we use a list.
      expectedOutputs ? [
        "out"
        "doc"
        "samples"
        "python"
        "bin"
        "dev"
        "include"
        "lib"
        "static"
        "stubs"
      ],

      # Traversed in the order of the outputs specified in outputs;
      # entries are skipped if they don't exist in outputs.
      # NOTE: The nil LSP gets angry if we do not parenthesize the default attrset.
      outputToPatterns ? {
        bin = [ "bin" ];
        dev = [
          "**/*.pc"
          "**/*.cmake"
        ];
        include = [ "include" ];
        lib = [
          "lib"
          "lib64"
        ];
        static = [ "**/*.a" ];
        samples = [ "samples" ];
        python = [ "**/*.whl" ];
        stubs = [
          "stubs"
          "lib/stubs"
        ];
      },

      # Defines a list of fallbacks for each potential output.
      # The last fallback is the out output.
      # Taken and modified from:
      # https://github.com/NixOS/nixpkgs/blob/fe5e11faed6241aacf7220436088789287507494/pkgs/build-support/setup-hooks/multiple-outputs.sh#L45-L62
      outputNameVarFallbacks ? {
        outputBin = [ "bin" ];
        outputDev = [ "dev" ];
        outputDoc = [ "doc" ];
        outputInclude = [
          "include"
          "dev"
        ];
        outputLib = [ "lib" ];
        outputOut = [ "out" ];
        outputPython = [ "python" ];
        outputSamples = [ "samples" ];
        outputStatic = [ "static" ];
        outputStubs = [ "stubs" ];
      },
      ...
    }:
    {
      __structuredAttrs = true;
      strictDeps = true;

      # NOTE: `release` may be null if a redistributable isn't available.
      version = finalAttrs.passthru.release.version or "0-unsupported";

      # Name should be prefixed by cudaNamePrefix to create more descriptive path names.
      name = "${cudaNamePrefix}-${finalAttrs.pname}-${finalAttrs.version}";

      # We should only have the output `out` when `src` is null.
      # lists.intersectLists iterates over the second list, checking if the elements are in the first list.
      # As such, the order of the output is dictated by the order of the second list.
      outputs =
        if finalAttrs.src == null then
          [ "out" ]
        else
          intersectLists outputs finalAttrs.passthru.expectedOutputs;

      # NOTE: Because the `dev` output is special in Nixpkgs -- make-derivation.nix uses it as the default if
      # it is present -- we must ensure that it brings in the expected dependencies. For us, this means that `dev`
      # should include `bin`, `include`, and `lib` -- `static` is notably absent because it is quite large.
      # We do not include `stubs`, as a number of packages contain stubs for libraries they already ship with!
      # Only a few, like cuda_cudart, actually provide stubs for libraries we're missing.
      # As such, these packages should override propagatedBuildOutputs to add `stubs`.
      propagatedBuildOutputs =
        intersectLists [
          "bin"
          "include"
          "lib"
        ] finalAttrs.outputs
        ++ propagatedBuildOutputs;

      # src :: null | Derivation
      src = mapNullable (
        { relative_path, sha256, ... }:
        srcOnly {
          __structuredAttrs = true;
          strictDeps = true;
          stdenv = stdenvNoCC;
          inherit (finalAttrs) pname version;
          src = fetchurl {
            url = mkRedistUrl finalAttrs.passthru.redistName relative_path;
            inherit sha256;
          };
        }
      ) (getPreferredRelease finalAttrs.passthru.supportedReleases);

      # Required for the hook.
      inherit cudaMajorMinorVersion cudaMajorVersion;

      # We do need some other phases, like configurePhase, so the multiple-output setup hook works.
      dontBuild = true;

      nativeBuildInputs = [
        ./buildRedistHook.bash
        autoPatchelfHook
        # This hook will make sure libcuda can be found
        # in typically /lib/opengl-driver by adding that
        # directory to the rpath of all ELF binaries.
        # Check e.g. with `patchelf --print-rpath path/to/my/binary`
        autoAddDriverRunpath
        markForCudatoolkitRootHook
      ]
      # autoAddCudaCompatRunpath depends on cuda_compat and would cause
      # infinite recursion if applied to `cuda_compat` itself (beside the fact
      # that it doesn't make sense in the first place)
      ++ lib.optionals (finalAttrs.pname != "cuda_compat" && autoAddCudaCompatRunpath.enableHook) [
        # autoAddCudaCompatRunpath must appear AFTER autoAddDriverRunpath.
        # See its documentation in ./setup-hooks/extension.nix.
        autoAddCudaCompatRunpath
      ]
      ++ nativeBuildInputs;

      propagatedBuildInputs = [ setupCudaHook ] ++ propagatedBuildInputs;

      buildInputs = [
        # autoPatchelfHook will search for a libstdc++ and we're giving it
        # one that is compatible with the rest of nixpkgs, even when
        # nvcc forces us to use an older gcc
        # NB: We don't actually know if this is the right thing to do
        # NOTE: Not all packages actually need this, but it's easier to just add it than create overrides for nearly all
        # of them.
        (lib.getLib stdenv.cc.cc)
      ]
      ++ buildInputs;

      # Picked up by autoPatchelf
      # Needed e.g. for libnvrtc to locate (dlopen) libnvrtc-builtins
      appendRunpaths = [ "$ORIGIN" ] ++ appendRunpaths;

      # NOTE: We don't need to check for dev or doc, because those outputs are handled by
      # the multiple-outputs setup hook.
      # NOTE: moveToOutput operates on all outputs:
      # https://github.com/NixOS/nixpkgs/blob/2920b6fc16a9ed5d51429e94238b28306ceda79e/pkgs/build-support/setup-hooks/multiple-outputs.sh#L105-L107
      # NOTE: installPhase is not moved into the builder hook because we do a lot of Nix templating.
      installPhase =
        let
          mkMoveToOutputCommand =
            output:
            let
              template = pattern: ''
                moveToOutput "${pattern}" "${"$" + output}"
              '';
              patterns = finalAttrs.passthru.outputToPatterns.${output} or [ ];
            in
            concatMapStringsSep "\n" template patterns;
        in
        # Pre-install hook
        ''
          runHook preInstall
        ''
        # Create the primary output, out, and move the other outputs into it.
        + ''
          mkdir -p "$out"
          nixLog "moving tree to output out"
          mv * "$out"
        ''
        # Move the outputs into their respective outputs.
        + ''
          ${concatMapStringsSep "\n" mkMoveToOutputCommand (tail finalAttrs.outputs)}
        ''
        # Post-install hook
        + ''
          runHook postInstall
        '';

      inherit doInstallCheck;
      inherit allowFHSReferences;

      passthru = passthru // {
        inherit redistName release;

        supportedReleases =
          passthru.supportedReleases
          # NOTE: `release` may be null, so we must use `lib.defaultTo`
          or (getSupportedReleases (lib.defaultTo { } finalAttrs.passthru.release));

        supportedNixSystems =
          passthru.supportedNixSystems or (pipe finalAttrs.passthru.supportedReleases [
            attrNames
            (concatMap getNixSystems)
            naturalSort
            unique
          ]);

        supportedRedistSystems =
          passthru.supportedRedistSystems or (naturalSort (attrNames finalAttrs.passthru.supportedReleases));

        # NOTE: Downstream may expand this to include other outputs, but they must remember to set the appropriate
        # outputNameVarFallbacks!
        inherit expectedOutputs;

        # Traversed in the order of the outputs specified in outputs;
        # entries are skipped if they don't exist in outputs.
        inherit outputToPatterns;

        # Defines a list of fallbacks for each potential output.
        # The last fallback is the out output.
        # Taken and modified from:
        # https://github.com/NixOS/nixpkgs/blob/fe5e11faed6241aacf7220436088789287507494/pkgs/build-support/setup-hooks/multiple-outputs.sh#L45-L62
        inherit outputNameVarFallbacks;

        # brokenAssertions :: [Attrs]
        # Used by mkMetaBroken to set `meta.broken`.
        # Example: Broken on a specific version of CUDA or when a dependency has a specific version.
        # NOTE: Do not use this when a broken assertion means evaluation will fail! For example, if
        # a package is missing and is required for the build -- that should go in platformAssertions,
        # because attempts to access attributes on the package will cause evaluation errors.
        brokenAssertions = [
          {
            message = "CUDA support is enabled by config.cudaSupport";
            assertion = config.cudaSupport;
          }
          {
            message = "lib output precedes static output";
            assertion =
              let
                libIndex = findFirstIndex (x: x == "lib") null finalAttrs.outputs;
                staticIndex = findFirstIndex (x: x == "static") null finalAttrs.outputs;
              in
              libIndex == null || staticIndex == null || libIndex < staticIndex;
          }
          {
            # NOTE: We cannot (easily) check that all expected outputs have a corresponding outputNameVar attribute in
            # finalAttrs because of the presence of attributes which use the "output" prefix but are not outputNameVars
            # (e.g., outputChecks and outputName).
            message = "outputNameVarFallbacks is a superset of expectedOutputs";
            assertion =
              subtractLists (map mkOutputNameVar finalAttrs.passthru.expectedOutputs) (
                attrNames finalAttrs.passthru.outputNameVarFallbacks
              ) == [ ];
          }
          {
            message = "outputToPatterns is a superset of expectedOutputs";
            assertion =
              subtractLists finalAttrs.passthru.expectedOutputs (attrNames finalAttrs.passthru.outputToPatterns)
              == [ ];
          }
          {
            message = "propagatedBuildOutputs is a subset of outputs";
            assertion = subtractLists finalAttrs.outputs finalAttrs.propagatedBuildOutputs == [ ];
          }
        ]
        ++ brokenAssertions;
|
||||
|
||||
# platformAssertions :: [Attrs]
|
||||
# Used by mkMetaBadPlatforms to set `meta.badPlatforms`.
|
||||
# Example: Broken on a specific system when some condition is met, like targeting Jetson or
|
||||
# a required package missing.
|
||||
# NOTE: Use this when a failed assertion means evaluation can fail!
|
||||
platformAssertions =
|
||||
let
|
||||
isSupportedRedistSystem = _redistSystemIsSupported hostRedistSystem finalAttrs.passthru.supportedRedistSystems;
|
||||
in
|
||||
[
|
||||
{
|
||||
message = "src is null if and only if hostRedistSystem is unsupported";
|
||||
assertion = (finalAttrs.src == null) == !isSupportedRedistSystem;
|
||||
}
|
||||
{
|
||||
message = "hostRedistSystem (${hostRedistSystem}) is supported (${builtins.toJSON finalAttrs.passthru.supportedRedistSystems})";
|
||||
assertion = isSupportedRedistSystem;
|
||||
}
|
||||
]
|
||||
++ platformAssertions;
|
||||
};
|
||||
|
||||
meta = meta // {
|
||||
longDescription = meta.longDescription or "" + ''
|
||||
By downloading and using this package you accept the terms and conditions of the associated license(s).
|
||||
'';
|
||||
sourceProvenance = meta.sourceProvenance or [ sourceTypes.binaryNativeCode ];
|
||||
platforms = finalAttrs.passthru.supportedNixSystems;
|
||||
broken = _mkMetaBroken finalAttrs;
|
||||
badPlatforms = _mkMetaBadPlatforms finalAttrs;
|
||||
downloadPage =
|
||||
meta.downloadPage
|
||||
or "https://developer.download.nvidia.com/compute/${finalAttrs.passthru.redistName}/redist/${finalAttrs.pname}";
|
||||
# NOTE:
|
||||
# Every redistributable should set its own license; since that's a lot of manual work, we default to
|
||||
# nvidiaCudaRedist if the redistributable is from the CUDA redistributables and nvidiaCuda otherwise.
|
||||
# Despite calling them "redistributable" and the download URL containing "redist", a number of these
|
||||
# packages are not licensed such that redistribution is allowed.
|
||||
license =
|
||||
if meta ? license then
|
||||
lib.toList meta.license
|
||||
else if finalAttrs.passthru.redistName == "cuda" then
|
||||
[ licenses.nvidiaCudaRedist ]
|
||||
else
|
||||
[ licenses.nvidiaCuda ];
|
||||
teams = meta.teams or [ ] ++ [ teams.cuda ];
|
||||
};
|
||||
}
|
||||
# Setup the outputNameVar variables to gracefully handle missing outputs.
|
||||
# NOTE: We cannot use expectedOutputs from finalAttrs.passthru because we will infinitely recurse: presence of
|
||||
# attributes in finalAttrs cannot depend on finalAttrs.
|
||||
// foldl' (
|
||||
acc: output:
|
||||
let
|
||||
outputNameVar = mkOutputNameVar output;
|
||||
in
|
||||
acc
|
||||
// {
|
||||
${outputNameVar} =
|
||||
findFirst (flip elem finalAttrs.outputs) "out"
|
||||
finalAttrs.passthru.outputNameVarFallbacks.${outputNameVar};
|
||||
}
|
||||
) { } expectedOutputs;
|
||||
|
||||
# Don't inherit any of stdenv.mkDerivation's arguments.
|
||||
inheritFunctionArgs = false;
|
||||
}
|
||||
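The `foldl'` above materializes one `outputNameVar` attribute per expected output, picking the first fallback that is actually present in `outputs` and defaulting to `out`. A minimal sketch of that resolution with hypothetical data (the output names and fallback lists below are illustrative examples, not the builder's real values):

```nix
let
  # Stand-in for lib.findFirst: the first element satisfying pred, else default.
  findFirst =
    pred: default: xs:
    let
      matches = builtins.filter pred xs;
    in
    if matches == [ ] then default else builtins.head matches;

  # Hypothetical outputs and fallback table.
  outputs = [ "out" "lib" "dev" ];
  outputNameVarFallbacks = {
    outputLib = [ "lib" "out" ];
    outputStatic = [ "static" "lib" "out" ];
  };
in
builtins.mapAttrs (
  _: fallbacks: findFirst (output: builtins.elem output outputs) "out" fallbacks
) outputNameVarFallbacks
# => { outputLib = "lib"; outputStatic = "lib"; }
```

Because every fallback list ends in `out`, setup hooks can reference variables like `$outputLib` without first checking whether the `lib` output exists; a missing output simply resolves to the next present one.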
@@ -1,16 +0,0 @@
{ lib, stdenv }:
let
  inherit (stdenv) hostPlatform;

  # Samples are built around the CUDA Toolkit, which is not available for
  # aarch64. Check for both CUDA version and platform.
  platformIsSupported = hostPlatform.isx86_64 && hostPlatform.isLinux;

  # Build our extension
  extension =
    final: _:
    lib.attrsets.optionalAttrs platformIsSupported {
      cuda-library-samples = final.callPackage ./generic.nix { };
    };
in
extension
@@ -1,70 +0,0 @@
{ cudaMajorMinorVersion, lib }:
let
  inherit (lib) attrsets modules trivial;
  redistName = "cuda";

  # Manifest files for CUDA redistributables (aka redist). These can be found at
  # https://developer.download.nvidia.com/compute/cuda/redist/
  # Maps a cuda version to the specific version of the manifest.
  cudaVersionMap = {
    "12.6" = "12.6.3";
    "12.8" = "12.8.1";
    "12.9" = "12.9.1";
  };

  # Check if the current CUDA version is supported.
  cudaVersionMappingExists = builtins.hasAttr cudaMajorMinorVersion cudaVersionMap;

  # fullCudaVersion :: String
  fullCudaVersion = cudaVersionMap.${cudaMajorMinorVersion};

  evaluatedModules = modules.evalModules {
    modules = [
      ../modules
      # We need to nest the manifests in a config.cuda.manifests attribute so the
      # module system can evaluate them.
      {
        cuda.manifests = {
          redistrib = trivial.importJSON (./manifests + "/redistrib_${fullCudaVersion}.json");
          feature = trivial.importJSON (./manifests + "/feature_${fullCudaVersion}.json");
        };
      }
    ];
  };

  # Generally we prefer to do things involving getting attribute names with feature_manifest instead
  # of redistrib_manifest because the feature manifest will have *only* the redist system
  # names as the keys, whereas the redistrib manifest will also have things like version, name, license,
  # and license_path.
  featureManifest = evaluatedModules.config.cuda.manifests.feature;
  redistribManifest = evaluatedModules.config.cuda.manifests.redistrib;

  # Builder function which builds a single redist package for a given platform.
  # buildRedistPackage :: callPackage -> PackageName -> Derivation
  buildRedistPackage =
    callPackage: pname:
    callPackage ../generic-builders/manifest.nix {
      inherit pname redistName;
      # We pass the whole release to the builder because it has logic to handle
      # the case we're trying to build on an unsupported platform.
      redistribRelease = redistribManifest.${pname};
      featureRelease = featureManifest.${pname};
    };

  # Build all the redist packages given final and prev.
  redistPackages =
    final: _prev:
    # Wrap the whole thing in an optionalAttrs so we can return an empty set if the CUDA version
    # is not supported.
    # NOTE: We cannot include the call to optionalAttrs *in* the pipe as we would strictly evaluate the
    # attrNames before we check if the CUDA version is supported.
    attrsets.optionalAttrs cudaVersionMappingExists (
      trivial.pipe featureManifest [
        # Get all the package names
        builtins.attrNames
        # Build the redist packages
        (trivial.flip attrsets.genAttrs (buildRedistPackage final.callPackage))
      ]
    );
in
redistPackages
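The `genAttrs` step above turns every package name in the feature manifest into an attribute whose value is the built derivation. A rough sketch with stand-in data (the package names and the string-valued "builder" are illustrative only; the real code calls `callPackage` on the generic manifest builder):

```nix
let
  # Example feature-manifest keys; the real manifest maps each name to
  # per-system feature data rather than an empty set.
  featureManifest = {
    cuda_cudart = { };
    libcublas = { };
  };

  # Stand-in for `buildRedistPackage final.callPackage`.
  buildRedistPackage = pname: "derivation for ${pname}";

  # Stand-in for lib.attrsets.genAttrs: build an attrset from names and a function.
  genAttrs = names: f: builtins.listToAttrs (map (n: { name = n; value = f n; }) names);
in
genAttrs (builtins.attrNames featureManifest) buildRedistPackage
# => { cuda_cudart = "derivation for cuda_cudart"; libcublas = "derivation for libcublas"; }
```

Keeping the `optionalAttrs` guard outside this pipeline matters: evaluating `attrNames` on a manifest for an unsupported CUDA version would fail before the guard could return `{ }`.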
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,112 +0,0 @@
# NOTE: Check the following URLs for support matrices:
# v8 -> https://docs.nvidia.com/deeplearning/cudnn/archives/index.html
# v9 -> https://docs.nvidia.com/deeplearning/cudnn/frontend/latest/reference/support-matrix.html
# Version policy is to keep the latest minor release for each major release.
# https://developer.download.nvidia.com/compute/cudnn/redist/
{
  cudnn.releases = {
    # jetson
    linux-aarch64 = [
      {
        version = "8.9.5.30";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-aarch64/cudnn-linux-aarch64-8.9.5.30_cuda12-archive.tar.xz";
        hash = "sha256-BJH3sC9VwiB362eL8xTB+RdSS9UHz1tlgjm/mKRyM6E=";
      }
      {
        version = "9.7.1.26";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-aarch64/cudnn-linux-aarch64-9.7.1.26_cuda12-archive.tar.xz";
        hash = "sha256-jDPWAXKOiJYpblPwg5FUSh7F0Dgg59LLnd+pX9y7r1w=";
      }
      {
        version = "9.8.0.87";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-aarch64/cudnn-linux-aarch64-9.8.0.87_cuda12-archive.tar.xz";
        hash = "sha256-8D7OP/B9FxnwYhiXOoeXzsG+OHzDF7qrW7EY3JiBmec=";
      }
    ];
    # powerpc
    linux-ppc64le = [ ];
    # server-grade arm
    linux-sbsa = [
      {
        version = "8.9.7.29";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-8.9.7.29_cuda12-archive.tar.xz";
        hash = "sha256-6Yt8gAEHheXVygHuTOm1sMjHNYfqb4ZIvjTT+NHUe9E=";
      }
      {
        version = "9.3.0.75";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.6";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-9.3.0.75_cuda12-archive.tar.xz";
        hash = "sha256-Eibdm5iciYY4VSlj0ACjz7uKCgy5uvjLCear137X1jk=";
      }
      {
        version = "9.7.1.26";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-9.7.1.26_cuda12-archive.tar.xz";
        hash = "sha256-koJFUKlesnWwbJCZhBDhLOBRQOBQjwkFZExlTJ7Xp2Q=";
      }
      {
        version = "9.8.0.87";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-9.8.0.87_cuda12-archive.tar.xz";
        hash = "sha256-IvYvR08MuzW+9UCtsdhB2mPJzT33azxOQwEPQ2ss2Fw=";
      }
      {
        version = "9.11.0.98";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.9";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-sbsa/cudnn-linux-sbsa-9.11.0.98_cuda12-archive.tar.xz";
        hash = "sha256-X81kUdiKnTt/rLwASB+l4rsV8sptxvhuCysgG8QuzVY=";
      }
    ];
    # x86_64
    linux-x86_64 = [
      {
        version = "8.9.7.29";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz";
        hash = "sha256-R1MzYlx+QqevPKCy91BqEG4wyTsaoAgc2cE++24h47s=";
      }
      {
        version = "9.3.0.75";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.6";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.3.0.75_cuda12-archive.tar.xz";
        hash = "sha256-PW7xCqBtyTOaR34rBX4IX/hQC73ueeQsfhNlXJ7/LCY=";
      }
      {
        version = "9.7.1.26";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.7.1.26_cuda12-archive.tar.xz";
        hash = "sha256-EJpeXGvN9Dlub2Pz+GLtLc8W7pPuA03HBKGxG98AwLE=";
      }
      {
        version = "9.8.0.87";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.8";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.8.0.87_cuda12-archive.tar.xz";
        hash = "sha256-MhubM7sSh0BNk9VnLTUvFv6rxLIgrGrguG5LJ/JX3PQ=";
      }
      {
        version = "9.11.0.98";
        minCudaVersion = "12.0";
        maxCudaVersion = "12.9";
        url = "https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-9.11.0.98_cuda12-archive.tar.xz";
        hash = "sha256-tgyPrQH6FSHS5x7TiIe5BHjX8Hs9pJ/WirEYqf7k2kg=";
      }
    ];
  };
}
@@ -1,21 +0,0 @@
# Shims to mimic the shape of ../modules/generic/manifests/{feature,redistrib}/release.nix
{
  package,
  # redistSystem :: String
  # String is "unsupported" if the given architecture is unsupported.
  redistSystem,
}:
{
  featureRelease = {
    inherit (package) minCudaVersion maxCudaVersion;
    ${redistSystem}.outputs = {
      lib = true;
      static = true;
      dev = true;
    };
  };
  redistribRelease = {
    name = "NVIDIA CUDA Deep Neural Network library (cuDNN)";
    inherit (package) hash url version;
  };
}
@@ -1,96 +0,0 @@
# Support matrix can be found at
# https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-880/support-matrix/index.html
{
  cudaLib,
  lib,
  redistSystem,
}:
let
  inherit (lib)
    attrsets
    lists
    modules
    trivial
    ;

  redistName = "cusparselt";
  pname = "libcusparse_lt";

  cusparseltVersions = [
    "0.7.1"
  ];

  # Manifests :: { redistrib, feature }

  # Each release of cusparselt gets mapped to an evaluated module for that release.
  # From there, we can get the min/max CUDA versions supported by that release.
  # listOfManifests :: List Manifests
  listOfManifests =
    let
      configEvaluator =
        fullCusparseltVersion:
        modules.evalModules {
          modules = [
            ../modules
            # We need to nest the manifests in a config.cusparselt.manifests attribute so the
            # module system can evaluate them.
            {
              cusparselt.manifests = {
                redistrib = trivial.importJSON (./manifests + "/redistrib_${fullCusparseltVersion}.json");
                feature = trivial.importJSON (./manifests + "/feature_${fullCusparseltVersion}.json");
              };
            }
          ];
        };
      # Un-nest the manifests attribute set.
      releaseGrabber = evaluatedModules: evaluatedModules.config.cusparselt.manifests;
    in
    lists.map (trivial.flip trivial.pipe [
      configEvaluator
      releaseGrabber
    ]) cusparseltVersions;

  # platformIsSupported :: Manifests -> Boolean
  platformIsSupported =
    { feature, redistrib, ... }:
    (attrsets.attrByPath [
      pname
      redistSystem
    ] null feature) != null;

  # TODO(@connorbaker): With an auxiliary file keeping track of the CUDA versions each release supports,
  # we could filter out releases that don't support our CUDA version.
  # However, we don't have that currently, so we make a best-effort to try to build TensorRT with whatever
  # libPath corresponds to our CUDA version.
  # supportedManifests :: List Manifests
  supportedManifests = builtins.filter platformIsSupported listOfManifests;

  # Compute versioned attribute name to be used in this package set
  # Patch version changes should not break the build, so we only use major and minor
  # computeName :: RedistribRelease -> String
  computeName =
    { version, ... }: cudaLib.mkVersionedName redistName (lib.versions.majorMinor version);
in
final: _:
let
  # buildCusparseltPackage :: Manifests -> AttrSet Derivation
  buildCusparseltPackage =
    { redistrib, feature }:
    let
      drv = final.callPackage ../generic-builders/manifest.nix {
        inherit pname redistName;
        redistribRelease = redistrib.${pname};
        featureRelease = feature.${pname};
      };
    in
    attrsets.nameValuePair (computeName redistrib.${pname}) drv;

  extension =
    let
      nameOfNewest = computeName (lists.last supportedManifests).redistrib.${pname};
      drvs = builtins.listToAttrs (lists.map buildCusparseltPackage supportedManifests);
      containsDefault = attrsets.optionalAttrs (drvs != { }) { cusparselt = drvs.${nameOfNewest}; };
    in
    drvs // containsDefault;
in
extension
@@ -1,44 +0,0 @@
{
  "libcusparse_lt": {
    "linux-aarch64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "linux-sbsa": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "linux-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "windows-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": false,
        "sample": false,
        "static": false
      }
    }
  }
}
@@ -1,35 +0,0 @@
{
  "release_date": "2025-02-25",
  "release_label": "0.7.1",
  "release_product": "cusparselt",
  "libcusparse_lt": {
    "name": "NVIDIA cuSPARSELt",
    "license": "cuSPARSELt",
    "license_path": "libcusparse_lt/LICENSE.txt",
    "version": "0.7.1.0",
    "linux-x86_64": {
      "relative_path": "libcusparse_lt/linux-x86_64/libcusparse_lt-linux-x86_64-0.7.1.0-archive.tar.xz",
      "sha256": "a0d885837887c73e466a31b4e86aaae2b7d0cc9c5de0d40921dbe2a15dbd6a88",
      "md5": "b2e5f3c9b9d69e1e0b55b16de33fdc6e",
      "size": "353151840"
    },
    "linux-sbsa": {
      "relative_path": "libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.7.1.0-archive.tar.xz",
      "sha256": "4a131d0a54728e53ba536b50bb65380603456f1656e7df8ee52e285618a0b57c",
      "md5": "612a712c7da6e801ee773687e99af87e",
      "size": "352406784"
    },
    "windows-x86_64": {
      "relative_path": "libcusparse_lt/windows-x86_64/libcusparse_lt-windows-x86_64-0.7.1.0-archive.zip",
      "sha256": "004bcb1b700c24ca8d60a8ddd2124640f61138a6c29914d2afaa0bfa0d0e3cf2",
      "md5": "a1d8df8dc8ff4b3bd0e859f992f8f392",
      "size": "268594665"
    },
    "linux-aarch64": {
      "relative_path": "libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.7.1.0-archive.tar.xz",
      "sha256": "d3b0a660fd552e0bd9a4491b15299d968674833483d5f164cfea35e70646136c",
      "md5": "54e3f3b28c94118991ce54ec38f531fb",
      "size": "5494380"
    }
  }
}
@@ -1,124 +0,0 @@
# Support matrix can be found at
# https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-880/support-matrix/index.html
#
# TODO(@connorbaker):
# This is a very similar strategy to CUDA/CUDNN:
#
# - Get all versions supported by the current release of CUDA
# - Build all of them
# - Make the newest the default
#
# Unique twists:
#
# - Instead of providing different releases for each version of CUDA, CuTensor has multiple subdirectories in `lib`
#   -- one for each version of CUDA.
{
  cudaLib,
  cudaMajorMinorVersion,
  lib,
  redistSystem,
}:
let
  inherit (lib)
    attrsets
    lists
    modules
    versions
    trivial
    ;

  redistName = "cutensor";
  pname = "libcutensor";

  cutensorVersions = [
    "2.0.2"
    "2.1.0"
  ];

  # Manifests :: { redistrib, feature }

  # Each release of cutensor gets mapped to an evaluated module for that release.
  # From there, we can get the min/max CUDA versions supported by that release.
  # listOfManifests :: List Manifests
  listOfManifests =
    let
      configEvaluator =
        fullCutensorVersion:
        modules.evalModules {
          modules = [
            ../modules
            # We need to nest the manifests in a config.cutensor.manifests attribute so the
            # module system can evaluate them.
            {
              cutensor.manifests = {
                redistrib = trivial.importJSON (./manifests + "/redistrib_${fullCutensorVersion}.json");
                feature = trivial.importJSON (./manifests + "/feature_${fullCutensorVersion}.json");
              };
            }
          ];
        };
      # Un-nest the manifests attribute set.
      releaseGrabber = evaluatedModules: evaluatedModules.config.cutensor.manifests;
    in
    lists.map (trivial.flip trivial.pipe [
      configEvaluator
      releaseGrabber
    ]) cutensorVersions;

  # Our cudaMajorMinorVersion tells us which version of CUDA we're building against.
  # The subdirectories in lib/ tell us which versions of CUDA are supported.
  # Typically the names will look like this:
  #
  # - 11
  # - 12

  # libPath :: String
  libPath = versions.major cudaMajorMinorVersion;

  # A release is supported if it has a libPath that matches our CUDA version for our platform.
  # libPath values are not constant across the same release -- one platform may support fewer
  # CUDA versions than another.
  # platformIsSupported :: Manifests -> Boolean
  platformIsSupported =
    { feature, redistrib, ... }:
    (attrsets.attrByPath [
      pname
      redistSystem
    ] null feature) != null;

  # TODO(@connorbaker): With an auxiliary file keeping track of the CUDA versions each release supports,
  # we could filter out releases that don't support our CUDA version.
  # However, we don't have that currently, so we make a best-effort to try to build TensorRT with whatever
  # libPath corresponds to our CUDA version.
  # supportedManifests :: List Manifests
  supportedManifests = builtins.filter platformIsSupported listOfManifests;

  # Compute versioned attribute name to be used in this package set
  # Patch version changes should not break the build, so we only use major and minor
  # computeName :: RedistribRelease -> String
  computeName =
    { version, ... }: cudaLib.mkVersionedName redistName (lib.versions.majorMinor version);
in
final: _:
let
  # buildCutensorPackage :: Manifests -> AttrSet Derivation
  buildCutensorPackage =
    { redistrib, feature }:
    let
      drv = final.callPackage ../generic-builders/manifest.nix {
        inherit pname redistName libPath;
        redistribRelease = redistrib.${pname};
        featureRelease = feature.${pname};
      };
    in
    attrsets.nameValuePair (computeName redistrib.${pname}) drv;

  extension =
    let
      nameOfNewest = computeName (lists.last supportedManifests).redistrib.${pname};
      drvs = builtins.listToAttrs (lists.map buildCutensorPackage supportedManifests);
      containsDefault = attrsets.optionalAttrs (drvs != { }) { cutensor = drvs.${nameOfNewest}; };
    in
    drvs // containsDefault;
in
extension
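The versioned attribute names produced by `computeName` drop the patch and build components, so patch-level bumps do not rename attributes in the package set. A sketch of the computation, assuming `cudaLib.mkVersionedName` joins the redist name with the dot-to-underscore version (an assumption about its implementation, not shown in this diff):

```nix
let
  # Assumed shape of cudaLib.mkVersionedName.
  mkVersionedName = name: version: "${name}_${builtins.replaceStrings [ "." ] [ "_" ] version}";

  # Stand-in for lib.versions.majorMinor: keep only the first two components.
  majorMinor =
    version:
    let
      parts = builtins.splitVersion version;
    in
    "${builtins.elemAt parts 0}.${builtins.elemAt parts 1}";
in
mkVersionedName "cutensor" (majorMinor "2.1.0.9")
# => "cutensor_2_1"
```

Under that naming, the newest supported release also gets aliased to the unversioned `cutensor` attribute via `containsDefault`, so consumers can pin `cutensor_2_0` explicitly or track the default.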
@@ -1,44 +0,0 @@
{
  "libcutensor": {
    "linux-ppc64le": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "linux-sbsa": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "linux-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "windows-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": false,
        "sample": false,
        "static": false
      }
    }
  }
}
@@ -1,34 +0,0 @@
{
  "libcutensor": {
    "linux-sbsa": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "linux-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": true,
        "sample": false,
        "static": true
      }
    },
    "windows-x86_64": {
      "outputs": {
        "bin": false,
        "dev": true,
        "doc": false,
        "lib": false,
        "sample": false,
        "static": false
      }
    }
  }
}
@@ -1,35 +0,0 @@
{
  "release_date": "2024-06-24",
  "release_label": "2.0.2",
  "release_product": "cutensor",
  "libcutensor": {
    "name": "NVIDIA cuTENSOR",
    "license": "cuTensor",
    "license_path": "libcutensor/LICENSE.txt",
    "version": "2.0.2.4",
    "linux-x86_64": {
      "relative_path": "libcutensor/linux-x86_64/libcutensor-linux-x86_64-2.0.2.4-archive.tar.xz",
      "sha256": "957b04ef6343aca404fe5f4a3f1f1d3ac0bd04ceb3acecc93e53f4d63bd91157",
      "md5": "2b994ecba434e69ee55043cf353e05b4",
      "size": "545271628"
    },
    "linux-ppc64le": {
      "relative_path": "libcutensor/linux-ppc64le/libcutensor-linux-ppc64le-2.0.2.4-archive.tar.xz",
      "sha256": "db2c05e231a26fb5efee470e1d8e11cb1187bfe0726b665b87cbbb62a9901ba0",
      "md5": "6b00e29407452333946744c4084157e8",
      "size": "543070992"
    },
    "linux-sbsa": {
      "relative_path": "libcutensor/linux-sbsa/libcutensor-linux-sbsa-2.0.2.4-archive.tar.xz",
      "sha256": "9712b54aa0988074146867f9b6f757bf11a61996f3b58b21e994e920b272301b",
      "md5": "c9bb31a92626a092d0c7152b8b3eaa18",
      "size": "540299376"
    },
    "windows-x86_64": {
      "relative_path": "libcutensor/windows-x86_64/libcutensor-windows-x86_64-2.0.2.4-archive.zip",
      "sha256": "ab2fca16d410863d14f2716cec0d07fb21d20ecd24ee47d309e9970c9c01ed4a",
      "md5": "f6cfdb29a9a421a1ee4df674dd54028c",
      "size": "921154033"
    }
  }
}
@@ -1,29 +0,0 @@
{
  "release_date": "2025-01-27",
  "release_label": "2.1.0",
  "release_product": "cutensor",
  "libcutensor": {
    "name": "NVIDIA cuTENSOR",
    "license": "cuTensor",
    "license_path": "libcutensor/LICENSE.txt",
    "version": "2.1.0.9",
    "linux-x86_64": {
      "relative_path": "libcutensor/linux-x86_64/libcutensor-linux-x86_64-2.1.0.9-archive.tar.xz",
      "sha256": "ee59fcb4e8d59fc0d8cebf5f7f23bf2a196a76e6bcdcaa621aedbdcabd20a759",
      "md5": "ed15120c512dfb3e32b49103850bb9dd",
      "size": "814871140"
    },
    "linux-sbsa": {
      "relative_path": "libcutensor/linux-sbsa/libcutensor-linux-sbsa-2.1.0.9-archive.tar.xz",
      "sha256": "cef7819c4ecf3120d4f99b08463b8db1a8591be25147d1688371024885b1d2f0",
      "md5": "fec00a1a825a05c0166eda6625dc587d",
      "size": "782008004"
    },
    "windows-x86_64": {
      "relative_path": "libcutensor/windows-x86_64/libcutensor-windows-x86_64-2.1.0.9-archive.zip",
      "sha256": "ed835ba7fd617000f77e1dff87403d123edf540bd99339e3da2eaab9d32a4040",
      "md5": "9efcbc0c9c372b0e71e11d4487aa5ffa",
      "size": "1514752712"
    }
  }
}
Some files were not shown because too many files have changed in this diff