New software installed: CUTLASS version 3.4.0-CUDA-12.1.1

Dear users, we have installed new software: CUTLASS 3.4.0, built against CUDA 12.1.1. The module information is shown below:


---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  CUTLASS: CUTLASS/3.4.0-CUDA-12.1.1
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Description:
      CUTLASS is a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA. It
      incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. CUTLASS decomposes these "moving parts" into reusable, modular software
      components abstracted by C++ template classes. Primitives for different levels of a conceptual parallelization hierarchy can be specialized and tuned via custom tiling sizes, data types, and other
      algorithmic policy. The resulting flexibility simplifies their use as building blocks within custom kernels and applications.


    You will need to load all module(s) on any one of the lines below before the "CUTLASS/3.4.0-CUDA-12.1.1" module is available to load.

      GCC/12.3.0  OpenMPI/4.1.5
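For example, a typical session would load the prerequisites and the CUTLASS module, then compile against the installed headers. Note that CUTLASS is a header-only library, so no extra link flags are needed; the `$EBROOTCUTLASS` variable below is the one exported by EasyBuild-based installations, and `my_gemm.cu` is a placeholder source file, so adjust both to your setup:

```shell
# Load the toolchain prerequisites, then the CUTLASS module itself
module load GCC/12.3.0 OpenMPI/4.1.5
module load CUTLASS/3.4.0-CUDA-12.1.1

# CUTLASS is header-only: point nvcc at the installed headers when compiling.
# $EBROOTCUTLASS is the root set by EasyBuild-based module installations --
# adjust if your site exports a different variable.
nvcc -std=c++17 -I"$EBROOTCUTLASS/include" -o my_gemm my_gemm.cu
```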
 
    Help:
      More information
      ================
       - Homepage: https://github.com/NVIDIA/cutlass
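
    To illustrate what the library provides, here is a minimal sketch of a single-precision GEMM using CUTLASS's device-level `cutlass::gemm::device::Gemm` template with default tile sizes and epilogue. This is an illustrative example, not site-specific code; error handling of the CUDA allocations is omitted for brevity:

    ```cuda
    #include <cutlass/gemm/device/gemm.h>

    int main() {
      // Single-precision GEMM, column-major layouts; tile shapes and the
      // epilogue are left at CUTLASS's defaults.
      using Gemm = cutlass::gemm::device::Gemm<
          float, cutlass::layout::ColumnMajor,   // A
          float, cutlass::layout::ColumnMajor,   // B
          float, cutlass::layout::ColumnMajor>;  // C

      int M = 128, N = 128, K = 128;
      float alpha = 1.0f, beta = 0.0f;

      float *A, *B, *C;
      cudaMalloc(&A, sizeof(float) * M * K);
      cudaMalloc(&B, sizeof(float) * K * N);
      cudaMalloc(&C, sizeof(float) * M * N);

      // Launch D = alpha * A * B + beta * C (here C and D share storage).
      Gemm gemm_op;
      cutlass::Status status = gemm_op({
          {M, N, K},        // problem size
          {A, M},           // A pointer and leading dimension
          {B, K},           // B pointer and leading dimension
          {C, M},           // C (source accumulator)
          {C, M},           // D (destination)
          {alpha, beta}});  // epilogue scalars

      cudaFree(A); cudaFree(B); cudaFree(C);
      return status == cutlass::Status::kSuccess ? 0 : 1;
    }
    ```

    Further examples, including tensor-core and mixed-precision GEMMs, are available in the `examples/` directory of the homepage repository.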
      
Best,
HPC team