  1. 27 Mar, 2022 1 commit
  2. 31 Dec, 2021 1 commit
  3. 20 Dec, 2021 1 commit
  4. 29 Jun, 2021 1 commit
  5. 25 Jun, 2021 1 commit
    • Implement softmax kernels via warp reduce · 654febe3
      Summary:
      This commit adds extra CUDA softmax kernels that use warp-level reduction.
      Warp reduction yields better performance when the reduced dimension is <= 256,
      a common case in recent vision transformers.
      Ting PAN committed
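      The kernels themselves are CUDA; purely as an illustration, the warp-level butterfly reduction they rely on (the `__shfl_xor_sync` exchange pattern) can be sketched in NumPy, with one array element standing in for each of the 32 lanes. All names below are illustrative, not Dragon's API.

```python
import numpy as np

WARP_SIZE = 32  # lanes per CUDA warp

def butterfly_reduce(lanes, op):
    """Simulate a __shfl_xor_sync butterfly reduction: after log2(32)
    exchange steps, every lane holds the reduction over the whole warp."""
    lanes = lanes.copy()
    offset = WARP_SIZE // 2
    while offset >= 1:
        partner = lanes[np.arange(WARP_SIZE) ^ offset]  # lane i pairs with i^offset
        lanes = op(lanes, partner)
        offset //= 2
    return lanes

def warp_softmax(row):
    """Softmax over one row of length <= WARP_SIZE, one value per lane."""
    m = butterfly_reduce(row, np.maximum)  # warp-wide max, for numerical stability
    e = np.exp(row - m)
    s = butterfly_reduce(e, np.add)        # warp-wide sum of exponentials
    return e / s
```

      Because every lane ends each butterfly step holding the full reduction, no shared-memory round trip is needed, which is where the speedup for small dimensions comes from.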
  6. 22 Jun, 2021 1 commit
  7. 19 Jun, 2021 1 commit
  8. 08 Jun, 2021 1 commit
    • Enhance transpose operators · 936c351b
      Summary:
      This commit allows transpose to compute in place by leveraging a scratch buffer.
      It also adds the CRD mode for space-depth transpose (i.e., pixel shuffle).
      Ting PAN committed
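      The CRD (column-row-depth) ordering mentioned here is the same distinction ONNX's DepthToSpace draws against the default DCR ordering; a NumPy sketch of the two orderings (not Dragon's implementation):

```python
import numpy as np

def depth_to_space(x, block, mode="DCR"):
    """Rearrange (N, C*block^2, H, W) into (N, C, H*block, W*block).
    DCR splits channels as (block, block, C); CRD splits them as
    (C, block, block), which matches torch's pixel_shuffle."""
    n, c, h, w = x.shape
    c_out = c // (block * block)
    if mode == "DCR":
        t = x.reshape(n, block, block, c_out, h, w)
        t = t.transpose(0, 3, 4, 1, 5, 2)
    else:  # CRD
        t = x.reshape(n, c_out, block, block, h, w)
        t = t.transpose(0, 1, 4, 2, 5, 3)
    return t.reshape(n, c_out, h * block, w * block)
```

      Both modes move the same elements; they differ only in which channel groups feed which spatial offsets, so a model trained with one ordering cannot be run with the other.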
  9. 31 May, 2021 1 commit
  10. 13 May, 2021 1 commit
  11. 07 May, 2021 1 commit
  12. 01 May, 2021 1 commit
  13. 28 Apr, 2021 1 commit
  14. 21 Apr, 2021 1 commit
  15. 14 Apr, 2021 1 commit
  16. 08 Apr, 2021 1 commit
    • Update with the new frontend API · f431756f
      Summary:
      The new frontend unifies the two execution modes while starting from
      a single tensor class. Besides, it dispatches operator execution through
      a common path that works for both dragon and torch.
      Ting PAN committed
  17. 04 Feb, 2021 1 commit
  18. 25 Jan, 2021 1 commit
    • Remove support for CUDNN v6 · 73ed1b96
      Summary:
      To keep the selection of cuDNN convolution algorithms consistent,
      cuDNN v6 (mainly relied on by CUDA 8.0) is now dropped.
      Ting PAN committed
  19. 20 Jan, 2021 1 commit
    • Add sysconfig module · bbfecf22
      Summary:
      This commit adds the sysconfig module for querying the build information.
      Build information is helpful for selecting tests and reporting issues.
      Ting PAN committed
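      The module's contents aren't shown in the log; as a hedged sketch, a sysconfig-style module typically records what the binary was built with and lets the test suite skip what the build can't run. All names and fields below are hypothetical:

```python
import unittest

# Hypothetical build record; a real sysconfig module would have these
# values baked in by the build system at compile time.
_BUILD_INFO = {
    "version": "0.0.0",    # placeholder
    "cuda_version": None,  # e.g. "10.2" on a CUDA build, None on CPU-only
    "cudnn_version": None,
}

def get_build_info():
    """Return a copy of the recorded build information."""
    return dict(_BUILD_INFO)

class TestGPUKernels(unittest.TestCase):
    @unittest.skipIf(get_build_info()["cuda_version"] is None,
                     "requires a CUDA build")
    def test_softmax_cuda(self):
        pass  # GPU-only test body would go here
```

      Returning a copy keeps callers from mutating the recorded build state; the same dict can be dumped verbatim into a bug report.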
  20. 16 Jan, 2021 1 commit
  21. 29 Dec, 2020 1 commit
  22. 23 Dec, 2020 1 commit
  23. 15 Dec, 2020 1 commit
  24. 11 Dec, 2020 1 commit
  25. 10 Dec, 2020 1 commit
  26. 09 Dec, 2020 1 commit
  27. 03 Dec, 2020 1 commit
  28. 02 Dec, 2020 1 commit
  29. 29 Nov, 2020 1 commit
  30. 05 Nov, 2020 1 commit
  31. 24 Oct, 2020 1 commit
  32. 20 Oct, 2020 1 commit
  33. 14 Oct, 2020 1 commit
  34. 13 Oct, 2020 1 commit
    • Add LinSpace Operator · e83c407a
      Summary:
      This commit adds the linspace op for dragon, torch, and tensorflow.
      It also adds a workaround to range/linspace for integer intervals
      truncated by double precision (exact up to 2**57).
      Ting PAN committed
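      The truncation comes from computing integer intervals through float64, which represents integers exactly only up to 2**53; a minimal sketch of the failure and of one possible integer-exact approach (the commit's actual fix is not shown in the log):

```python
import numpy as np

def int_linspace(start, stop, num):
    """Evenly spaced integers computed in exact Python int arithmetic,
    avoiding the float64 rounding that collapses large neighboring values."""
    if num == 1:
        return np.array([start], dtype=np.int64)
    step = stop - start
    return np.array([start + step * i // (num - 1) for i in range(num)],
                    dtype=np.int64)
```

      A naive float64 linspace over the same interval collapses neighbors once the endpoints exceed 2**53, because adjacent integers are no longer representable there.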
  35. 08 Oct, 2020 1 commit
  36. 07 Oct, 2020 1 commit
  37. 27 Sep, 2020 1 commit
    • Use local workspace for Context · fdf26ef2
      Summary:
      This commit uses a local (per-thread or per-stream) workspace for Context,
      which provides a more elegant way to dispatch kernels that require scratch memory.
      Besides, the TF32 math type is provided as a cuDNN option for Ampere devices.
      Ting PAN committed
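      As an illustration only (Dragon's Context lives in C++), a per-thread workspace can be sketched with thread-local storage: each thread reuses its own grow-only scratch buffer, so kernels never contend for shared scratch.

```python
import threading

import numpy as np

class Context:
    """Toy context with a thread-local, grow-only scratch workspace."""
    _tls = threading.local()

    def workspace(self, nbytes):
        """Return a scratch view of at least ``nbytes`` bytes, reallocating
        only when this thread's cached buffer is too small."""
        buf = getattr(self._tls, "buf", None)
        if buf is None or buf.nbytes < nbytes:
            buf = np.empty(nbytes, dtype=np.uint8)  # grow-only allocation
            self._tls.buf = buf
        return buf[:nbytes]
```

      The grow-only policy means steady-state dispatches allocate nothing, while thread locality removes the need for any locking around the scratch pool.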
  38. 10 Sep, 2020 1 commit
    • Add Unique Operator · 1dd8aeef
      Summary:
      This commit adds the unique op for dragon, torch, tensorflow, and onnx.
      Besides, it fixes a bug that returned the wrong workspace size for cached
      cuDNN convolutions.
      Ting PAN committed
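      For reference, NumPy's `np.unique` shows the contract such an operator typically follows: sorted distinct values plus inverse indices that reconstruct the input (Dragon's actual signature may differ):

```python
import numpy as np

x = np.array([2, 1, 2, 3, 1])
# values:  the sorted distinct elements
# inverse: for each input element, its index into ``values``
# counts:  how many times each distinct value occurs
values, inverse, counts = np.unique(x, return_inverse=True, return_counts=True)
restored = values[inverse]  # gathering by inverse reconstructs the input
```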
  39. 05 Sep, 2020 1 commit
    • Use sequential sampling as the default shuffle policy · 80267d8f
      Summary:
      This commit reimplements the default shuffle policy of the data reader with
      sequential sampling (consistent with DALI) instead of chunk permutation (the MXNet solution).
      Sequential sampling is tuned by the ``initial_fill`` argument only, and works well on both HDD and SSD.
      Ting PAN committed
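      The policy can be sketched as a DALI-style buffered shuffle: fill a buffer of ``initial_fill`` items by reading sequentially, then repeatedly emit a random slot and refill it from the stream. This is a hedged illustration, not the reader's actual code:

```python
import random

def shuffled(stream, initial_fill, seed=0):
    """Buffered shuffle: reads stay strictly sequential (good for HDD and
    SSD alike), while a buffer of ``initial_fill`` items provides the
    randomness. Larger fills trade memory for better mixing."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        if len(buf) < initial_fill:
            buf.append(item)  # initial sequential fill
            continue
        i = rng.randrange(len(buf))
        yield buf[i]          # emit a random slot...
        buf[i] = item         # ...and refill it from the stream
    rng.shuffle(buf)          # drain the remainder in random order
    yield from buf
```

      With ``initial_fill=1`` the reader degenerates to pure sequential order, which is why the single argument is enough to tune the policy.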
  40. 30 Aug, 2020 1 commit