JIT C++ Tests

How to add a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

Here is an example test file you can copy-paste.

#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`
void testCaseOne() {
    // ...
}

void testCaseTwo() {
    // ...
}
} // namespace jit
} // namespace torch

Then, register your test in tests.h:

// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests
#define TH_FORALL_TESTS(_)             \
  _(ADFormulas)                        \
  _(Attributes)                        \
  ...
  _(CaseOne)  // note that the `test` prefix is omitted.
  _(CaseTwo)

We glob all the test files together in CMakeLists.txt so that you don't have to edit it every time you add a test. Unfortunately, this means that in order to get the build to pick up your new test file, you need to re-run cmake:

python setup.py build --cmake

Why do we have two different test runners?

We have two different ways of running our cpp tests:

  1. With gtest, from a standalone binary.
  2. With Python, from TestJit.test_cpp and TestJit.test_cpp_cuda (in test/test_jit.py)

We want both because we need to test things from a pure-C++ environment and with all our various Python patch-points enabled.

How do I run the tests?

The following commands assume you are in the PyTorch repository root.

  1. With gtest:
    # (re)build the test binary
    ninja build/bin/test_jit
    # run
    build/bin/test_jit --gtest_filter='glob_style_filter*'
    
  2. With Python:
    python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda