- New York, NY
- http://ezyang.com
3,296 contributions in the last year
Contribution activity
May 2020
Created a pull request in pytorch/pytorch that received 5 comments
Move all torch.nn.modules type annotations inline
Stack from ghstack: #38211 Move all torch.nn.modules type annotations inline #38173 Device and torch._C function cleanup Just because the annotat…
+978 −2,072 • 5 comments
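The pull request above moves type annotations out of separate `.pyi` stub files and into the module source itself, so type checkers read the implementation file directly. A minimal sketch of the difference (the `Linear` class here is illustrative only, not the actual `torch.nn.modules` code):

```python
# Before: types lived in a separate stub file, e.g. linear.pyi:
#     class Linear:
#         in_features: int
#         def forward(self, input: Tensor) -> Tensor: ...
#
# After: the same annotations are written inline in linear.py, and the
# stub file is deleted.

class Linear:
    # Inline class-level attribute annotations, visible to mypy and to
    # runtime introspection via Linear.__annotations__.
    in_features: int
    out_features: int

    def __init__(self, in_features: int, out_features: int) -> None:
        self.in_features = in_features
        self.out_features = out_features

    def forward(self, input: list) -> list:
        # Placeholder body; a real module would apply a linear transform.
        return input
```

One practical upside of inline annotations is that the types can no longer drift out of sync with the implementation, which is a common failure mode of hand-maintained stubs.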
- [pytorch][PR] Remove datatype from Storage and StorageImpl
- Remove device guard codegen on TypeDefault.
- Don't generate DeviceGuard for CPU wrapping code.
- Remove supports_named_tensor from codegen entirely.
- Enforce that named_tensor_meta_ is non-null only if there is a non-wildcard name
- [TESTING] just testing
- [WIP] Meta functions
- Move torch/autograd/grad_mode.pyi stubs inline
- Move torch/autograd/grad_mode.pyi stubs inline
- Device and torch._C function cleanup
- Delete torch/__init__.pyi, deferring to direct extension stubs
- Bind VariableFunctions as a module, not a class with static methods.
- Add minimal skeleton for _C type stubs, delete torch.autograd stub
- Fix typo: TupleUnpack.
- Get rid of javasphinx dependency.
- Give _VariableFunctions class a different name, so pickling works
- Fix lint
- Back out "Revert D21171334: [pytorch][PR] Change StorageImpl to track byte count rather than element count"
- Revert "Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings"
- Change StorageImpl to track byte count rather than element count (#37028)
- Stop defining static data in Vec256
- Split up documentation into subpages and clean up some warnings
- Add "batching rule" for torch.sum(tensor, dims)
- Better handling for msvc env when compiling cpp extensions
- Fix cpp extension build failure if path contains space
- Remove __future__ imports from many files
- [ROCm] Set correct tolerance values for bfloat16 div tests
- Fixup: rename BatchedTensorKey to Batched
- [ROCm] HIP version guard for occupancy API compatibility
- Fix incorrect __torch_function__ handling in einsum
- Added OpenCL DispatchKey, DeviceType, Backend
- For jobs that need a merge, merge with origin/master for ghstack PRs.
- Document `torch.utils.cmake_prefix_path`
- CPU/CUDA unification of normal, cauchy, log_normal, geometric and exponential distributions
- Add BatchedTensorImpl
- Overload bitwise NOT, AND, OR, XOR operators for `at::Tensor`
- Fix find_first_set for x86 MSVC (Updated)
- Update On "check-doxygen.sh must be run from docs/cpp/source director…
- Fix target determination file diffing
- Add tests for complex
- Assert that kernels are called with the right signature
- Make find_first_set work on x86 MSVC
- Add `torch.utils.cmake_prefix_path` pointing to `share/cmake` folder
- Use `jit_core_sources` from build_variables.bzl
- .circleci: Move ecr gc build job to ecr gc workflow
- skip test_torchbind_no_init on rocm
- Add message to static_assert
Created an issue in pytorch/pytorch that received 8 comments
Better testing on CPUs without AVX capabilities
#37577 is the latest occurrence of cases where we broke users with non-AVX CPUs because we accidentally let AVX instructions sneak in to code that …
8 comments
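The issue above is about AVX instructions accidentally leaking into code paths that must also run on CPUs without AVX. A hedged sketch of one way a test harness might detect AVX support so it can gate AVX-specific tests (Linux-only, reading `/proc/cpuinfo`; this is an illustration, not the fix the issue proposes):

```python
def cpu_has_avx() -> bool:
    """Return True if /proc/cpuinfo lists the 'avx' flag (Linux only).

    On non-Linux platforms, or if /proc is unreadable, conservatively
    report False so AVX-gated tests are skipped rather than crashing.
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                # The "flags" line enumerates CPU features as tokens,
                # e.g. "flags : fpu ... avx avx2 ...".
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        pass
    return False
```

A CI matrix could use such a probe to run the full test suite on both AVX and non-AVX workers, which is the kind of coverage gap the issue describes.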
- test_stream_event_nogil fails on my devfair
- RuntimeError: arg_types.size() == param_names.size() - (moduleSelf_ ? 1 : 0) INTERNAL ASSERT FAILED
- torch._C._jit_tree_views.SourceRange should print more descriptively by default
- Parameter is not a valid type annotation on TorchScripted modules
- When TorchScripted module has bad type annotation you get bad error message
- Functions in torch._C._nn and torch._C._onnx are not pickleable
- test_cpp_warnings_have_python_context_cpu fails under some build configurations
- DISABLED test_profiler_with_async_rpc_udf (__main__.RpcTestWithSpawn)
- Stop redundantly registering schema for manually boxed wrappers
- Add Compound key, make custom ops default to it (but keep internal users using CatchAll)
- test issue