290 contributions in the last year
Activity overview
Contributed to pytorch/pytorch, pytorch/nestedtensor, python/psf-infra-meta, and 5 other repositories
Contribution activity
May 2020
Created a pull request in pytorch/pytorch that received 5 comments
[CPU] addmv for complex tensors
Stack from ghstack:
- #37940 [CUDA] addmv for complex tensors
- #37924 [CPU] addmv for complex tensors

Differential Revision: D21429384

+31 −6 • 5 comments
- Add more autograd tests for complex
- [CUDA] addcmul and addcdiv for complex dtypes
- [CUDA] torch.roll for complex dtypes
- Add autograd tests for complex
- Add tests for complex
- Add tan_cuda for complex dtypes
- Added autograd tests, disabled jit autograd tests for complex and added a separate list for tests for complex dtype only
- tan_cuda for complex dtypes
- Added more tests and a separate list for tests for complex dtype only
- Have Device available in torch namespace
- Have DeviceType available in torch namespace
- Fix complex tensor printing
- Fix torch.tensor dtype inference
- sum and roll on cuda for complex dtypes
- [CUDA] addmv for complex tensors
- Use torch.ne instead of torch.nonzero in gradcheck
- Added more autograd tests for C->C complex functions
- Fixed gradcheck for complex
- updated create input and add test methods and added a whitelist for complex
- Run storage tests only on CPU and CUDA (not xla)
- Add tanh_cuda support for complex types
- support complex types for tanh_backward_cpu
- Refactor c10::complex and cleanup c10::Scalar
- `torch.pow` Add type promotion support and fix issue with __rpow__
- Add complex support for torch.sum
- Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX_AND2
- Migrate CPU min max to c10::complex
- Migrate CPU clamp to c10::complex
- Add arcosh, arcsinh and arctanh to unary ops
- port `scatter_add` to ATen (CUDA)
- Kill AT_DISPATCH_ALL_TYPES_AND_C10_COMPLEX
- Add std::log1p for c10::complex
- Implements torch.pow for complex on cuda and enables complex values as exponents for pow
- Remove asserEqualIgnoreType from test_complex
- Migrate CPU reduction to c10::complex
- Migrate CPU cross and some elementwise to c10::complex
- Migrate CPU tensor factories to c10::complex
- Refactor native/cpu/zmath.h
- Add `torch.logcumsumexp`
- [Resubmit] Migrate AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 to c10::complex
- Migrate CPU fill kernel to c10::complex
- Get rid of javasphinx dependency.
- Batchnorm now always updates var and mean inplace
- Migrate CPU tril, triu, masked_fill to c10::complex
- Migrate CUDA where, tril, triu to c10::complex
Created an issue in pytorch/pytorch that received 3 comments
torch.addmv can't take as input tensors with different dtypes
Is this expected?
>>> vec = torch.randn(3, dtype=torch.half)
>>> M = torch.randn(2, dtype=torch.float)
>>> mat = torch.randn(2, 3, dtype=torch.floa…
3 comments
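The issue above concerns mixed input dtypes. For context, `torch.addmv(input, mat, vec, beta=1, alpha=1)` computes `beta * input + alpha * (mat @ vec)`. A minimal pure-Python sketch of that computation (no torch dependency; the function name and list-based representation are illustrative, not PyTorch's implementation) shows the shape contract — `mat` is n×m, `vec` has length m, `input` has length n — and why the elementwise combine assumes one common dtype, since nothing here performs type promotion:

```python
def addmv(input_vec, mat, vec, beta=1.0, alpha=1.0):
    """Sketch of beta * input_vec + alpha * (mat @ vec) over nested lists."""
    # Matrix-vector product: one dot product per row of mat.
    mv = [sum(m_ij * v_j for m_ij, v_j in zip(row, vec)) for row in mat]
    # Elementwise combine; all operands are implicitly the same numeric type.
    return [beta * x + alpha * y for x, y in zip(input_vec, mv)]

# Example: 2x3 matrix times length-3 vector, added to a length-2 vector.
M = [1.0, 2.0]
mat = [[1.0, 0.0, 2.0],
       [0.0, 1.0, 1.0]]
vec = [1.0, 1.0, 1.0]
print(addmv(M, mat, vec))  # [4.0, 4.0]
```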