pytorch
Here are 14,641 public repositories matching this topic...
Add volume bar
Some recordings have low volume, so the output can sometimes be really quiet. How about adding a volume bar so we can make the output louder or quieter?
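Since the request is about output loudness, a volume slider would ultimately just apply a gain multiplier to the waveform. A minimal sketch of that idea (the helper name and the -1..1 float sample range are my assumptions, not this project's code):

```python
import numpy as np

# Hypothetical helper (not this project's API): scale the output waveform
# by a gain factor and clip to the valid float sample range.
def apply_gain(waveform: np.ndarray, gain: float) -> np.ndarray:
    return np.clip(waveform * gain, -1.0, 1.0)

quiet = 0.05 * np.sin(np.linspace(0.0, 2 * np.pi * 440, 16000))  # quiet test tone
louder = apply_gain(quiet, 4.0)  # gain > 1 boosts volume, gain < 1 attenuates
```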
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
- Suggest a new feature by leaving a comment.
- Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you care about most!)
- Tell us that
Currently, we rely on AllGatherGrad to compute gather for GPUs (a sketch of the underlying pattern follows the checklist).
TODO:
- [ ] Extend this class to support TPU
- [ ] Add tests
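For context, here is a hedged sketch of the pattern behind an autograd-aware all-gather (illustrative only, with my own class name, not the library's exact implementation): the forward pass gathers tensors from every rank, and the backward pass all-reduces the incoming gradient and hands each rank its own slice.

```python
import torch
import torch.distributed as dist

# Illustrative differentiable all_gather; assumes the default process
# group has already been initialized with dist.init_process_group(...).
class AllGatherWithGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, tensor):
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.stack(gathered)

    @staticmethod
    def backward(ctx, grad_output):
        grad = grad_output.clone()
        dist.all_reduce(grad)          # sum gradient contributions from all ranks
        return grad[dist.get_rank()]   # each rank keeps the slice for its own input
```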
Change tensor.data to tensor.detach() due to
pytorch/pytorch#6990 (comment).
tensor.detach() is more robust than tensor.data.
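The difference, per the PyTorch 0.4 migration notes: both return a tensor sharing storage with the original, but in-place changes through .detach() are reported to autograd, while changes through .data go unnoticed and can silently corrupt gradients. A minimal sketch:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Unsafe: mutating through .data is invisible to autograd, so backward
# silently uses the zeroed values and computes wrong gradients.
out = a.sigmoid()
out.data.zero_()
out.sum().backward()
print(a.grad)  # all zeros, which is incorrect for sigmoid

# Safer: the same mutation through .detach() bumps the version counter,
# so autograd raises instead of returning a wrong gradient.
a.grad = None
out = a.sigmoid()
out.detach().zero_()
try:
    out.sum().backward()
except RuntimeError as e:
    print(e)  # "... has been modified by an inplace operation"
```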
Add a new API for converting a model to external data. Today the conversion happens in 2 steps:

```
external_data_helper.convert_model_to_external_data(<model>, <all_tensors_to_one_file>, <size_threshold>)
save_model(model, output_path)
```

We want to add another API which combines the 2 steps:

```
save_model_to_external_data(, <output_
```
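For reference, a hedged sketch of the existing two-step flow against onnx's Python API (keyword names are from onnx's external_data_helper module, though defaults may differ across versions; the file names are placeholders):

```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data

model = onnx.load("model.onnx")               # placeholder input path
convert_model_to_external_data(
    model,
    all_tensors_to_one_file=True,             # one .data file vs. one file per tensor
    location="weights.data",                  # where the raw tensor bytes will live
    size_threshold=1024,                      # externalize only tensors above this size
)
onnx.save_model(model, "model_external.onnx")  # step 2: write the rewritten proto
```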
While setting train_parameters to False, very often we may also want to disable dropout/batchnorm, in other words, to run the pretrained model in eval mode.
We've made a small modification to PretrainedTransformerEmbedder that allows specifying whether the token embedder should be forced into eval mode during the training phase.
Do you think this feature might be handy? Should I open a PR?
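Outside of AllenNLP specifics, the general pattern could look like the following sketch (the class name is mine, not the proposed PR): override Module.train() so the frozen submodule ignores mode switches propagated by the parent model.

```python
import torch.nn as nn

# Hypothetical wrapper: keeps a frozen submodule permanently in eval mode
# (dropout off, batchnorm using running stats) even while the parent trains.
class FrozenEvalWrapper(nn.Module):
    def __init__(self, module: nn.Module):
        super().__init__()
        self.module = module
        for p in self.module.parameters():
            p.requires_grad = False
        self.module.eval()

    def train(self, mode: bool = True):
        super().train(mode)
        self.module.eval()  # undo the mode switch propagated by the parent
        return self

    def forward(self, *args, **kwargs):
        return self.module(*args, **kwargs)
```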
I'm using mxnet to do some work, but nothing comes up when I search for the mxnet trial and example.
The current PyTorch implementation ignores the argument split_f in the function train_batch_ch13, as shown below.

```python
def train_batch_ch13(net, X, y, loss, trainer, devices):
    if isinstance(X, list):
        # Required for BERT fine-tuning (to be covered later)
        X = [x.to(devices[0]) for x in X]
    else:
        X = X.to(devices[0])
    ...
```

TODO: define the argument `split_f`
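One way the argument could be honored, as a hedged sketch (my own wiring, not the book's official fix): let split_f own device placement of the minibatch instead of hard-coding devices[0].

```python
# Hypothetical default that replicates the current single-device behavior.
def default_split(X, y, devices):
    if isinstance(X, list):  # required for BERT fine-tuning
        X = [x.to(devices[0]) for x in X]
    else:
        X = X.to(devices[0])
    return X, y.to(devices[0])

def train_batch_ch13(net, X, y, loss, trainer, devices, split_f=default_split):
    X, y = split_f(X, y, devices)  # delegate placement/sharding to split_f
    trainer.zero_grad()
    pred = net(X)
    l = loss(pred, y)
    l.sum().backward()
    trainer.step()
    return l.sum()
```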
Could you please train GhostNet?
(I don't have the ImageNet dataset.)
CUDA requirement
Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed that setting --GPU 0 would avoid calling CUDA, but it fails:

```
File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
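A common workaround in PyTorch codebases (not necessarily how this repo wires its --GPU flag) is to select the device at runtime, so CPU-only builds never touch the CUDA driver:

```python
import torch

# Fall back to CPU when no CUDA device (or CUDA build) is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)
print(device, x.sum().item())
```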
Hi, I am interested in using the DeBERTa model that was recently implemented here and incorporating it into FARM so that it can also be used in open-domain QA settings through Haystack.
I'm just wondering why there's only a Slow Tokenizer implemented for DeBERTa, and whether there are plans to create the Fast Tokenizer.
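For what it's worth, loading the existing slow tokenizer works along these lines (a sketch against the transformers API of that era; the checkpoint name is an assumption):

```python
from transformers import DebertaTokenizer

# Python-based (slow) tokenizer; a Rust-backed DebertaTokenizerFast
# was not yet available at the time of this issue.
tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
print(tokenizer.tokenize("Hello PyTorch!"))
```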