Computer vision
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding of digital images and videos.
Here are 17,642 public repositories matching this topic...
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands, as well as parts of the chest and face. Does anyone know how to extract the Holistic landmark data and print it to a text file, or can at least point me in the right direction?
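One way to structure the dump, sketched with a stdlib-only stand-in: MediaPipe Holistic returns per-group landmark lists (pose, face, left/right hand) whose points expose .x/.y/.z, and the Landmark NamedTuple below only mimics that shape for illustration; in real use you would pass e.g. results.left_hand_landmarks.landmark after Holistic.process(image).

```python
from typing import Iterable, NamedTuple, Optional


class Landmark(NamedTuple):
    # Stand-in for a MediaPipe landmark; real ones come from the
    # results object returned by Holistic.process(image).
    x: float
    y: float
    z: float


def landmarks_to_lines(name: str, landmarks: Iterable[Landmark]) -> list[str]:
    """Format one landmark group as 'name,index,x,y,z' text lines."""
    return [f"{name},{i},{lm.x:.6f},{lm.y:.6f},{lm.z:.6f}"
            for i, lm in enumerate(landmarks)]


def dump_landmarks(path: str,
                   groups: dict[str, Optional[Iterable[Landmark]]]) -> None:
    """Write every landmark group to one text file."""
    with open(path, "w") as f:
        for name, landmarks in groups.items():
            if landmarks is not None:  # a group is None when not detected
                f.write("\n".join(landmarks_to_lines(name, landmarks)) + "\n")


# Illustration with dummy data; for real frames, build the dict from
# results.pose_landmarks.landmark, results.face_landmarks.landmark, etc.
dummy = {"left_hand": [Landmark(0.1, 0.2, 0.0), Landmark(0.3, 0.4, -0.1)]}
dump_landmarks("landmarks.txt", dummy)
```

Writing one line per landmark with a group name and index keeps the file easy to parse back per frame.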
Change tensor.data to tensor.detach(), per pytorch/pytorch#6990 (comment).
tensor.detach() is more robust than tensor.data: both share storage with the original tensor, but in-place modification through .data is invisible to autograd and silently corrupts gradients, while the same modification through .detach() is caught at backward() time.
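A small illustration of the difference, assuming a standard PyTorch install (sigmoid's backward pass reads its saved output, so corrupting that output changes or invalidates the gradient):

```python
import torch

# .data bypasses autograd's version tracking: the in-place zero_() goes
# unnoticed, and the gradient of sigmoid (out * (1 - out)) is silently
# computed from the corrupted values, giving a.grad of all zeros.
a = torch.tensor([1.0, 2.0], requires_grad=True)
out = a.sigmoid()
out.data.zero_()          # no error, but the gradient below is wrong
out.sum().backward()

# .detach() shares storage but keeps version tracking: the same in-place
# edit is detected, and backward() raises a RuntimeError instead of
# returning a wrong gradient.
b = torch.tensor([1.0, 2.0], requires_grad=True)
out2 = b.sigmoid()
out2.detach().zero_()
try:
    out2.sum().backward()
except RuntimeError as err:
    print("caught:", err)
```

Failing loudly is the whole point of the change: a wrong gradient is much harder to debug than an exception.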
Although the results look nice and are consistent across all frameworks, there is a small difference (more a consistency issue than a correctness one): the TensorFlow training loss/accuracy plots look as if they are sampled at fewer points. They appear straighter, smoother, and less wiggly than the PyTorch or MXNet plots.
It can be seen clearly in chapter 6 (CNN LeNet).
As mentioned in huggingface/datasets#2552, it would be nice to improve the error message shown when a dataset fails to build because there are duplicate example keys.
The current one is:
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature, and we could have something
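One shape the improved message could take, sketched with hypothetical names (check_unique_keys is not the real datasets.keyhash API): report every colliding key together with the positions of the examples that used it, so the user can find them directly.

```python
from collections import defaultdict


class DuplicatedKeysError(Exception):
    """Stand-in for datasets.keyhash.DuplicatedKeysError."""


def check_unique_keys(keys):
    """Raise an error naming each duplicated key and the positions of
    every example that used it, instead of just the first collision."""
    seen = defaultdict(list)
    for index, key in enumerate(keys):
        seen[key].append(index)
    duplicates = {k: idx for k, idx in seen.items() if len(idx) > 1}
    if duplicates:
        details = "\n".join(
            f"  key {k!r} used by examples at positions {idx}"
            for k, idx in duplicates.items()
        )
        raise DuplicatedKeysError(
            "FAILURE TO GENERATE DATASET!\n"
            "Keys should be unique and deterministic in nature.\n" + details
        )


check_unique_keys(["a", "b", "c"])    # unique keys: passes silently
# check_unique_keys(["a", "b", "a"])  # raises, naming positions [0, 2]
```

Listing the positions turns "Found duplicate Key: 48" into something actionable: the user knows exactly which records to inspect.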
Currently, the following warnings appear when running the tests:
test/test_models.py::test_quantized_classification_model[googlenet]
/root/project/torchvision/models/googlenet.py:47: FutureWarning: The default weight initialization of GoogleNet will be changed in future releases of torchvision. If you wish to keep the old behavior (which leads to long initialization times due to scipy/scipy#11
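Until the default changes, a test suite can capture or silence such FutureWarnings at the call site with the stdlib warnings module; a generic sketch (noisy_model_init is a hypothetical stand-in for the GoogleNet constructor quoted above, and the real fix would silence the warning at its source instead):

```python
import warnings


def noisy_model_init():
    # Stand-in for a constructor that emits a deprecation notice, like
    # the GoogleNet weight-initialization FutureWarning quoted above.
    warnings.warn("default weight init will change", FutureWarning)
    return "model"


# Capture the warning so it does not pollute the test output, and assert
# it is the one we expect.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    model = noisy_model_init()

assert model == "model"
assert any(issubclass(w.category, FutureWarning) for w in caught)

# Or suppress it entirely, scoped to this call site only:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    noisy_model_init()  # no warning surfaces here
```

Scoping the filter with catch_warnings keeps unrelated warnings visible, which matters when the suite is also expected to catch new deprecations.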
Describe the bug
When exporting a brush annotation as a PNG, the output is not mapped to the background colors specified in (Settings > Labeling Interface). In addition, when exporting as JSON, the background colors for the attributes are not recorded anywhere, leaving the colors selected in the interface arbitrary and not linked to any of the outputs.
To Reproduce
Is there a hotkey to move the image from left to right while annotating a polygon? The mouse scroll wheel moves the image up and down, but to move it from left to right I have to drag the bottom scroll bar. Is there a hotkey to drag the whole image, or to move it horizontally?
Thank you!