Computer vision
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding of digital images and videos.
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file, or can at least give me some directions as to how?
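A minimal sketch of one way to do this with the MediaPipe Python solutions API, assuming a webcam stream and a plain whitespace-separated text file as output (the file name and column layout here are arbitrary choices):

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(0)  # webcam; swap in a video file path if preferred
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic, \
     open("holistic_landmarks.txt", "w") as out:
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV delivers BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Each group is a NormalizedLandmarkList, or None if not detected.
        for name, lms in (("pose", results.pose_landmarks),
                          ("face", results.face_landmarks),
                          ("left_hand", results.left_hand_landmarks),
                          ("right_hand", results.right_hand_landmarks)):
            if lms is None:
                continue
            for i, lm in enumerate(lms.landmark):
                out.write(f"{frame_idx} {name} {i} {lm.x} {lm.y} {lm.z}\n")
        frame_idx += 1
cap.release()
```

The chest and shoulder points come from the pose landmark group (indices 11 and 12 are the left and right shoulders in MediaPipe's 33-point pose topology).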
Change `tensor.data` to `tensor.detach()`, per pytorch/pytorch#6990 (comment): `tensor.detach()` is more robust than `tensor.data`.
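A small standalone illustration of the difference, assuming a scalar example (names and values are arbitrary): an in-place edit through `.data` bypasses autograd's version tracking and silently corrupts the gradient, while the same edit through `.detach()` is detected.

```python
import torch

x = torch.tensor([1.0], requires_grad=True)

# exp() saves its output for the backward pass, so mutating that output
# in place invalidates the gradient computation.
y = x.exp()
y.data.add_(1.0)   # escapes autograd's version tracking: no error raised
y.backward()       # silently uses the overwritten value
print(x.grad)      # ~3.7183 instead of the correct e ~ 2.7183

x.grad = None
y = x.exp()
y.detach().add_(1.0)  # shares storage, but bumps the version counter
try:
    y.backward()      # autograd notices the in-place edit
except RuntimeError as err:
    print("caught:", err)
```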
Although the results look nice and ideal in all TensorFlow plots and are consistent across all frameworks, there is a small difference (more of a consistency issue): the resulting training loss/accuracy plots look as if they were sampled at fewer points, so the curves appear straighter, smoother, and less wiggly than the PyTorch or MXNet ones.
It can be seen clearly in chapter 6 ([CNN LeNet](ht
Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets

ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("
```
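The snippet above is truncated in the excerpt. A minimal sketch of the same failure mode, independent of Trainer (the exact exception type and message depend on the datasets version current at the time of the report):

```python
import pickle
import datasets

# Streaming mode returns an IterableDataset that keeps generators and other
# unpicklable state alive, so handing it to multiprocessing (e.g. DataLoader
# workers, or Trainer with dataloader_num_workers > 0) forces a pickle
# round-trip that fails.
ds = datasets.load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

try:
    pickle.dumps(ds)
except Exception as err:   # e.g. TypeError: cannot pickle 'generator' object
    print("streaming dataset is not picklable:", err)
```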
🚀 The feature
Specify that the dependency typing_extensions is only needed for Python < 3.8.
Motivation, pitch
Specify that the dependency typing_extensions is only needed for Python < 3.8.
Alternatives
No response
Additional context
According to https://github.com/pytorch/vision/search?q=typing_extensions, torchvision only uses typing_extensions.Literal, which has been available in the standard library since Python 3.8; see
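A sketch of what the request amounts to, assuming a setuptools-style dependency list (the environment-marker syntax works the same way in requirements.txt and pyproject.toml), together with the matching conditional import:

```python
# Dependency declared with an environment marker, so typing_extensions is
# only installed on interpreters that still need it.
install_requires = [
    "typing_extensions; python_version < '3.8'",
]

# Corresponding import pattern in the library code.
import sys

if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal
```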
Describe the bug
When exporting a brush annotation as a PNG, the output is not mapped to the background colors specified in Settings > Labeling Interface. In addition, when exporting as JSON, the background colors for the attributes are not specified anywhere, leaving the values that were selected in the interface arbitrary and not linked to any of the outputs.
To Reproduce
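The reproduction steps are truncated in the excerpt above. Until the exporter applies the configured colors, an exported single-channel mask can be recolored after the fact; this is a generic sketch, assuming the PNG stores one label index per pixel, with a made-up label-to-color mapping and file names:

```python
import numpy as np
from PIL import Image

# Hypothetical mapping from label index (pixel value) to RGB color;
# fill this in from your own labeling configuration.
LABEL_COLORS = {
    0: (0, 0, 0),       # background
    1: (255, 0, 0),     # e.g. first label
    2: (0, 255, 0),     # e.g. second label
}

mask = np.array(Image.open("exported_mask.png"))  # hypothetical file name
if mask.ndim == 3:                                # keep one channel if RGB(A)
    mask = mask[..., 0]

rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
for label, color in LABEL_COLORS.items():
    rgb[mask == label] = color

Image.fromarray(rgb).save("recolored_mask.png")
```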
Is there a hotkey to move the image from left to right while annotating a polygon? Scrolling the mouse wheel moves the image up and down, but to move it from left to right I have to drag the bottom scrollbar. Is there a hotkey to drag the whole image, or to pan it horizontally?
Thank you!