Computer vision
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding of digital images and videos.
Here are 4,193 public repositories matching this topic...
My actions before raising this issue
- Read/searched the docs
- Searched past issues
The currently used version of PyAV is 6.2.0; it needs to be updated to the 8.x series.
Also PyAV d
Description
Consider this very short piece of code:
import numpy as np
from skimage import morphology

testimg = np.array([0, 1, 2, 3, 4, 3.5, 4, 5, 4, 3, 2, 1, 0])
morphology.h_maxima(testimg, 1)
morphology.h_maxima(testimg, 2)
morphology.h_maxima(testimg, 3)

The results show that the element with value 5 is always picked up by this function, even though it is not a local maximum of height h for any of these h values; rather it i
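The semantics the reporter is checking against can be sketched in plain NumPy. This is a minimal 1-D sketch under the usual definition of the h-maxima transform (grayscale reconstruction by dilation of f − h under f); it is an illustration of that definition, not scikit-image's implementation:

```python
import numpy as np

def _dilate(a):
    # 1-D grayscale dilation with a 3-element window; edge-replicated borders.
    p = np.pad(a, 1, mode="edge")
    return np.maximum(np.maximum(p[:-2], p[1:-1]), p[2:])

def hmax_transform(f, h):
    # Iterative reconstruction by dilation: grow the seed (f - h) under the
    # mask f until it stops changing.
    f = np.asarray(f, dtype=float)
    seed = f - h
    while True:
        grown = np.minimum(_dilate(seed), f)
        if np.array_equal(grown, seed):
            return grown
        seed = grown

testimg = np.array([0, 1, 2, 3, 4, 3.5, 4, 5, 4, 3, 2, 1, 0])
hmax1 = hmax_transform(testimg, 1)
# hmax1 == [0, 1, 2, 3, 3.5, 3.5, 4, 4, 4, 3, 2, 1, 0]
```

The h-maxima of the image are then the regional maxima of this transform; with h=1 the shallow 0.5-deep notch at index 5 is filled, so the regional-maximum plateau spans the region containing the value-5 element.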
test dataset
I'm new to tracking, and I have a question about the test data.
For testing, OTB2015 and VOT16/17/18 are supported, but for training, COCO, DET, and VID are supported.
What is the connection between these?
If I want to train, which data should I use? The test sets?
Thank you!
(siammask) [liqiang@inspur siammask]$ bash test_mask_refine.sh config_vot.json SiamMask_VOT.pth VOT2016 0
[2019-03-14 19:42:16,619-rk0-test.py#551] Namespace(arch='Custom', config='config_vot.json', dataset='VOT2016', gt=False, log='log_test.txt', mask=True, refine=True, resume='SiamMask_VOT.pth', save_mask=False, visualization=False)
[2019-03-14 19:42:17,087-rk0-load_helper.py# 31] load pretrai
Input and output are described as:
Input: (*, 3), where * means any number of dimensions
Output: (*, 4)
While the example shows:
angle_axis = torch.rand(2, 4)  # Nx4
quaternion = kornia.angle_axis_to_quaternion(angle_axis)  # Nx3
The example indicates that in
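For reference, the conversion the documented shapes describe ((*, 3) in, (*, 4) out) can be sketched in plain NumPy: an angle-axis vector v encodes a rotation of angle ||v|| about the axis v/||v||, and maps to a unit quaternion (w, x, y, z). The function name here is illustrative, not kornia's API:

```python
import numpy as np

def angle_axis_to_quaternion(v):
    # (*, 3) angle-axis vector -> (*, 4) quaternion (w, x, y, z).
    v = np.asarray(v, dtype=float)
    theta = np.linalg.norm(v, axis=-1, keepdims=True)
    half = theta / 2.0
    # Guard the theta -> 0 limit, where sin(theta/2)/theta -> 1/2.
    small = theta < 1e-8
    k = np.where(small, 0.5, np.sin(half) / np.where(small, 1.0, theta))
    w = np.cos(half)
    return np.concatenate([w, k * v], axis=-1)

q = angle_axis_to_quaternion([np.pi, 0.0, 0.0])
# A rotation of pi about the x-axis gives the quaternion (0, 1, 0, 0).
```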
Feature Request
I would like to obtain evaluations on several dataset splits from a single job:
lumi eval -h && lumi eval -c config_ssd.yml --split valid --split train --split test --watch --from-global-step 0
I understand this is not trivial, because the results are currently kept as per-job summary scalars, like this:
![image](https://user-images.githubusercontent.com/7362688/48442210-8be6d080-e75b
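Until multi-split evaluation is supported in one invocation, a workaround is to drive one evaluation per split from a small script. A hypothetical sketch, assuming the CLI shown above accepts a single --split per run; it only builds the command lines:

```python
def eval_commands(config, splits):
    # One `lumi eval` invocation per split.
    return [["lumi", "eval", "-c", config, "--split", s] for s in splits]

cmds = eval_commands("config_ssd.yml", ["train", "valid", "test"])
# Each entry can then be run with subprocess.run(cmd, check=True).
```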
Hi, thanks for the great code!
I wonder whether you have plans to support resuming from checkpoints for classification. Training on ImageNet takes a long time and can be interrupted, but I haven't noticed any "resume" logic in scripts/classification/train_imagenet.py. Maybe @hetong007 knows? Thanks in advance.
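The resume pattern being asked for can be sketched framework-agnostically: periodically persist the epoch counter and model state, and on startup restore them if a checkpoint exists. This is a hypothetical minimal sketch (plain pickle standing in for real parameter serialization), not the script's actual mechanism:

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None

def train(ckpt_path, total_epochs):
    # Resume from the saved epoch if a checkpoint exists, else start fresh.
    state = load_checkpoint(ckpt_path) or {"epoch": 0, "weights": 0.0}
    for epoch in range(state["epoch"], total_epochs):
        state["weights"] += 1.0   # stand-in for one epoch of training
        state["epoch"] = epoch + 1
        save_checkpoint(ckpt_path, state)
    return state

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
train(path, total_epochs=3)          # interrupted after 3 of 5 epochs
final = train(path, total_epochs=5)  # restart resumes at epoch 3
```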