datasets
Here are 211 public repositories matching this topic...
I have set up Postgres in Kubernetes and also set up Doccano in Kubernetes. It's working well, but I'd like to know the mount point to use for attaching a persistent volume.
My deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: doccano
  name: doccano
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisi
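For the Postgres pod specifically, data lives in the container's data directory, typically /var/lib/postgresql/data for the official postgres image, so that is the usual mountPath for a persistent volume. A minimal sketch, assuming a pre-existing PersistentVolumeClaim named postgres-pvc (the claim and volume names here are illustrative, not from the Doccano docs):

```yaml
# Illustrative fragment only: volume name, claim name, and image tag are assumptions.
spec:
  containers:
    - name: postgres
      image: postgres:11
      volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data  # default PGDATA for the official image
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-pvc
```

If Doccano itself needs persistent storage (e.g. for uploaded data), a second volumeMount on the doccano container would be added the same way.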
Documentation is incomplete; see tensorflow/datasets#704 (comment).
It would be useful to implement support for various photoreceptor models so that it is possible to generate custom cone fundamentals down the road. I have started with the Carroll et al. (2000), Stockman and Sharpe (2000), and Lamb (1995) photoreceptor models in this notebook: https://colab.research.google.com/drive/1snhtUdUxUrTnw_B0kagvfz015Co9p-xv
We will obviously need support for various pre-receptor
The starter datasets came from R's samples, so their HTML documentation includes R examples of how to use the data. Would it be considered worthwhile to translate the usage information to Python 3?
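To illustrate what such a translation might look like, here is a hypothetical R usage example rendered as Python 3 with pandas (the dataset values below are a tiny stand-in, not the actual starter data):

```python
# Hypothetical translation of a typical R usage example into Python 3.
# R:  data(iris); aggregate(Sepal.Length ~ Species, data = iris, mean)
import pandas as pd

# A tiny stand-in for a starter dataset, in place of loading it from disk.
iris = pd.DataFrame({
    "Species": ["setosa", "setosa", "versicolor", "versicolor"],
    "Sepal.Length": [5.1, 4.9, 7.0, 6.4],
})

# Group-wise mean, matching the R aggregate() call above.
means = iris.groupby("Species")["Sepal.Length"].mean()
print(means)
```

Each translated doc page could pair the original R snippet with its pandas equivalent like this.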
flake8 testing of https://github.com/juand-r/entity-recognition-datasets on Python 3.7.0
$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
./data/NIST_IEER/CONLL-format/utils/quick_comma_fix.py:41:37: E999 SyntaxError: invalid syntax
print annotations
^
./data/NIST_IEER/
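The E999 error comes from Python 2-style print statements, which are syntax errors under Python 3. The fix is mechanical (the value of `annotations` below is a placeholder for illustration):

```python
# Python 2 syntax flagged by flake8 as E999 under Python 3:
#     print annotations
# In Python 3, print is a function, so the statement needs parentheses:
annotations = ["PER", "ORG"]  # placeholder value for illustration
print(annotations)
```

Running `2to3` over the affected utils scripts would catch all occurrences of this pattern at once.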
After installing retriever for the first time, the scripts need to be updated. This step seems to be missing from the "Quick Start" guide for retriever (https://www.data-retriever.org/#quickstart), as well as from the landing page for rdataretriever.
In these cases, I think the commands to be added (between the installation step and the first example for installing/downloading a
The code in 'README.md' works well with 'multi_dsprites_colored_on_colored.tfrecords'.
However, it doesn't work with 'multi_dsprites_binarized.tfrecords' or 'multi_dsprites_colored_on_grayscale.tfrecords'.
In the code:
from multi_object_datasets import multi_dsprites
import tensorflow as tf
tf_records_path = 'path/to/multi_dsprites_binarized.tfrecords'
batch_size = 32
Retire the swagger auto-gen client in favor of a simpler implementation using requests.
Start by getting rid of:
- .swagger-codegen
- _swagger
- .swagger-codegen-ignore
- swagger-codegen-config.json
- swagger-swapi-def.json
- Makefile (update_swagger_codegen target)
The pattern I would like to encourage is one where:
- Each module (*.py) file represents a section of the data.world api (e.g. pro
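A minimal sketch of the requests-based pattern described above — one plain function per endpoint instead of generated client classes. The base URL, endpoint path, and function name here are assumptions for illustration, not the confirmed data.world API surface:

```python
# Sketch of a hand-written client function replacing a swagger-codegen call.
# API_BASE and the endpoint path are assumptions for illustration.
import requests

API_BASE = "https://api.data.world/v0"  # assumed base URL

def get_dataset(owner, dataset_id, token):
    """Fetch dataset metadata; stands in for the generated client's method."""
    resp = requests.get(
        f"{API_BASE}/datasets/{owner}/{dataset_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Grouping such functions into one module per API section keeps the mapping from module to docs obvious, with no generated code to regenerate or ignore-list.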
While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after key information such as "what does this package do, and who might want to use it?"
Might I recommend that the feature history be migrated to an appendix in the documentation?