
datasets

Here are 211 public repositories matching this topic...

datasette
labstersteve
labstersteve commented Jan 18, 2020

While it's sometimes valuable to know how a project has developed, there is usually little justification for including this information in the README, and certainly not immediately after other key information such as "what does this package do, and who might want to use it?"

Might I recommend that the feature history is migrated to an Appendix in the documentation?

harshmanvar
harshmanvar commented Mar 27, 2020

I have set up Postgres in Kubernetes and have also deployed Doccano in Kubernetes. Both are working well, but I want to know the mount point at which Kubernetes should attach the persistent volume.

My deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: doccano
  name: doccano
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisi
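A volume mount for Doccano's data directory might look like the sketch below; the container path `/data` and the claim name `doccano-pvc` are assumptions for illustration, not values confirmed by the Doccano documentation.

```yaml
# Hypothetical sketch: attach a PersistentVolumeClaim to the Doccano container.
spec:
  template:
    spec:
      containers:
        - name: doccano
          volumeMounts:
            - name: doccano-data
              mountPath: /data        # assumed data directory
      volumes:
        - name: doccano-data
          persistentVolumeClaim:
            claimName: doccano-pvc    # assumed, pre-existing claim
```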
colour
KelSolaar
KelSolaar commented May 2, 2020

It would be useful to implement support for various photoreceptor models so that it is possible to generate custom cone fundamentals down the road. I have started with the Caroll et al. (2000), Stockman and Sharpe (2000), and Lamb (1995) photoreceptor models in this notebook: https://colab.research.google.com/drive/1snhtUdUxUrTnw_B0kagvfz015Co9p-xv

We will obviously need support for various pre-receptor
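The idea above can be illustrated with a toy model: a cone fundamental is, roughly, a photopigment spectral sensitivity attenuated by pre-receptor filters such as the lens and macular pigment. The Gaussian pigment template below is a placeholder for illustration only, not any of the cited photoreceptor models.

```python
import numpy as np

# Toy sketch: cone fundamental = pigment sensitivity * pre-receptor transmittance.
wavelengths = np.arange(390.0, 781.0)  # visible range, nm

def pigment_sensitivity(peak_nm, width_nm=60.0):
    """Placeholder photopigment spectral sensitivity with unit peak."""
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

def cone_fundamental(pigment, lens_density, macular_density):
    """Attenuate the pigment by pre-receptor optical densities, then renormalise."""
    transmittance = 10.0 ** -(lens_density + macular_density)
    response = pigment * transmittance
    return response / response.max()

# Example: a long-wavelength-sensitive cone with flat (wavelength-independent)
# pre-receptor densities; real lens/macular densities vary with wavelength.
lms_l = cone_fundamental(pigment_sensitivity(560.0),
                         lens_density=np.full_like(wavelengths, 0.1),
                         macular_density=np.full_like(wavelengths, 0.05))
```

Real pre-receptor filters are wavelength-dependent, which is what shifts and reshapes the pigment curve into the measured cone fundamental.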

ha0ye
ha0ye commented Mar 24, 2020

After installation of retriever for the first time, the scripts need to be updated. This step seems to be missing from the "Quick Start" guide for retriever ( https://www.data-retriever.org/#quickstart ), as well as the landing page for rdataretriever.

In these cases, I think the commands to be added (between the installation step and the first example for installing/downloading a

The IdenProf dataset is a collection of images of identifiable professionals. It has been collected to enable the development of AI systems that can identify people and the nature of their jobs simply by looking at an image, just as humans can.

  • Updated Aug 13, 2019
  • Python
maestrojeong
maestrojeong commented Dec 6, 2019

The code in 'README.md' works with 'multi_dsprites_colored_on_colored.tfrecords'.
However, it does not work with 'multi_dsprites_binarized.tfrecords' or 'multi_dsprites_colored_on_grayscale.tfrecords'.
In the code

  from multi_object_datasets import multi_dsprites
  import tensorflow as tf

  tf_records_path = 'path/to/multi_dsprites_binarized.tfrecords'
  batch_size = 32

  # The dataset variant must match the file being read ('binarized' here).
  dataset = multi_dsprites.dataset(tf_records_path, 'binarized')
  batched_dataset = dataset.batch(batch_size)

rflprr
rflprr commented Apr 10, 2018

Retire the swagger auto-gen in favor of a simpler implementation using requests.

Start by getting rid of:

  • .swagger-codegen
  • _swagger
  • .swagger-codegen-ignore
  • swagger-codegen-config.json
  • swagger-swapi-def.json
  • Makefile (update_swagger_codegen target)

The pattern I would like to encourage is one where:

  1. Each module (*.py) file represents a section of the data.world api (e.g. pro
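The proposed pattern could be sketched as follows: one module per API section, with thin wrappers around a `requests` session instead of swagger-generated code. The base URL, endpoint path, and class/method names below are assumptions for illustration, not the actual data.world client API.

```python
import requests

# Hypothetical sketch of one "section" module (e.g. datasets.py).
API_BASE = "https://api.data.world/v0"  # assumed base URL

class DatasetsClient:
    """Illustrative wrapper for the datasets section; paths are assumptions."""

    def __init__(self, token, session=None):
        # Reuse one session so auth headers apply to every request.
        self.session = session or requests.Session()
        self.session.headers["Authorization"] = f"Bearer {token}"

    def url(self, owner, dataset_id):
        """Build the endpoint URL for a dataset."""
        return f"{API_BASE}/datasets/{owner}/{dataset_id}"

    def get_dataset(self, owner, dataset_id):
        """Fetch a dataset's metadata as parsed JSON."""
        resp = self.session.get(self.url(owner, dataset_id))
        resp.raise_for_status()
        return resp.json()
```

Keeping each section in its own small module like this makes the surface area reviewable by hand, which is the main thing lost with generated swagger clients.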
