TensorFlow Datasets

TensorFlow Datasets provides many public datasets as tf.data.Datasets.


Note: tf.data is TensorFlow's built-in library for building efficient data pipelines. TFDS (this library) uses tf.data to build the input pipeline when you load a dataset.


Installation

pip install tensorflow-datasets

# Requires TF 1.15+ to be installed.
# Some datasets require additional libraries; see setup.py extras_require
pip install tensorflow
# or:
pip install tensorflow-gpu
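Some datasets also need extra Python packages, which are exposed as pip extras. The extra name below is only illustrative; check extras_require in setup.py for the actual list:

```shell
# Hypothetical example: install tensorflow-datasets together with the extra
# dependencies one dataset group needs (extra names vary; see setup.py).
pip install 'tensorflow-datasets[svhn]'
```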

Join our Google group to receive updates on the project.

Usage

import tensorflow_datasets as tfds
import tensorflow as tf

# Here we assume Eager mode is enabled (TF2), but tfds also works in Graph mode.

# Construct a tf.data.Dataset
ds_train = tfds.load('mnist', split='train', shuffle_files=True)

# Build your input pipeline
ds_train = ds_train.shuffle(1000).batch(128).prefetch(10)
for features in ds_train.take(1):
  image, label = features['image'], features['label']

Try it interactively in a Colab notebook.

DatasetBuilder

All datasets are implemented as subclasses of tfds.core.DatasetBuilder. TFDS has two entry points:

  • tfds.builder: Returns the tfds.core.DatasetBuilder instance, giving control over builder.download_and_prepare() and builder.as_dataset().
  • tfds.load: Convenience wrapper which hides the download_and_prepare and as_dataset calls, and directly returns the tf.data.Dataset.
import tensorflow_datasets as tfds

# The following is the equivalent of the `load` call above.

# You can fetch the DatasetBuilder class by string
mnist_builder = tfds.builder('mnist')

# Download the dataset
mnist_builder.download_and_prepare()

# Construct a tf.data.Dataset
ds = mnist_builder.as_dataset(split='train')

# Get the `DatasetInfo` object, which contains useful information about the
# dataset and its features
info = mnist_builder.info
print(info)

This will print the dataset info content:

tfds.core.DatasetInfo(
    name='mnist',
    version=3.0.1,
    description='The MNIST database of handwritten digits.',
    homepage='http://yann.lecun.com/exdb/mnist/',
    features=FeaturesDict({
        'image': Image(shape=(28, 28, 1), dtype=tf.uint8),
        'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=10),
    }),
    total_num_examples=70000,
    splits={
        'test': 10000,
        'train': 60000,
    },
    supervised_keys=('image', 'label'),
    citation="""@article{lecun2010mnist,
      title={MNIST handwritten digit database},
      author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
      journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
      volume={2},
      year={2010}
    }""",
    redistribution_info=,
)
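Because info lists supervised_keys=('image', 'label'), you can pass as_supervised=True to tfds.load (or builder.as_dataset) to get (image, label) tuples instead of feature dicts. A minimal sketch of the resulting pipeline shape, using synthetic arrays in place of the real MNIST download:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for MNIST-shaped data; with TFDS you would instead call
# tfds.load('mnist', split='train', as_supervised=True).
images = np.zeros((8, 28, 28, 1), dtype=np.uint8)
labels = np.arange(8, dtype=np.int64) % 10

# as_supervised=True yields (image, label) tuples, which batch like this:
ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)
for image_batch, label_batch in ds.take(1):
  print(image_batch.shape, label_batch.shape)  # (4, 28, 28, 1) (4,)
```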

You can also get details about the classes (number of classes and their names).

info = tfds.builder('cats_vs_dogs').info

info.features['label'].num_classes  # 2
info.features['label'].names  # ['cat', 'dog']
info.features['label'].int2str(1)  # "dog"
info.features['label'].str2int('cat')  # 0

NumPy Usage with tfds.as_numpy

If you'd rather work with plain NumPy arrays, tfds.as_numpy wraps a tf.data.Dataset in a generator that yields NumPy array records. This lets you build a high-performance input pipeline with tf.data while using whatever you'd like for your model components.

train_ds = tfds.load("mnist", split="train")
train_ds = train_ds.shuffle(1024).batch(128).repeat(5).prefetch(10)
for example in tfds.as_numpy(train_ds):
  numpy_images, numpy_labels = example["image"], example["label"]

You can also pass batch_size=-1 to tfds.load to get the full dataset in a single batch of tf.Tensors, then convert it to NumPy arrays with tfds.as_numpy:

train_ds = tfds.load("mnist", split=tfds.Split.TRAIN, batch_size=-1)
numpy_ds = tfds.as_numpy(train_ds)
numpy_images, numpy_labels = numpy_ds["image"], numpy_ds["label"]

Note that the library still requires tensorflow as an internal dependency.
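Once the data is in NumPy form, any downstream framework can consume it. A sketch using synthetic stand-ins for the MNIST arrays above, flattening and scaling the images for a generic (non-TensorFlow) model:

```python
import numpy as np

# Stand-ins for the arrays tfds.as_numpy would return for MNIST.
numpy_images = np.random.randint(0, 256, size=(100, 28, 28, 1), dtype=np.uint8)
numpy_labels = np.random.randint(0, 10, size=(100,), dtype=np.int64)

# Typical non-TF preprocessing: flatten each image and scale pixels to [0, 1].
x = numpy_images.reshape(len(numpy_images), -1).astype(np.float32) / 255.0
print(x.shape)  # (100, 784)
```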

Citation

Please include the following citation when using tensorflow-datasets in a paper, in addition to any citation specific to the datasets used.

@misc{TFDS,
  title = {{TensorFlow Datasets}, A collection of ready-to-use datasets},
  howpublished = {\url{https://www.tensorflow.org/datasets}},
}

Want a certain dataset?

Adding a dataset is straightforward; just follow our guide.

Request a dataset by opening a Dataset request GitHub issue.

And vote on the current set of requests by adding a thumbs-up reaction to the issue.

Disclaimers

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

If you're interested in learning more about responsible AI practices, including fairness, please see Google AI's Responsible AI Practices.

tensorflow/datasets is Apache 2.0 licensed. See the LICENSE file.
