https://github.com/ray-project/ray/raw/master/doc/source/images/ray_header_logo.png

https://travis-ci.com/ray-project/ray.svg?branch=master https://readthedocs.org/projects/ray/badge/?version=latest

Ray is a fast and simple framework for building and running distributed applications.

Ray is packaged with the following libraries for accelerating machine learning workloads:

  • Tune: Scalable Hyperparameter Tuning
  • RLlib: Scalable Reinforcement Learning
  • RaySGD: Distributed Training Wrappers

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

NOTE: As of Ray 0.8.1, Python 2 is no longer supported.

Quick Start

Execute Python functions in parallel.

import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]  # launch four tasks in parallel; returns futures immediately
print(ray.get(futures))  # [0, 1, 4, 9]
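
Remote calls return futures immediately, and a future can be passed directly into another remote call; Ray resolves it to its value before the downstream task runs. A minimal sketch building on the f defined above (the add task here is illustrative, not part of the original example):

@ray.remote
def add(a, b):
    # Any futures passed as arguments arrive here already resolved to values.
    return a + b

# f.remote(2) and f.remote(3) run in parallel; add runs once both finish.
total = add.remote(f.remote(2), f.remote(3))
print(ray.get(total))  # 13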

To use Ray's actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for _ in range(4)]  # create four actor processes
[c.increment.remote() for c in counters]         # call increment on each actor
futures = [c.read.remote() for c in counters]    # read back each actor's state
print(ray.get(futures))  # [1, 1, 1, 1]

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.
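
The configuration file is a YAML autoscaler config. As a rough sketch of its shape (the values below are illustrative assumptions for an AWS setup, not the contents of the linked file):

cluster_name: quick-start   # illustrative name
min_workers: 0
max_workers: 2
provider:
    type: aws
    region: us-west-2       # assumed region
auth:
    ssh_user: ubuntu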

Tune Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/tune-wide.png

Tune is a library for hyperparameter tuning at any scale.

To run this example, you will need to install the following:

$ pip install ray[tune] torch torchvision filelock

This example runs a parallel grid search to train a convolutional neural network on MNIST using PyTorch.

import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import (
    get_data_loaders, ConvNet, train, test)


def train_mnist(config):
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.track.log(mean_accuracy=acc)


analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()

If TensorBoard is installed, you can automatically visualize all trial results:

tensorboard --logdir ~/ray_results

RLlib Quick Start

https://github.com/ray-project/ray/raw/master/doc/source/images/rllib-wide.jpg

RLlib is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.

pip install tensorflow  # or tensorflow-gpu
pip install ray[rllib]  # also recommended: ray[debug]

This example trains a PPO agent on a simple custom corridor environment:

import gym
from gym.spaces import Discrete, Box
from ray import tune

class SimpleCorridor(gym.Env):
    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5}})

More Information

Getting Involved
