gpu

Here are 1,854 public repositories matching this topic...

qysnn commented Oct 1, 2020

It seems the current implementation silently cuts tensor values to the min or max, which can be a problem when debugging logical errors or typos. Maybe it would be better to just throw an exception? It would also be nice to note this behaviour in the documentation.

>>> import torch
>>> print(torch.__version__)
1.6.0
>>> a
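
A minimal sketch of the behaviour being described, using torch.clamp purely as a stand-in for the operation in question (which is not visible in the truncated snippet above):

>>> import torch
>>> t = torch.tensor([0.5, 1.5])
>>> torch.clamp(t, min=0.0, max=1.0)   # current behaviour: 1.5 is silently cut down to the max
tensor([0.5000, 1.0000])
>>> ((t < 0.0) | (t > 1.0)).any()      # a check like this could raise instead of clamping silently
tensor(True)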

Open Source Fast Scalable Machine Learning Platform For Smarter Applications: Deep Learning, Gradient Boosting & XGBoost, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

  • Updated Oct 3, 2020
  • Jupyter Notebook
rsn870 commented Aug 21, 2020

Hi,

I have tried out both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences that I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem because, for some reason, the graph buffers are not retained each time I run the code.

Please look into this if you could.
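
A minimal sketch of the two call paths being compared, assuming standard PyTorch; the DeepSpeed side is only indicated in comments because the engine setup (deepspeed.initialize, its config, etc.) is not shown in this report:

import torch

x = torch.randn(4, requires_grad=True)
loss = (x * 2).sum()

# plain PyTorch: retain_graph=True keeps the autograd buffers,
# so a second backward pass over the same graph succeeds
loss.backward(retain_graph=True)
loss.backward()

# with a DeepSpeed engine created by deepspeed.initialize(...), the
# equivalent call is model_engine.backward(loss); the observation above
# is that retain_graph=True does not take effect there, so the buffers
# are freed after the first pass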

jankrynauw commented Jun 6, 2019

We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions; this makes it possible to identify which set of features a particular prediction belongs to. Here is an example of the predictions output using tensorflow.contrib.estimator.multi_class_head:

{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
 "scores": [0.068196
andrewcorrigan commented Apr 13, 2015

Would it make sense to add a variadic overload of make_zip_iterator that composes the existing make_zip_iterator with make_tuple? I have this in my own code, and I find that it reduces syntactic overhead.

template<typename... Iterators>
__host__ __device__
  zip_iterator<thrust::tuple<Iterators...>> make_zip_iterator(thrust::tuple<Iterators...> t)
{
    // construct the zip_iterator directly from the tuple of iterators
    return zip_iterator<thrust::tuple<Iterators...>>(t);
}
xmnlab commented Mar 19, 2019

Hey everyone!

mapd-core-cpu is already available on conda-forge (https://anaconda.org/conda-forge/omniscidb-cpu).

Now we should add some instructions to the documentation.

At the moment it is available for Linux and OSX.

Some additional information about the configuration:

  1. For now, always install omniscidb-cpu inside a conda environment (this is also good practice), e.g.:
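
For example (a sketch only; the environment name omnisci-env is just a placeholder, not something specified above):

conda create -n omnisci-env -c conda-forge omniscidb-cpu
conda activate omnisci-env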
