
ml

Machine learning is the practice of teaching a computer to learn. The concept uses pattern recognition, as well as other forms of predictive algorithms, to make judgments on incoming data. This field is closely related to artificial intelligence and computational statistics.

Here are 4,261 public repositories matching this topic...

chan4cc commented Apr 26, 2021

New Operator

Describe the operator

Why is this operator necessary? What does it accomplish?

This operator is frequently used in tensorflow/keras.

Can this operator be constructed using existing onnx operators?

If so, why not add it as a function?

I don't know.

Is this operator used by any model currently? Which one?

Are you willing to contribute it?

metaflow
romain-intel commented Feb 13, 2022

Currently, you can do something like this: Task(Flow/RunID/StepName). This does not raise an error, but the resulting Task object then behaves in a bizarre manner: for example, t.data works but t.data.my_artifact does not.

We should validate the format of the pathspec passed in to each object and verify that the following are the only possible cases:

  • Metaflo
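The validation could be sketched roughly like this (the helper name, the depth convention, and the error message are all hypothetical illustrations, not Metaflow's actual internals):

```python
# Hypothetical sketch of pathspec validation: accept only pathspecs whose
# depth matches the object being constructed, and reject empty components.

def validate_pathspec(pathspec, expected_depth):
    """Check that `pathspec` (e.g. 'Flow/RunID/StepName/TaskID') has exactly
    `expected_depth` non-empty components, and return them."""
    parts = pathspec.split("/")
    if len(parts) != expected_depth or not all(parts):
        raise ValueError(
            f"Invalid pathspec {pathspec!r}: expected {expected_depth} "
            f"components, got {len(parts)}"
        )
    return parts

# A Task would be addressed by a 4-component pathspec, so a 3-component
# pathspec like Flow/RunID/StepName would be rejected up front instead of
# producing a half-working Task object.
validate_pathspec("HelloFlow/12/start/567", expected_depth=4)
```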
davidbuniat commented Jan 15, 2022

🚨🚨 Feature Request

If your feature will improve HUB

To explore the structure of a dataset, it is convenient to have nicer and more informative prints of dataset objects and samples.

Description of the possible solution

1) show ds

Current behavior:

> ds
Dataset(path='hub://activeloop/abalone_full_dataset', tensors=['length', 'diameter', 'height', 'weight'])
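A rough sketch of the kind of richer per-tensor summary the request describes (the Dataset/Tensor classes here are minimal stand-ins for illustration, not Hub's real API):

```python
# Stand-in classes showing one possible richer __repr__: one line per tensor
# with shape and dtype, instead of tensor names only.

class Tensor:
    def __init__(self, name, shape, dtype):
        self.name, self.shape, self.dtype = name, shape, dtype

class Dataset:
    def __init__(self, path, tensors):
        self.path, self.tensors = path, tensors

    def __repr__(self):
        header = f"Dataset(path={self.path!r}, tensors={len(self.tensors)})"
        rows = [f"  {t.name:10s} shape={t.shape} dtype={t.dtype}"
                for t in self.tensors]
        return "\n".join([header] + rows)

ds = Dataset("hub://activeloop/abalone_full_dataset",
             [Tensor("length", (4177,), "float64"),
              Tensor("diameter", (4177,), "float64")])
print(ds)
```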
SynapseML
brunocous commented Sep 2, 2020

I have a simple regression task (using a LightGBMRegressor) where I want to penalize negative predictions more than positive ones. Is there a way to achieve this with the default LightGBM regression objectives (see https://lightgbm.readthedocs.io/en/latest/Parameters.html)? If not, is it somehow possible to define and pass a custom regression objective (there are many examples for the default LightGBM models)?
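One common route for this kind of asymmetry is a custom objective: LightGBM's scikit-learn API accepts a callable that returns the gradient and hessian of the loss with respect to the raw predictions. A minimal sketch (the function name and the penalty factor of 5.0 are assumptions for illustration, not LightGBM built-ins):

```python
import numpy as np

# Asymmetric squared error: weight the loss more heavily wherever the
# prediction is negative, as the question asks.

def asymmetric_l2(y_true, y_pred, penalty=5.0):
    residual = y_pred - y_true
    weight = np.where(y_pred < 0, penalty, 1.0)
    # Loss is weight * residual**2, so per-sample gradient and hessian are:
    grad = 2.0 * weight * residual
    hess = 2.0 * weight
    return grad, hess

# Usage (assumes lightgbm is installed):
# model = lightgbm.LGBMRegressor(objective=asymmetric_l2)
# model.fit(X, y)
```

Note that with a custom objective LightGBM works on raw scores, so any default transformation of predictions is the caller's responsibility.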

oneflow
dangkai4u commented Dec 31, 2021

In oneflow, there are the following cross-entropy losses:

  • binary_cross_entropy_loss
  • binary_cross_entropy_with_logits_loss
  • sparse_cross_entropy
  • distributed_sparse_cross_entropy
  • cross_entropy
  • sparse_softmax_cross_entropy
  • softmax_cross_entropy

In pytorch, there are the following cross-entropy losses:

  • binary_cross_entropy
  • binary_cross_entropy_with_logits
  • cross_entropy

As this shows, the cross-entropy loss APIs in oneflow are redundant and duplicated, which easily confuses users, so they should be streamlined. Beyond that, label smooth
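For reference, the `*_with_logits` overlap the comment points at: a with-logits variant is the plain loss with the sigmoid folded in, usually in a numerically stable fused form. A framework-free numpy sketch of the equivalence (implemented here from the standard formulas, not taken from oneflow or pytorch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, target):
    # Plain binary cross-entropy on probabilities p in (0, 1).
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

def bce_with_logits(x, target):
    # Fused, numerically stable form operating directly on logits x:
    # max(x, 0) - x * target + log(1 + exp(-|x|)).
    return np.maximum(x, 0) - x * target + np.log1p(np.exp(-np.abs(x)))

logits = np.array([-2.0, 0.5, 3.0])
target = np.array([0.0, 1.0, 1.0])
# The two compute the same loss; the fused form avoids overflow for
# large-magnitude logits.
assert np.allclose(bce_with_logits(logits, target),
                   bce(sigmoid(logits), target))
```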

Wikipedia