automl
Describe the issue:
While computing channel dependencies, `reshape_break_channel_dependency` runs the following code to ensure that the number of input channels equals the number of output channels:

```python
in_shape = op_node.auxiliary['in_shape']
out_shape = op_node.auxiliary['out_shape']
in_channel = in_shape[1]
out_channel = out_shape[1]
return in_channel != out_channel
```
This is correct
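The check can be read as a standalone function; a minimal sketch, assuming NCHW tensors (so index 1 is the channel dimension) and the `op_node.auxiliary` metadata dict from the snippet above:

```python
def reshape_break_channel_dependency(op_node):
    # Assumes NCHW layout: shape index 1 is the channel dimension.
    in_shape = op_node.auxiliary['in_shape']    # e.g. (N, C_in, H, W)
    out_shape = op_node.auxiliary['out_shape']  # e.g. (N, C_out, H', W')
    # The channel dependency is treated as broken when the counts differ.
    return in_shape[1] != out_shape[1]
```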
Feature Description
We want to enable users to specify the value ranges for any argument in the blocks.
The following code example shows a typical use case: the user specifies the number of units in a DenseBlock to be either 10 or 20.
Code Example
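A plausible version of the truncated example, assuming `keras_tuner`'s `Choice` hyperparameter and AutoKeras's functional `AutoModel` API:

```python
import autokeras as ak
import keras_tuner

# Requested behavior: pass a Choice so num_units is searched over
# {10, 20} instead of being a single fixed integer.
input_node = ak.StructuredDataInput()
output_node = ak.DenseBlock(
    num_units=keras_tuner.engine.hyperparameters.Choice("num_units", [10, 20])
)(input_node)
output_node = ak.ClassificationHead()(output_node)
model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=3)
```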
Is there an existing issue for this?
- I have searched the existing issues
Is your feature request related to a problem? Please describe.
I think it would be helpful to support Elasticsearch, because it is one of the most used search engines.
Describe the solution you'd like.
No response
Describe an alternate solution.
No response
Anything else? (Additional Context)
_No response_
Can Autosklearn handle Multi-Class/Multi-Label Classification and which classifiers will it use?
I have been trying to use AutoSklearn with multi-class classification, so my labels look like this:
```
0 1 2 3 4 ... 200
1 0 1 1 1 ... 1
0 1 0 0 1 ... 0
1 0 0 1 0 ... 0
1 1 0 1 0 ... 1
0 1 1 0 1 ... 0
1 1 1 0 0 ... 1
1 0 1 0 1 ... 0
```
I used this code:

```python
y = y[:, (65,67,54,133,122,63,102
```
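For multi-label targets, auto-sklearn accepts a binary indicator matrix like the one above directly in `fit`; a minimal sketch with synthetic data (the 60-second budget is just for illustration):

```python
import autosklearn.classification
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split

# y is a binary indicator matrix, one column per label, like the matrix above.
X, y = make_multilabel_classification(n_samples=200, n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=60,  # small budget, illustration only
)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))
```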
- As a user, I wish featuretools `dfs` would take a string as `cutoff_time` as well as a datetime object.
Code Example

```python
# assumes: import featuretools as ft and an existing EntitySet `es`
fm, features = ft.dfs(entityset=es,
                      target_dataframe_name='customers',
                      cutoff_time="2014-1-1 05:00",
                      instance_ids=[1],
                      cutoff_time_in_index=True)
```
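Until strings are supported, the same call works by parsing the string up front; a sketch under the same assumptions (`es` is an existing EntitySet):

```python
import pandas as pd
import featuretools as ft

cutoff = pd.to_datetime("2014-1-1 05:00")  # convert the string before the call
fm, features = ft.dfs(entityset=es,
                      target_dataframe_name='customers',
                      cutoff_time=cutoff,
                      instance_ids=[1],
                      cutoff_time_in_index=True)
```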
Related: awslabs/autogluon#1479
Add a scikit-learn compatible API wrapper of TabularPredictor:
- TabularClassifier
- TabularRegressor
Required functionality (may need more than listed; a sketch follows this list):
- init API
- fit API
- predict API
- works in sklearn pipelines
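A minimal sketch of what `TabularClassifier` could look like, assuming `autogluon.tabular.TabularPredictor`'s existing `fit`/`predict` API (the wrapper itself is the proposal, not existing AutoGluon code):

```python
import pandas as pd
from sklearn.base import BaseEstimator, ClassifierMixin
from autogluon.tabular import TabularPredictor

class TabularClassifier(BaseEstimator, ClassifierMixin):
    """Sketch of the proposed sklearn-style wrapper."""

    def __init__(self, label="target", **predictor_kwargs):
        self.label = label
        self.predictor_kwargs = predictor_kwargs

    def fit(self, X, y):
        # TabularPredictor trains on one DataFrame containing the label column.
        train = pd.DataFrame(X).copy()
        train[self.label] = y
        self.predictor_ = TabularPredictor(
            label=self.label, **self.predictor_kwargs
        ).fit(train)
        return self

    def predict(self, X):
        return self.predictor_.predict(pd.DataFrame(X)).to_numpy()
```

Full pipeline support would also need explicit `get_params`/`set_params` handling, since sklearn's cloning does not play well with `**kwargs` in `__init__`.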
We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions; this lets us identify which set of features a given prediction belongs to. Here is an example of the predictions output using `tensorflow.contrib.estimator.multi_class_head`:

```
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
 "scores": [0.068196
```
Hello everyone,
First of all, I want to take a moment to thank all contributors and people who supported this project in any way ;) you are awesome!
If you like the project and have any interest in contributing to or maintaining it, you can contact me here or send me a message privately:
- Email: nidhalbacc@gmail.com
PS: You need to be familiar with Python and machine learning.
Problem
Some of our transformers & estimators are not thoroughly tested or not tested at all.
Solution
Use OpTransformerSpec and OpEstimatorSpec base test specs to provide tests for all existing transformers & estimators.
Describe the feature you'd like
Currently our CLI offers a way to install the Python packages that are required for a given integration. However, some of our integrations also have system requirements that are necessary to make them work (graphviz, kubectl, etc.).
All system requirements should be listed at the integration level, just
When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1.
For instance, the metric value shown for some model in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I've noticed that when one of the R2 metric values in the L
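This looks like the usual minimize-everything convention: frameworks that always minimize a single objective store maximization metrics such as r2 negated, and the display should flip the sign back. A toy illustration (not mljar-supervised code):

```python
# Metrics that are better when larger get negated so one minimizer can
# rank every model; reports should negate them again before display.
MAXIMIZED_METRICS = {"r2", "auc"}

def to_objective(metric_name, value):
    return -value if metric_name in MAXIMIZED_METRICS else value

print(to_objective("r2", 0.41))   # -0.41, matching the Leaderboard
print(to_objective("rmse", 3.2))  # 3.2, unchanged
```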
@HuangChiEn From the console message, it is stuck at the step of building the ensemble model (sorry for not making that explicit in the message). You can verify this by removing `"ensemble": True` from the settings.
Originally posted by @sonichi in microsoft/FLAML#536 (comment)
Suggestion: Modify https://github.com/microsoft/FLAML/blob/c1e1299855dcea378591628a
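A sketch of that check, assuming a typical FLAML setup (`X_train`/`y_train` are placeholders for your data):

```python
from flaml import AutoML

automl = AutoML()
settings = {
    "time_budget": 60,
    "task": "classification",
    # "ensemble": True,  # removed to test whether ensembling is where it stalls
}
automl.fit(X_train, y_train, **settings)  # X_train, y_train: your data
```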
Description
Per https://discuss.ray.io/t/how-do-i-sample-from-a-ray-datasets/5308, we should add a
`random_sample(N)` API that returns N records from a Dataset. This can be implemented via a `map_batches()` followed by a `take()`. cc @simon-mo @clarkzinzow
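A sketch of that idea, assuming pandas-format batches (`fraction` is a made-up knob and would need to be large enough that at least N rows survive):

```python
import ray

ds = ray.data.range(10_000)

def random_sample(ds, n, fraction=0.01):
    # Randomly keep ~fraction of each batch, then take the first n survivors.
    sampled = ds.map_batches(
        lambda df: df.sample(frac=fraction),
        batch_format="pandas",
    )
    return sampled.take(n)

print(random_sample(ds, 5))
```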
Use case
Random sample is useful for a variety of scenarios, including creating training batches and downsampling the dataset for