hyperparameter-optimization
Here are 589 public repositories matching this topic...
Describe the issue:
While computing channel dependencies, reshape_break_channel_dependency runs the following code to check whether the number of input channels equals the number of output channels:
```python
in_shape = op_node.auxiliary['in_shape']
out_shape = op_node.auxiliary['out_shape']
# Assumes NCHW layout: dimension 1 is the channel dimension.
in_channel = in_shape[1]
out_channel = out_shape[1]
# True means the reshape op breaks the channel dependency.
return in_channel != out_channel
```
This is correct
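For illustration (this example is mine, not NNI's code), a flatten/reshape op typically changes the size of dimension 1, so the check above reports the dependency as broken:

```python
# A flatten op maps (N, C, H, W) -> (N, C*H*W): dimension 1 changes,
# so in_channel != out_channel evaluates to True (dependency broken).
in_shape = [32, 64, 7, 7]           # e.g. output of a conv layer
out_shape = [32, 64 * 7 * 7]        # after flatten
print(in_shape[1] != out_shape[1])  # True
```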
Expected behavior
GridSampler should stop the optimization once all grid points have been evaluated.
Environment
- Optuna version: 3.0.0b1.dev
- Python version: 3.8.6
- OS: macOS-10.16-x86_64-i386-64bit
- (Optional) Other libraries and their versions:
Error messages, stack traces, or logs
See steps to reproduce.
Steps to reproduce
In the following code, optimize s
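The snippet above is cut off; here is a minimal sketch of the kind of reproduction described (the search space and objective are my assumptions, not the reporter's exact code):

```python
import optuna

# Hypothetical 2x2 grid; the reporter's actual search space is not shown.
search_space = {"x": [0, 1], "y": [0, 1]}

def objective(trial):
    x = trial.suggest_categorical("x", [0, 1])
    y = trial.suggest_categorical("y", [0, 1])
    return x + y

study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
# Expected: the study stops after the 4 grid points are evaluated,
# even though n_trials allows more.
study.optimize(objective, n_trials=10)
```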
Can Autosklearn handle Multi-Class/Multi-Label Classification and which classifiers will it use?
I have been trying to use AutoSklearn for multi-label classification, so my labels look like this:
```
0  1  2  3  4  ...  200
1  0  1  1  1  ...    1
0  1  0  0  1  ...    0
1  0  0  1  0  ...    0
1  1  0  1  0  ...    1
0  1  1  0  1  ...    0
1  1  1  0  0  ...    1
1  0  1  0  1  ...    0
```
I used this code:
```python
y = y[:, (65,67,54,133,122,63,102
```
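For reference, a minimal sketch of multi-label fitting with auto-sklearn on synthetic data (which estimators end up supporting the multi-label setting is decided internally by auto-sklearn; this is not the reporter's code):

```python
from sklearn.datasets import make_multilabel_classification
from autosklearn.classification import AutoSklearnClassifier

# Synthetic data: y is a binary indicator matrix, one column per label,
# shaped like the table above.
X, y = make_multilabel_classification(n_samples=200, n_labels=3, random_state=0)

automl = AutoSklearnClassifier(time_left_for_this_task=60)
automl.fit(X, y)
print(automl.predict(X[:5]))
```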
Related: awslabs/autogluon#1479
Add a scikit-learn-compatible API wrapper for TabularPredictor:
- TabularClassifier
- TabularRegressor
Required functionality (may need more than listed; a rough sketch follows the list):
- init API
- fit API
- predict API
- works in sklearn pipelines
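A rough sketch of what such a wrapper could look like (the class name comes from the list above; the constructor parameters and DataFrame handling are my assumptions, not an agreed design):

```python
import pandas as pd
from sklearn.base import BaseEstimator, ClassifierMixin
from autogluon.tabular import TabularPredictor

class TabularClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical sklearn-style wrapper around TabularPredictor."""

    def __init__(self, label="_target", time_limit=None):
        self.label = label
        self.time_limit = time_limit

    def fit(self, X, y):
        # TabularPredictor trains on one DataFrame that contains the label column.
        train_data = pd.DataFrame(X).copy()
        train_data[self.label] = y
        self.predictor_ = TabularPredictor(label=self.label)
        self.predictor_.fit(train_data, time_limit=self.time_limit)
        return self

    def predict(self, X):
        return self.predictor_.predict(pd.DataFrame(X)).to_numpy()
```

Keeping the constructor limited to plain attributes keeps get_params/set_params, and therefore sklearn pipelines and clone, working without extra code.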
To reduce overfitting, I would like to request a new parameter, "n_repetitions", which sets the number of complete sets of folds to compute for repeated k-fold cross-validation.
Cross-validation example:
```python
{
    "validation_type": "kfold",
    "k_folds": 5,
    "n_repetitions": 3,  # new
    "shuffle": True,
    "stratify": True,
    "random_seed": 123
}
```
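This maps naturally onto scikit-learn's repeated splitters; a minimal sketch of the proposed semantics (the mapping to sklearn is my assumption, not the maintainers' plan):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold

X, y = make_classification(n_samples=100, random_state=123)
# k_folds=5, n_repetitions=3, random_seed=123 from the example above:
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=123)
print(cv.get_n_splits(X, y))  # 15 splits = 5 folds x 3 repetitions
```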
The indentation of fit_kwargs_by_estimator is not consistent with custom_hp:
https://github.com/microsoft/FLAML/blob/main/flaml/automl.py#L2203
which appears inconsistent in:
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset is about to be created, if the metadata and data hash exactly match an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
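A minimal sketch of such a check (the function, column, and query below are hypothetical, not ATM's actual schema):

```python
import hashlib

def data_hash(train_path, chunk_size=1 << 20):
    """Hash the raw bytes of the training file in chunks."""
    h = hashlib.sha256()
    with open(train_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical reuse check against stored Dataset metadata:
# existing = session.query(Dataset).filter_by(
#     train_path=train_path, data_hash=data_hash(train_path)).first()
# if existing is not None:
#     return existing  # reuse instead of inserting a duplicate
```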
Describe the bug
The code could conform more closely to PEP 8 and related style conventions.
Expected behavior
Less code st
What happened + What you expected to happen
The shim tune.create_scheduler() does not properly parse the keyword parameters passed in a dictionary for the pb2 scheduler. For this call
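The reporter's call is cut off above; a minimal sketch of the shape of call being described (the keyword values are illustrative, not the original snippet):

```python
from ray import tune

scheduler_kwargs = {
    "metric": "mean_accuracy",
    "mode": "max",
    "perturbation_interval": 2,
    "hyperparam_bounds": {"lr": [1e-4, 1e-1]},  # illustrative bounds
}
# The shim should forward these kwargs to the PB2 constructor.
scheduler = tune.create_scheduler("pb2", **scheduler_kwargs)
```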