
hyperparameter-optimization

Here are 589 public repositories matching this topic...

jimthompson5802 commented May 15, 2022

What happened + What you expected to happen

The shim tune.create_scheduler() does not properly parse keyword parameters passed in a dictionary for the pb2 scheduler. For this call:

pb2_parm_dict = {"time_attr": "time_total_s", "metric": "metric_score", "mode": "min",
                 "hyperparam_bounds": {"param1": [0, 1]}}

pb2_scheduler = create_scheduler("pb2", **pb2_parm_dict)
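
For comparison, a hedged sketch of what the shim would be expected to produce here: the pb2 scheduler class constructed directly from the same keyword arguments (import path assumed from the Ray Tune version in the report).

# Sketch only: equivalent direct construction of the PB2 scheduler
from ray.tune.schedulers.pb2 import PB2

pb2_scheduler = PB2(
    time_attr="time_total_s",
    metric="metric_score",
    mode="min",
    hyperparam_bounds={"param1": [0, 1]},
)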
Labels: bug, good first issue, tune, P2
nni
pkubik commented Mar 14, 2022

Describe the issue:
While computing channel dependencies, reshape_break_channel_dependency runs the following code to check whether the number of input channels equals the number of output channels:

in_shape = op_node.auxiliary['in_shape']
out_shape = op_node.auxiliary['out_shape']
in_channel = in_shape[1]
out_channel = out_shape[1]
return in_channel != out_channel

This is correct
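
To make the check concrete, here is a minimal sketch with hypothetical NCHW shapes (illustration only, not NNI code): a reshape that changes the channel dimension (dim 1) is treated as breaking the dependency, while one that preserves it is not.

def reshape_breaks_channel_dependency(in_shape, out_shape):
    # Mirrors the check above: the dependency is considered broken
    # only when the channel dimension (dim 1) differs across the reshape.
    return in_shape[1] != out_shape[1]

print(reshape_breaks_channel_dependency((8, 64, 32, 32), (8, 64, 1024)))     # False: channels preserved
print(reshape_breaks_channel_dependency((8, 64, 32, 32), (8, 128, 16, 32)))  # True: channel count changed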

Labels: bug, help wanted, good first issue, model compression
not522 commented May 13, 2022

Expected behavior

GridSampler should stop the optimization once every point in the grid has been evaluated.

Environment

  • Optuna version: 3.0.0b1.dev
  • Python version: 3.8.6
  • OS: macOS-10.16-x86_64-i386-64bit
  • (Optional) Other libraries and their versions:

Error messages, stack traces, or logs

See steps to reproduce.

Steps to reproduce

In the following code, optimize s
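
The reporter's reproduction script is truncated above; as a rough illustration (not the exact code from the report), a GridSampler setup of this kind looks like the following, with the expectation that the study stops once the 9 grid points are exhausted even though n_trials allows more.

import optuna

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_int("y", -100, 100)
    return x ** 2 + y ** 2

search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}  # 9 grid points
study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
study.optimize(objective, n_trials=20)  # expected to stop after 9 trials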

Labels: bug, contribution-welcome, good first issue
mljar-supervised
sbcalaff commented May 16, 2022

In order to reduce overfitting, I would like to ask for a new parameter: "n_repetitions". This parameter sets the number of complete sets of folds to compute for repeated k-fold cross-validation.

Cross-validation example:

{
    "validation_type": "kfold",
    "k_folds": 5,
    "n_repetitions": 3, # new
    "shuffle": True,
    "stratify": True,
    "random_seed": 123
}
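
For reference, what n_repetitions=3 with 5 folds would correspond to in scikit-learn terms (a sketch with toy data; parameter names here are scikit-learn's, not mljar-supervised's):

import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

X = np.random.rand(100, 4)             # toy features
y = np.random.randint(0, 2, size=100)  # toy binary target

# 5 folds repeated 3 times => 15 train/validation splits; scores are averaged
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=123)
print(sum(1 for _ in rskf.split(X, y)))  # 15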
Labels: help wanted, good first issue, docs

Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization; (iii) Structuring Machine Learning Projects; (iv) Convolutional Neural Networks; (v) Sequence Models

  • Updated Apr 24, 2022
  • Jupyter Notebook
Gradient-Free-Optimizers

A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.

  • Updated Jun 19, 2021
bcyphers commented Jan 31, 2018

If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.

We should add a column which stores some kind of hash of the actual data. When a Dataset is about to be created, if its metadata and data hash exactly match an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
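
As a hedged sketch of the proposed de-duplication (the hash function and the comparison shown are illustrative, not ATM's actual schema or API):

import hashlib

def file_content_hash(path, chunk_size=1 << 20):
    # SHA-256 digest of the file's raw bytes, read in chunks
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Illustrative use when enter_data() is called again with the same train_path:
# if stored_metadata == new_metadata and stored_hash == file_content_hash(train_path):
#     reuse the existing Dataset instead of inserting a new row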

Neuraxle
