Hyper-parameters, such as how to partition the data into training, validation, and test sets, or the number of iterations, generally have no fixed theoretical rule for choosing them. Hyper-parameters are those values that must be fixed before learning starts. Hyperparameter optimization may be done by comparing tuples of hyperparameters, each evaluated with a predefined loss function on independent (held-out) data.
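As a rough sketch of that idea (assuming scikit-learn is available; the synthetic dataset, the SVC model, and the parameter grid below are made up purely for illustration), a simple grid search compares hyper-parameter tuples by their score on held-out data:

# Minimal grid-search sketch: compare hyper-parameter tuples by their
# cross-validated score, then check the winner on held-out test data.
# Dataset, model, and grid are illustrative assumptions, not from the post.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Synthetic data, then a train/test partition (itself a hyper-parameter choice).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Candidate hyper-parameter tuples to compare.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# GridSearchCV scores each tuple by cross-validation on the training split.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("best hyper-parameters:", search.best_params_)
print("held-out test accuracy:", search.score(X_test, y_test))

Grid search is only one option; random search or Bayesian methods compare the same kind of tuples, just choosing which ones to try differently.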