It seems that "validation data sets" may be used in different ways in practice.
https://stackoverflow.com/questions/46308374/what-is-validation-data-used-for-in-a-keras-sequential-model
Qin:
See
https://www.tensorflow.org/guide/keras/train_and_evaluate#using_a_validation_dataset
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
Thanks,
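For reference, here is a minimal sketch of that fit call with a validation set. The dataset names follow the TensorFlow guide linked above, but the toy model and random data are placeholders rather than our actual setup.

# Minimal sketch of passing a validation set to model.fit; the toy model
# and random data below are placeholders, not our real project setup.
import numpy as np
import tensorflow as tf

# Toy data: 1000 training samples and 200 held-out validation samples.
x_train, y_train = np.random.rand(1000, 32), np.random.randint(0, 10, 1000)
x_val, y_val = np.random.rand(200, 32), np.random.randint(0, 10, 200)

train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(64)
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(64)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# The validation set is only evaluated at the end of each epoch; its loss
# never contributes to the gradient updates on the training set.
history = model.fit(train_dataset, epochs=1, validation_data=val_dataset)
print(history.history["val_loss"], history.history["val_accuracy"])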
"After the meeting I wasn't 100% satisfied with our explanation of what the validation set is used for. I realized if we train using the training set, then applying the loss of the validation set to the training set is useless.
I found two articles on this question that sum up the answer very well:
To summarize,
You use the validation set to monitor how well your model is learning during training. It is mostly used for hyperparameter tuning, since you can retrain the model with different parameters and compare how each run scores on it. The model is never trained on it, so watching it during training shows how quickly the model is picking up patterns that actually generalize.
Overall though, we would use the test set at the very end to gauge the accuracy of the model on completely new data it has never seen before.
To me, this still seemed like something that could be done with the training set alone, but I now understand the idea as holding out a small subset of the data and checking how quickly the model learns to handle it as training progresses. Since it isn't too difficult, I will incorporate this into the models and add some graphs to chart the training (sketched below). That way, I can do some hyperparameter tuning once the transfer learning is set up and working.
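A rough sketch of what those graphs and the final test-set check might look like; this continues the placeholder model and datasets from the sketch above, and the epoch count, output file name, and test data are all made up for illustration.

# Sketch of charting training vs. validation loss from the Keras history
# object, continuing the placeholder model and datasets from the earlier sketch.
import matplotlib.pyplot as plt

history = model.fit(train_dataset, epochs=20, validation_data=val_dataset)

epochs = range(1, len(history.history["loss"]) + 1)
plt.plot(epochs, history.history["loss"], label="training loss")
plt.plot(epochs, history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("training_curves.png")

# The test set is touched only once, at the very end, to gauge accuracy on
# data the model has never seen (test data here is a random placeholder).
x_test, y_test = np.random.rand(200, 32), np.random.randint(0, 10, 200)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(64)
test_loss, test_accuracy = model.evaluate(test_dataset)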