Tuesday, May 4, 2021

multi-task deep learning training

Which Tasks Should Be Learned Together in Multi-task Learning?  https://arxiv.org/abs/1905.07553

L_total = a1 L1 + a2 L2 + a3 L3
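A minimal sketch of this fixed-weight combination; the loss values and weights below are illustrative, and in practice the a_i are hand-tuned hyperparameters:

```python
# Fixed-weight multi-task loss: L_total = a1*L1 + a2*L2 + a3*L3.
def total_loss(losses, weights):
    """Weighted sum of per-task losses."""
    assert len(losses) == len(weights)
    return sum(a * l for a, l in zip(weights, losses))

# Example with three tasks and hand-picked weights (illustrative numbers):
L = total_loss([0.8, 2.0, 0.5], weights=[1.0, 0.5, 2.0])
print(L)  # 1.0*0.8 + 0.5*2.0 + 2.0*0.5 = 2.8
```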

Easy tasks versus hard tasks: per-task losses differ in scale and learning speed, so the choice of weights a_i matters.

Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
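The idea in that paper: replace the fixed a_i with per-task learnable log-variances s_i, so L_total = sum_i exp(-s_i)*L_i + s_i, and let the optimiser balance the tasks. A toy sketch (optimising only the s_i on fixed loss values, which is illustrative; in a real network the s_i are trained jointly with the weights):

```python
import math

# Uncertainty weighting: each task i gets a learnable log-variance s_i,
#   L_total = sum_i exp(-s_i) * L_i + s_i
# so a hard task (large L_i) pushes its s_i up, shrinking its weight exp(-s_i).
def uncertainty_loss(losses, log_vars):
    return sum(math.exp(-s) * l + s for l, s in zip(losses, log_vars))

losses = [0.5, 4.0]  # an "easy" and a "hard" task (illustrative values)
s = [0.0, 0.0]
for _ in range(200):
    # dL/ds_i = -exp(-s_i)*L_i + 1, zero at s_i = log(L_i)
    grads = [-math.exp(-si) * li + 1.0 for li, si in zip(losses, s)]
    s = [si - 0.1 * g for si, g in zip(s, grads)]

# At the optimum the effective weight exp(-s_i) = 1/L_i,
# so the hard task is automatically down-weighted.
print([math.exp(-si) for si in s])  # close to [1/0.5, 1/4.0] = [2.0, 0.25]
```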

GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks
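GradNorm instead balances the per-task gradient magnitudes: it adjusts the weights w_i so that each scaled gradient norm w_i*||grad_i|| matches a common target (the mean scaled norm, modulated by each task's relative inverse training rate r_i raised to a power alpha). A simplified, illustrative single-step sketch; real GradNorm learns the weights by gradient descent on |w_i*g_i - target_i|, whereas here the weights are set to the target directly to show the balancing effect:

```python
# One simplified GradNorm-style rebalancing step (illustrative names/values).
def gradnorm_step(weights, grad_norms, loss_ratios, alpha=1.5):
    n = len(weights)
    mean_ratio = sum(loss_ratios) / n
    r = [lr / mean_ratio for lr in loss_ratios]   # relative inverse training rates
    mean_scaled = sum(w * g for w, g in zip(weights, grad_norms)) / n
    # target_i = mean scaled gradient norm * r_i ** alpha
    new_w = [mean_scaled * (ri ** alpha) / g for ri, g in zip(r, grad_norms)]
    total = sum(new_w)
    return [n * w / total for w in new_w]          # renormalise so weights sum to n

# Two tasks training at the same rate but with very different gradient norms:
w = gradnorm_step([1.0, 1.0], grad_norms=[5.0, 0.5], loss_ratios=[1.0, 1.0])
# after the step the scaled norms w[0]*5.0 and w[1]*0.5 are equal,
# i.e. the task with the larger raw gradient is down-weighted
```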




