https://builtin.com/machine-learning/contrastive-learning
Contrastive learning is a form of self-supervised learning.
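As a quick reminder of the idea behind the link above, here is a minimal numpy-only sketch of an InfoNCE-style contrastive loss (the SimCLR-like batch layout, where row i of z1 and z2 are embeddings of two augmented views of the same example, and the temperature value are assumptions for illustration, not the article's code):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss.
    z1[i] and z2[i] are embeddings of two views of example i;
    each row is pulled toward its positive pair (the diagonal)
    and pushed away from all other rows in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives lie on the diagonal
```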
Machine Learning Model Attribution Challenge (MLMAC)
https://mlmac.io/#submission
Notes from a book series on Federated Learning:
Federated Learning (FL) requires an aggregator and parties to exchange model updates. (Page 285)
FL is vulnerable to the inference of private data from the exchanged model updates.
System entities of the FL system: the attack surface refers to the parameters and data exposed during training.
Defending against data leaks matters because FL-specific attacks often take advantage of the information transmitted during FL.
Differential privacy can be applied at the party side or at the aggregator side (a minimal sketch follows these notes).
For healthcare data and personal information, there are regulatory and compliance requirements [14, 63].
Page 285: In FL, training data is not explicitly shared.
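To make the aggregator/party exchange and the party-side differential privacy note concrete, here is a minimal numpy sketch of one FedAvg-style round. The linear-regression local step, the equal-weight average, and the noise scale sigma are illustrative assumptions, not the book's implementation (a real DP deployment would clip updates and calibrate sigma to a privacy budget):

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """Hypothetical party-side step: one gradient-descent step on local data.
    The model here is linear regression, purely for illustration."""
    X, y = data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def gaussian_noise(shape, sigma):
    """Party-side DP stand-in: Gaussian noise added to the update
    before it leaves the party."""
    return np.random.normal(0.0, sigma, size=shape)

def fedavg_round(global_w, party_data, sigma=0.01):
    """One FedAvg round: each party trains locally and perturbs its
    update; the aggregator averages the updates (equal weights here)."""
    updates = []
    for data in party_data:
        w = local_update(global_w.copy(), data)
        updates.append(w + gaussian_noise(w.shape, sigma))
    return np.mean(updates, axis=0)

# Toy run: three parties, each with private data from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))
w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, parties)
```

Note that the parties never share their (X, y) data, only the (noised) model updates, which is exactly the exchange the attack-surface notes above are about.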
from: https://machinelearningmastery.com/difference-test-validation-datasets/ (a small splitting sketch follows these definitions)
– Training set: A set of examples used for learning, that is, to fit the parameters of the classifier.
– Validation set: A set of examples used to tune the parameters of a classifier, for example to choose the number of hidden units in a neural network.
– Test set: A set of examples used only to assess the performance of a fully-specified classifier.
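As a quick illustration of the three roles, here is a small numpy sketch; the 70/15/15 fractions and the single-shuffle strategy are arbitrary choices for illustration:

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle indices once, then carve out validation and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])
```

Fit on the training set, choose hyperparameters on the validation set, and touch the test set only once for the final performance estimate.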
14:16:01 From Hong Qin to Everyone:
https://en.wikipedia.org/wiki/Lasso_(statistics)
14:24:48 From Trevor Peyton to Everyone:
https://en.wikipedia.org/wiki/Hebbian_theory
14:25:25 From Trevor Peyton to Everyone:
https://en.wikipedia.org/wiki/Generalized_Hebbian_algorithm
14:26:08 From Trevor Peyton to Everyone:
https://en.wikipedia.org/wiki/Oja%27s_rule
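Since the three links above build from Hebbian theory up to Oja's rule, here is a minimal numpy sketch of Oja's rule; the toy covariance, learning rate, and sample count are assumptions for illustration:

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One step of Oja's rule: a Hebbian growth term y*x plus a decay
    term y^2*w that keeps ||w|| bounded; w converges toward the first
    principal component of the input distribution."""
    y = w @ x
    return w + lr * y * (x - y * w)

# Toy check: with zero-mean inputs, w should align with the top eigenvector.
rng = np.random.default_rng(0)
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)
w = rng.normal(size=2)
for x in X:
    w = oja_update(w, x)
w /= np.linalg.norm(w)
```

The y**2 * w decay term is what distinguishes Oja's rule from the plain Hebbian update and prevents the weights from growing without bound.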
Oldies but goldies: A. Barron, Universal Approximation Bounds for Superpositions of a Sigmoidal Function, 1993. Proves that 1 hidden layer perceptrons break the curse of dimensionality to approximate a class of smooth functions. https://en.wikipedia.org/wiki/Universal_approximation_theorem… https://en.wikipedia.org/wiki/Multilayer_perceptron
https://twitter.com/gabrielpeyre/status/1384371246461329409
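Stated loosely (the exact constant should be checked against Barron 1993): for a function f on a ball B_r of radius r whose Fourier transform has a finite first moment,

```latex
C_f = \int_{\mathbb{R}^d} \lVert \omega \rVert \, \lvert \hat{f}(\omega) \rvert \, d\omega < \infty ,
```

there is a one-hidden-layer sigmoidal network f_n with n units such that

```latex
\int_{B_r} \bigl( f(x) - f_n(x) \bigr)^2 \, \mu(dx) \;\le\; \frac{(2 r \, C_f)^2}{n} .
```

The rate O(1/n) does not depend on the input dimension d, which is the sense in which the curse of dimensionality is broken for this class of smooth functions.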
https://en.wikipedia.org/wiki/Fundamental_theorem_of_Galois_theory
I have an intuition that the engineering roadmap people follow when tinkering with optimization methods is a bit weak. For young researchers, I think studying under a theme like "Group Theory and Optimization" would be more general and could lift data science research to a higher level. Generally speaking, the number of paths by which a thing can be transformed from one arbitrary state to another is enormous; the most intuitive examples are board games of all kinds. The ideas these problems require are not essentially different from Galois's ideas for solving equations (more than a hundred years later, only very few people truly grasp their essence). If people could skillfully wield group-theoretic methods, the fate of artificial intelligence would be rewritten, with no need for brute-force methods. Klein's program back then and Langlands's program today both cannot escape Galois's groups...
https://en.wikipedia.org/wiki/Langlands_program
https://en.wikipedia.org/wiki/Felix_Klein
https://en.wikipedia.org/wiki/Erlangen_program