This site serves as my notebook and a way to communicate with my students and collaborators. Every now and then, a post may be of interest to other researchers or teachers. Views on this blog are my own. All rights to research results and findings on this blog are reserved. See also http://youtube.com/c/hongqin @hongqin
Monday, August 26, 2019
Canvas gradebook, weighted final grade
Under "Assignments", "Groups" can be added, modified, deleted.
For weighted final grades,
"Assignment" -> "..." -> "weight final grade"
Changes in Cellular Architecture During Aging (R01)
https://grants.nih.gov/grants/guide/pa-files/PA-16-442.html
algebraic multiplicity and geometric multiplicity
https://people.math.carleton.ca/~kcheung/math/notes/MATH1107/wk10/10_algebraic_and_geometric_multiplicities.html
The algebraic multiplicity of λ is the number of times λ is repeated as a root of the characteristic polynomial.
Let A be an n × n matrix with eigenvalue λ. The geometric multiplicity of λ is the dimension of the eigenspace of λ.
In general, the algebraic multiplicity and geometric multiplicity of an eigenvalue can differ. However, the geometric multiplicity can never exceed the algebraic multiplicity.
It is a fact that summing the algebraic multiplicities of all the eigenvalues of an n × n matrix A gives exactly n. If, for every eigenvalue of A, the geometric multiplicity equals the algebraic multiplicity, then A is said to be diagonalizable. As we will see, it is relatively easy to compute powers of a diagonalizable matrix.
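A quick R check of these definitions (my own example, not from the notes above): the shear matrix [[1, 1], [0, 1]] has eigenvalue 1 with algebraic multiplicity 2 but geometric multiplicity 1, so it is not diagonalizable.
```{r multiplicity-example}
A <- matrix(c(1, 0, 1, 1), nrow = 2)  # column-major, so A = [[1, 1], [0, 1]]
eigen(A)$values                       # 1, 1: algebraic multiplicity of lambda = 1 is 2
qr(A - diag(2))$rank                  # rank(A - I) = 1, so eigenspace dimension = 2 - 1 = 1
```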
yeast PIN nD
controllability for yPIN with Dang's data
```{r eigen-spectrum}
# `eig` is assumed to hold the result of eigen() on the yPIN adjacency matrix.
e <- eig$values
summary(e)
digits <- 1:30                    # rounding precisions to test
zeros  <- integer(length(digits))
debug  <- 0
for (i in seq_along(digits)) {
  # tabulate the rounded eigenvalues; the most frequent value is zero
  tmp <- sort(table(round(e, digits[i])), decreasing = TRUE)
  if (debug > 0) { print(tmp[1:3]) }
  zeros[i] <- tmp[1]              # count of the most frequent (zero) eigenvalue
}
cbind(digits, zeros)
```
```
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-59.919  -1.696   0.000   0.000   1.384  97.758

      digits zeros
 [1,]      1   228
 [2,]      2   158
 [3,]      3   140
 [4,]      4   136
 [5,]      5   135
 [6,]      6   135
 [7,]      7   135
 [8,]      8   135
 [9,]      9   135
[10,]     10   135
[11,]     11   135
[12,]     12   135
[13,]     13   135
[14,]     14   135
[15,]     15   112
[16,]     16    45
[17,]     17     9
[18,]     18     3
[19,]     19     2
[20,]     20     1
[21,]     21     1
[22,]     22     1
[23,]     23     1
[24,]     24     1
[25,]     25     1
[26,]     26     1
[27,]     27     1
[28,]     28     1
[29,]     29     1
[30,]     30     1
```
Conclusion: the count of zero eigenvalues stabilizes at 135 for rounding precisions between 5 and 14 digits.
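A related sketch (mine, assuming `A` is the yPIN adjacency matrix whose spectrum was computed above): under Yuan et al.'s exact controllability framework, the number of driver nodes equals the maximum geometric multiplicity over the eigenvalues, which for eigenvalue 0 is N - rank(A).
```{r driver-nodes}
# Assumes A is the adjacency matrix used above (eig <- eigen(A)).
# Geometric multiplicity of eigenvalue 0 = N - rank(A); per Yuan et al.,
# the driver-node count is the maximum geometric multiplicity across eigenvalues.
N <- nrow(A)
N - qr(A)$rank
```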
Sunday, August 25, 2019
embed youtube video in GitHub readme.md
http://sviridovserg.com/2017/05/22/embed-youtube-to-markdown/
http://embedyoutube.org/
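The common pattern is to wrap the video's thumbnail image in a plain Markdown link (VIDEO_ID below is a placeholder):
```markdown
[![video title](https://img.youtube.com/vi/VIDEO_ID/0.jpg)](https://www.youtube.com/watch?v=VIDEO_ID)
```
Clicking the thumbnail opens the video on YouTube; GitHub READMEs do not allow embedded players, so this is the standard workaround.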
Saturday, August 24, 2019
long term memory is a reliability model
https://medicalxpress.com/news/2019-08-memories.html
Friday, August 23, 2019
UTC work request
Everyone is able to put in a work request. Go to https://fpmis.utc.edu/, enter your information, and note which room is having the problem.
Thursday, August 22, 2019
Thunor, HTS screen
Harris, Nature Methods, 2016, 13, nmeth.3852
Hafner, Nature Methods, 2016, nmeth.3853
https://www.nature.com/articles/nmeth.3853
mass spec is a joint distribution
Mass spec can generate a 28-dimensional joint distribution; we then need to figure out the mixture of probability distributions of the sub-populations.
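A minimal sketch of the deconvolution idea (my toy example, not the actual mass-spec data), using the `mclust` package to fit a Gaussian mixture to a 1-D marginal; the real problem would be 28-dimensional, which `Mclust` also handles:
```{r mixture-sketch}
library(mclust)
set.seed(1)
# two simulated sub-populations
x <- c(rnorm(300, mean = 0, sd = 1), rnorm(200, mean = 4, sd = 0.5))
fit <- Mclust(x)   # EM fit of a Gaussian mixture; model chosen by BIC
summary(fit)       # number of components and mixing proportions
```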
Wednesday, August 21, 2019
Vanderbilt cancer heterogeneity workshop
=> Ken Lau, single cell RNAseq
https://www.mc.vanderbilt.edu/vumcdept/cellbio/laulab/research.html
scanpy
AnnData
get high-quality cells,
feature selection
dpFeature, select DEGs between clusters identified by density-peak clustering, Qiu 2017
SCENIC, coordinately regulated TF targets, Aibar, 2018
NVR, neighborhood variance ratio, Welch 2016, Chen 2019 (Lau lab).
trajectory reconstruction, many algorithms,
phage PIN and aging simulation
first-principles simulation of the phage PIN: can I regenerate the exponential survival curves?
J Virol. 2013 Dec;87(23):12745-55. doi: 10.1128/JVI.02495-13. Epub 2013 Sep 18.
The protein interaction network of bacteriophage lambda with its host, Escherichia coli.
https://www.ncbi.nlm.nih.gov/pubmed/24049175
BMC Microbiol. 2011 Sep 26;11:213. doi: 10.1186/1471-2180-11-213.
The protein interaction map of bacteriophage lambda.
https://www.ncbi.nlm.nih.gov/pubmed/21943085
Tuesday, August 20, 2019
counterfactual model
Very nice blog on counterfactual models (related to aging and forbidden interactions).
https://www.inference.vc/causal-inference-3-counterfactuals/
https://www.ssc.wisc.edu/~felwert/causality/wp-content/uploads/2013/06/1-Elwert_Causal_Intro.pdf
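A toy sketch of the counterfactual idea (my own example, simpler than the linked posts): in a linear structural model, the counterfactual outcome for a unit is obtained by holding its noise term fixed and flipping the treatment.
```{r counterfactual-sketch}
set.seed(1)
n  <- 1000
X  <- rnorm(n)                    # observed covariate
Tr <- rbinom(n, 1, plogis(X))     # treatment assignment depends on X (confounding)
U  <- rnorm(n)                    # unit-level noise, held fixed across worlds
Y    <- 2 * Tr + 1.5 * X + U      # factual outcome
Y_cf <- 2 * 0  + 1.5 * X + U      # counterfactual: same units, untreated
mean(Y[Tr == 1] - Y_cf[Tr == 1])  # effect of treatment on the treated, ~2 by construction
```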
AI meeting notes
Gradient descent is a robust form of optimization. Exact optimization on the training data probably does not generalize well to the test data.
Does gradient descent 'converge' to a global optimum?
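A minimal sketch of the convergence question (my toy example): on a convex function, gradient descent reaches the global minimum; nothing like this is guaranteed for non-convex losses such as neural networks.
```{r gradient-descent}
# f(x, y) = (x - 3)^2 + (y + 1)^2 is convex with global minimum at (3, -1)
grad_f <- function(p) c(2 * (p[1] - 3), 2 * (p[2] + 1))
p   <- c(0, 0)   # starting point
eta <- 0.1       # learning rate
for (step in 1:100) {
  p <- p - eta * grad_f(p)   # step against the gradient
}
p   # approximately (3, -1)
```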
classification,
inverse problem in materials
transportation
mobility
AI only learns what the data is (so, memorization?)
AI is just a marketing term?
learning: physics-based, foundational math, representation learning, reinforcement learning, adversarial networks,
scalability: algorithms, convergence, parallelization, mixed-precision arithmetic, hardware,
assurance: uncertainty quantification, explainability and interpretability, validation and verification, causality,
workflow: edge computing, compression, online learning, federated learning, augmented intelligence,
AI learns the way the world is, not the way it should be (bias).
AI learns the world from the data presented, not the way it is.
AI algorithms may need FDA-style drug trials and approval.
AI/ML microscopy in materials: predicting crystal structure by merging data mining and quantum physics. Chemical space is non-differentiable; chemical space is a graph. Functionalities at the nodes are defined within a context, such as a biological context, water, etc. In many cases, chemical properties are hard to predict. Materials design can be thought of as a search problem. Use mean-field descriptors. Build precision microscopy to map atoms?! Open data. Jupyter papers.
Localization: CNN, precision: Gaussian
theory-experiment matching.
Hypothesis-driven science = forward mode: P(data | theory) P(theory), based on domain expertise.
Q: how to get training data at atomic levels?
AI in health, Gina Tourassi
Johnson, KW, J Am Coll Cardiol, 2018, 71(23),
https://www.sciencedirect.com/science/article/pii/S0735109718344085
AI in mammogram scans by MIT/MGH
basal cell carcinoma, by deep learning,
genes and biology are responsible for 10% of our health and well-being.
modeling health instead of disease.
Current Opinion in Biotechnology, 2019, v58, by Eberhard Voit
https://www.sciencedirect.com/science/article/pii/S0958166918301915
Wednesday, August 7, 2019
Yuan method on adjacency matrix controllability
Yuan clearly used a weighted matrix, with j -> i as the direction. So column-to-row indicates direction?
Wikipedia: "In directed graphs, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree can be computed by summing the entries of the corresponding row." The question now is: does the choice between i -> j and j -> i matter? It can be tested with a star-shaped network with outward versus inward arrows. A quick examination shows both star networks have the same number of minimal control nodes. Indeed, the eigenvalues of a matrix and its transpose are the same; see https://yutsumura.com/eigenvalues-of-a-matrix-and-its-transpose-are-the-same/.
So, i -> j versus j -> i does not matter! This is somewhat shocking to me.
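A quick numerical check of the star-network argument (my sketch): the outward and inward stars have adjacency matrices that are transposes of each other, so their spectra coincide.
```{r star-eigenvalues}
n <- 5
A_out <- matrix(0, n, n)
A_out[1, 2:n] <- 1    # hub -> leaves, with the convention A[i, j] = 1 meaning i -> j
A_in  <- t(A_out)     # leaves -> hub
eigen(A_out)$values   # all (numerically) zero: the matrix is nilpotent
eigen(A_in)$values    # identical spectrum
```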
Eigenvalues of a Matrix and its Transpose are the Same
https://yutsumura.com/eigenvalues-of-a-matrix-and-its-transpose-are-the-same/
Recall that the eigenvalues of a matrix are roots of its characteristic polynomial.
Hence if the matrices A and A^T have the same characteristic polynomial, then they have the same eigenvalues.
So we show that the characteristic polynomial p_A(t) = det(A - tI) of A is the same as the characteristic polynomial p_{A^T}(t) = det(A^T - tI) of the transpose A^T.
We have p_{A^T}(t) = det(A^T - tI) = det((A - tI)^T) = det(A - tI) = p_A(t), using the facts that (tI)^T = tI and det(M^T) = det(M) for any square matrix M.
Therefore we obtain p_{A^T}(t) = p_A(t), and we conclude that the eigenvalues of A and A^T are the same.
Remark: Algebraic Multiplicities of Eigenvalues
Remark that since the characteristic polynomials of A and the transpose A^T are the same, it furthermore follows that the algebraic multiplicities of the eigenvalues of A and A^T are the same.