Monday, August 26, 2019

Canvas gradebook, weighted final grade

Under "Assignments", "Groups" can be added, modified, deleted.

For weighted final grades,
"Assignment" -> "..." -> "weight final grade"

Changes in Cellular Architecture During Aging (R01)

algebraic multiplicity and geometric multiplicity

The algebraic multiplicity of λ is the number of times λ is repeated as a root of the characteristic polynomial.

Let A be an n × n matrix with eigenvalue λ. The geometric multiplicity of λ is the dimension of the eigenspace of λ.

In general, the algebraic multiplicity and geometric multiplicity of an eigenvalue can differ. However, the geometric multiplicity can never exceed the algebraic multiplicity.
It is a fact that summing the algebraic multiplicities of all the eigenvalues of an n × n matrix A gives exactly n. If, for every eigenvalue of A, the geometric multiplicity equals the algebraic multiplicity, then A is said to be diagonalizable. As we will see, it is relatively easy to compute powers of a diagonalizable matrix.
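
A quick R sketch of these definitions (my own toy matrices, not from the notes): a defective matrix where the geometric multiplicity falls short of the algebraic multiplicity, and a diagonalizable matrix whose powers are cheap via A^k = P D^k P^-1.

```{r multiplicity-example}
# Defective matrix: the characteristic polynomial is (2 - t)^2,
# so the eigenvalue 2 has algebraic multiplicity 2 ...
A <- matrix(c(2, 1,
              0, 2), nrow = 2, byrow = TRUE)
eigen(A)$values                   # 2, 2

# ... but its eigenspace is one-dimensional:
# geometric multiplicity = n - rank(A - 2I) = 1 < 2, so A is not diagonalizable.
2 - qr(A - 2 * diag(2))$rank      # 1

# A matrix with distinct eigenvalues is diagonalizable, and its powers
# factor through the eigendecomposition: A^k = P D^k P^{-1}.
B  <- matrix(c(4, 1,
               2, 3), nrow = 2, byrow = TRUE)   # eigenvalues 5 and 2
eb <- eigen(B)
P  <- eb$vectors
P %*% diag(eb$values^5) %*% solve(P)   # equals B %*% B %*% B %*% B %*% B
```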

yeast PIN nD

controllability for yPIN with Dang's data

```{r eigen-spectrum}
# Distribution of the eigenvalues computed earlier (assumed stored in `e`).
summary(e)

# Count how many eigenvalues round to zero at each rounding precision.
digits <- 1:30
zeros  <- digits
debug  <- 0
for (i in seq_along(digits)) {
  tmp <- sort(table(round(e, digits[i])), decreasing = TRUE)
  if (debug > 0) { print(tmp[1:3]) }
  zeros[i] <- tmp[1]   # count of the most frequent rounded value (taken to be 0)
}
cbind(digits, zeros)
```

```
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
-59.919  -1.696   0.000   0.000   1.384  97.758 
      digits zeros
 [1,]      1   228
 [2,]      2   158
 [3,]      3   140
 [4,]      4   136
 [5,]      5   135
 [6,]      6   135
 [7,]      7   135
 [8,]      8   135
 [9,]      9   135
[10,]     10   135
[11,]     11   135
[12,]     12   135
[13,]     13   135
[14,]     14   135
[15,]     15   112
[16,]     16    45
[17,]     17     9
[18,]     18     3
[19,]     19     2
[20,]     20     1
[21,]     21     1
[22,]     22     1
[23,]     23     1
[24,]     24     1
[25,]     25     1
[26,]     26     1
[27,]     27     1
[28,]     28     1
[29,]     29     1
[30,]     30     1
```

Conclusion: the frequency of zero eigenvalues stabilizes at 135 for rounding precisions between 5 and 14 digits.

Friday, August 23, 2019

UTC work request

Everyone is able to put in a work request. Go here. Put in your information and what room is having the problem.

Wednesday, August 21, 2019

Latex submission to BMC

All LaTeX files and the template should be listed as "Manuscript".

colony counting

sectioned colony counting is difficult.

Vanderbilt cancer heterogeneity workshop

=> Ken Lau, single cell RNAseq


get high-quality cells,
feature selection

dpFeature, select DEGs between clusters identified by density-peak clustering, Qiu 2017
SCENIC, coordinately regulated TF targets, Aibar 2018
NVR, neighborhood variance ratio, Welch 2016, Chen 2019 (Lau lab).

trajectory reconstruction, many algorithms,

phage PIN and aging simulation

first-principles simulation of the phage PIN: can I regenerate the exponential survival curves?
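
A minimal first-pass sketch of that question (my own toy model; the node count, essential set, and failure rate below are made-up parameters, not from either paper): if each protein node fails independently at a constant per-step rate and the phage dies when any essential node is lost, the hazard is constant, so the simulated survival curve should come out exponential.

```{r phage-survival-sketch}
set.seed(1)
n_nodes   <- 50       # hypothetical PIN size
essential <- 1:10     # hypothetical essential subset of nodes
p_fail    <- 0.005    # per-node, per-step failure probability
n_phage   <- 1000     # number of simulated phage particles
t_max     <- 100

sim_one <- function() {
  alive <- rep(TRUE, n_nodes)
  for (t in 1:t_max) {
    alive <- alive & (runif(n_nodes) > p_fail)   # independent node failures
    if (!all(alive[essential])) return(t)        # death on first essential loss
  }
  t_max
}
lifespan <- replicate(n_phage, sim_one())

# Fraction surviving past each time point; an (approximately) straight line
# on the log scale indicates an exponential survival curve.
tt   <- 1:(t_max - 1)
surv <- sapply(tt, function(t) mean(lifespan > t))
plot(tt, surv, log = "y", type = "l", xlab = "time", ylab = "fraction surviving")
```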

The protein interaction network of bacteriophage lambda with its host, Escherichia coli. J Virol. 2013 Dec;87(23):12745-55. doi: 10.1128/JVI.02495-13. Epub 2013 Sep 18.

The protein interaction map of bacteriophage lambda. BMC Microbiol. 2011 Sep 26;11:213. doi: 10.1186/1471-2180-11-213.

Tuesday, August 20, 2019

linearly dependent rows in a matrix

counterfactual model

Very nice blog on counterfactual models (related to aging and forbidden interactions).

algebraic topology and ODE

AI meeting notes

gradient descent is a robust form of optimization. Exact optimization on the training data probably does not generalize well to the test data.

does gradient descent 'converge' to a global optimum?
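
A tiny R illustration of that question (my own example, not from the meeting): on a non-convex function, plain gradient descent converges to whichever local minimum the starting point leads to, not necessarily the global one.

```{r gd-sketch}
f      <- function(x) x^4 - 3 * x^2 + x      # non-convex, two local minima
fprime <- function(x) 4 * x^3 - 6 * x + 1    # its derivative
gd <- function(x, lr = 0.01, steps = 1000) {
  for (i in 1:steps) x <- x - lr * fprime(x) # plain gradient descent
  x
}
gd(-2)   # ends near x = -1.30, the global minimum
gd( 2)   # ends near x =  1.13, only a local minimum
```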

inverse problem in materials

AI only learns what the data is (so, memorization?)

AI is just a marketing term?

learning: physics-based, foundational math, representation learning, reinforcement learning, adversarial networks,

scalability: algorithms, convergence, parallelization, mixed-precision arithmetic, hardware,

assurance: uncertainty quantification, explainability and interpretability, validation and verification, causality,

workflow: edge computing, compression, online learning, federated learning, augmented intelligence,

AI learns the way the world is, not the way it should be (bias).
AI learns the world from the data presented, not the way it actually is.

AI algorithms may need FDA-style drug trials and approval.

AI/ML microscopy in materials: predicting crystal structure by merging data mining and quantum physics. Chemical space is non-differentiable; chemical space is a graph. Functionalities at the nodes are defined within a context, such as a biological context, water, etc. In many cases, chemical properties are hard to predict. Materials design can be thought of as a search problem. Use mean-field descriptors. Build precision microscopy to map atoms?! Open data. Jupyter papers.
Localization: CNN; precision: Gaussian.
Theory-experiment matching.
Hypothesis-driven science = forward mode, P(data|theory) P(theory), based on domain expertise.
Q: how to get training data at atomic levels?
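
For reference, the forward/inverse framing in that note is just Bayes' rule (my gloss, not from the talk): the forward mode evaluates the likelihood P(data|theory), and the prior P(theory) carries the domain expertise.

$$
P(\text{theory} \mid \text{data}) = \frac{P(\text{data} \mid \text{theory})\, P(\text{theory})}{P(\text{data})} \propto P(\text{data} \mid \text{theory})\, P(\text{theory})
$$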

AI in health, Gina Tourassi
Johnson KW, J Am Coll Cardiol, 2018, 71(23),
AI in mammogram scans by MIT/MGH,
basal cell carcinoma, by deep learning,

genes and biology are responsible for 10% of our health and well-being.

modeling health instead of disease.

Current Opinion in Biotechnology, 2019, v58, by Eberhard Voit

Wednesday, August 7, 2019

Yuan method on adjacency matrix controllability

Yuan clearly used a weighted matrix, with j->i as the direction. So column-to-row indicates direction?
Wikipedia: "In directed graphs, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree can be computed by summing the entries of the corresponding row." The question now is whether i->j versus j->i matters. It can be tested using a star-shaped network with outward and inward arrows. A quick examination shows that both star networks should have the same number of minimal control nodes; see the sketch below. Indeed, the eigenvalues of a matrix and its transpose are the same; see the proof that follows.
So i->j versus j->i does not matter! This is somewhat shocking to me.
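
A small R check of the star-network argument (my own construction; the driver-node count uses Yuan et al.'s exact-controllability formula, the maximum geometric multiplicity over all eigenvalues):

```{r star-controllability}
n <- 5
A_out <- matrix(0, n, n)
A_out[1, 2:n] <- 1       # hub (node 1) -> leaves, row-to-column convention
A_in  <- t(A_out)        # leaves -> hub

# A and t(A) share a characteristic polynomial, hence a spectrum.
eigen(A_out, only.values = TRUE)$values   # all zeros
eigen(A_in,  only.values = TRUE)$values   # the same

# Driver-node count = max over eigenvalues of n - rank(lambda * I - A);
# here the only eigenvalue is 0, and rank is invariant under transposition.
n - qr(A_out)$rank       # 4 driver nodes
n - qr(A_in)$rank        # 4 as well
```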

Eigenvalues of a Matrix and its Transpose are the Same

Recall that the eigenvalues of a matrix are the roots of its characteristic polynomial.
Hence if the matrices $A$ and $A^T$ have the same characteristic polynomial, then they have the same eigenvalues.
So we show that the characteristic polynomial $p_A(t) = \det(A - tI)$ of $A$ is the same as the characteristic polynomial $p_{A^T}(t) = \det(A^T - tI)$ of the transpose $A^T$.
We have
$$
\begin{aligned}
p_{A^T}(t) &= \det(A^T - tI) \\
&= \det(A^T - tI^T) && \text{since } I^T = I \\
&= \det\big((A - tI)^T\big) \\
&= \det(A - tI) && \text{since } \det(B^T) = \det(B) \text{ for any square matrix } B \\
&= p_A(t).
\end{aligned}
$$

Therefore $p_{A^T}(t) = p_A(t)$, and we conclude that the eigenvalues of $A$ and $A^T$ are the same.

Remark: Algebraic Multiplicities of Eigenvalues

Since the characteristic polynomials of $A$ and the transpose $A^T$ are the same, it follows that the algebraic multiplicities of the eigenvalues of $A$ and $A^T$ are also the same.