Tuesday, September 26, 2023

UTC J1 scholar visiting student

 


I attached a form to this email and summarized below the documents I need to issue the DS2019.

Checklist of Documents to be Submitted by the Department and J-1 Applicant with This Form
  1. Proof of financial support letter or a bank statement (in English) (J-1 scholar $1500 per month = $18,000 per year)
  2. Copy of passport (visitor and all accompanying family members), DS2019, and Visa Stamp Page.
  3. Proof of English proficiency (or a letter from the department) for non-native English speakers
  4. CV 
  5. Invitation or appointment letter from the sponsoring UTC college or department
  6. Extracurricular cultural activity plan (This is a one-page letter from the sponsoring department explaining how the visitor will engage in cultural exchange activities.)
  7. Health insurance waiver request form (J-1 visitors are required to hold health insurance. If you do not submit this form, you will be charged approximately $1,800/year for medical coverage through the UTC-sponsored insurance plan.)
Feel free to let me know if you have questions.

Additional questions: Is this person only doing research, not teaching? Are any family members (J2) joining them?



 

Friday, September 22, 2023

Sp24 CSE course planning

 EMCS 201/101, Card Auditorium, 141 students

 EMCS 301, 302, lecture, 50 students

 ECS 404 has 58 computers

ECS 423 has 48 computers


CPSC 1100, section 1: 48 students, lecture MW 2pm-3:50pm, Card Auditorium; Lab R
CPSC 1100, section 2: 48 students; Lab T 2pm-4:30pm, ECS 404


Thursday, September 21, 2023

9/21 quantum circuit

review the Bloch sphere definition and the definition of a quantum state, using IBM Quantum Composer to interactively explore the states

one qubit. Tip: manually edit the script for fast editing. 

two qubits

Q-sphere view

https://learning.quantum-computing.ibm.com/tutorial/composer-user-guide#q-sphere-view 
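
A minimal Qiskit sketch of the same exploration in code instead of the Composer GUI (my own example, assuming Qiskit is installed; not from the IBM tutorial):

```python
# Build one- and two-qubit circuits and inspect their statevectors,
# mirroring what Composer shows interactively.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# One qubit: H sends |0> to the superposition (|0> + |1>)/sqrt(2).
one = QuantumCircuit(1)
one.h(0)
print(Statevector(one))

# Two qubits: H then CNOT gives the Bell state (|00> + |11>)/sqrt(2),
# the same state the Q-sphere view visualizes.
two = QuantumCircuit(2)
two.h(0)
two.cx(0, 1)
print(Statevector(two))
```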


Midterm review? 


Can |11> ⊗ |11> be an entangled state?

 

No, the state \(|11\rangle \otimes |11\rangle\) is not an entangled state; it's actually a separable or product state. This means that it can be written as a tensor product of the individual states of each qubit, and the state of each qubit can be described independently of the other. 


In a truly entangled state, like the Bell states, it becomes impossible to write the joint state as a simple tensor product of individual qubit states. This results in the condition where measuring one qubit will immediately give you information about the other, regardless of the distance between them. 


In the state \(|11\rangle \otimes |11\rangle\), each qubit is already in a well-defined state of its own and measuring one doesn't provide any information about the other. This is unlike entangled states where the whole point is that the qubits are not in well-defined individual states but are correlated in such a way that the state of one depends on the state of the other.


To summarize, \(|11\rangle \otimes |11\rangle\) is not an entangled state, and the state of each qubit can be described independently of the others.
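
A quick numpy check of this (my own illustration, not part of the original answer): reshape the amplitude vector across a bipartition; the state is a product state across that cut exactly when the resulting matrix has rank 1 (Schmidt rank 1).

```python
import numpy as np

def schmidt_rank(state, d_a, d_b):
    """Rank of the amplitude vector reshaped across a d_a x d_b cut."""
    return np.linalg.matrix_rank(state.reshape(d_a, d_b))

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ket11 = np.kron(ket1, ket1)                        # |11>

product = np.kron(ket11, ket11)                    # |11> tensor |11>
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

print(schmidt_rank(product, 4, 4))  # 1 -> separable across the middle cut
print(schmidt_rank(bell, 2, 2))     # 2 -> entangled
```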

Friday, September 15, 2023

Quantum computing, Ch5, architecture

§5.1 bits and qubits

definition 5.1.2

tensor product of an 8-bit register

Eq 5.24, general state of a two-qubit system

Eq 5.26, an example of an entangled two-qubit state (see the sketch below)
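
For Eq 5.24's general state c00|00> + c01|01> + c10|10> + c11|11>, a useful fact: it factors into a product of two one-qubit states exactly when c00*c11 - c01*c10 = 0. A small numpy sketch of that test (mine, not the book's; Eq 5.26 itself isn't reproduced here, and (|00>+|11>)/sqrt(2) stands in as the standard entangled example):

```python
import numpy as np

def is_product(c00, c01, c10, c11, tol=1e-12):
    """True iff c00|00>+c01|01>+c10|10>+c11|11> is a product state."""
    return abs(c00 * c11 - c01 * c10) < tol

print(is_product(1, 0, 0, 0))   # |00>: True
s = 1 / np.sqrt(2)
print(is_product(s, 0, 0, s))   # (|00>+|11>)/sqrt(2): False
```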


§5.2 classical gates

NOT gate 

AND gate, example Eq 5.30, 5.31, 5.33

Circuit Example 5.2.2
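
In this encoding a bit b is the standard basis column vector e_b and two bits are their Kronecker product, so AND becomes a 2x4 matrix. A quick numpy check in that spirit (my own code, not the book's; the matrix below is the standard one):

```python
import numpy as np

AND = np.array([[1, 1, 1, 0],    # row 0: inputs mapping to output bit 0
                [0, 0, 0, 1]])   # row 1: only input 11 maps to output bit 1

bit = {0: np.array([1, 0]), 1: np.array([0, 1])}
for a in (0, 1):
    for b in (0, 1):
        out = AND @ np.kron(bit[a], bit[b])
        print(f"{a} AND {b} = {int(out[1])}")
```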


§5.3 reversible gates

NOT gate

CNOT gate
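
A quick numpy check (my own, not the book's) that these gates are reversible: each is its own inverse, so applying it twice is the identity.

```python
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]])
CNOT = np.array([[1, 0, 0, 0],   # first qubit controls the second
                 [0, 1, 0, 0],   # (qubit-ordering conventions vary by text)
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(np.array_equal(NOT @ NOT, np.eye(2)))    # True
print(np.array_equal(CNOT @ CNOT, np.eye(4)))  # True
```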


§5.4 quantum gates

Pauli gates

Measurement gate

Figure 5.6 Bloch sphere
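
The Pauli gates as matrices, with a check (my own addition, separate from Fig. 5.6) that each is Hermitian, so it can serve as an observable, and unitary, so it is a valid gate:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

for name, P in [("X", X), ("Y", Y), ("Z", Z)]:
    hermitian = np.allclose(P, P.conj().T)            # P == P^dagger
    unitary = np.allclose(P @ P.conj().T, np.eye(2))  # P P^dagger == I
    print(name, "hermitian:", hermitian, "unitary:", unitary)
```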




Transformer tutorials

 

Yes, there are several tutorials available on transformers in Python for AI. One such tutorial is available on Turing. It provides an introduction to transformers and their benefits compared to recurrent neural networks, and it walks through some real-world case scenarios using Hugging Face transformers.

Another tutorial, on Machine Learning Mastery, provides a step-by-step guide to training the transformer model for neural machine translation. It covers preparing the training dataset, applying a padding mask to the loss and accuracy computations, and plotting the training and validation loss curves.

If you’re interested in learning about the Transformer model architecture in detail, you can check out the tutorial from AAAI 2023. It aims to share recent developments on unified neural architectures that process different input modalities and learn to solve diverse tasks from the perspective of Transformer architectures. The goal is to equip attendees with “everything they need to know about Transformers.”

I hope this helps! Let me know if you have any other questions.

Sunday, September 10, 2023

GPT: Reinforcement Learning

 Certainly! Reinforcement Learning (RL) is a type of machine learning paradigm where an agent learns to make decisions by interacting with an environment. The fundamental idea is to learn optimal behavior or policy by trial and error, receiving feedback in the form of rewards or penalties. Here's a breakdown of the key components and the general process:


### Key Components:


1. **Agent**: The learner or decision-maker.

2. **Environment**: The context or space where the agent operates.

3. **State (s)**: A specific situation or configuration the environment can be in.

4. **Action (a)**: A move or decision the agent can make in a given state.

5. **Reward (r)**: A numerical value received by the agent as feedback after taking an action in a state.

6. **Policy (π)**: A strategy that defines the mapping between states and actions.

7. **Value Function (V)**: A function that estimates the expected long-term reward for each state under a particular policy.

8. **Q-function (Q)**: A function that estimates the expected return (reward) of taking a particular action in a particular state under a specific policy.


### General Process:


1. **Initialization**: Initialize the policy (randomly or based on some heuristic), and set initial values for state and action-value functions.


2. **Exploration**: The agent explores the environment by taking actions based on the current policy or some exploration strategy (e.g., ε-greedy, where with probability ε, a random action is chosen).


3. **Observation**: After taking an action, the agent observes the new state and receives a reward from the environment.


4. **Learning**: Update the value functions (V or Q) based on the observed reward and the new state. This is usually done using algorithms like Q-Learning, SARSA, or various forms of Deep Q Networks (DQN) when neural networks are involved.


5. **Policy Update**: Optionally, update the policy based on new value function estimates. Methods like Policy Iteration or Actor-Critic models can be used for this.


6. **Loop**: Continue the process of exploration, observation, learning, and policy update until a termination condition is met (e.g., maximum number of episodes, minimal change in value function, etc.).
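
To make steps 1-6 concrete, here is a minimal tabular Q-learning sketch with an ε-greedy policy. The 5-state chain environment is hypothetical, invented purely for illustration:

```python
import random

N_STATES, ACTIONS = 5, (0, 1)              # toy chain of states 0..4
alpha, gamma, eps = 0.1, 0.9, 0.3          # learning rate, discount, exploration

def step(s, a):
    """Action 1 moves right, action 0 moves left; only state 4 pays reward 1."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1), s2 == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # step 1: initialization
for episode in range(200):
    s, done = 0, False
    while not done:
        # step 2: exploration via epsilon-greedy
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)           # step 3: observe reward and next state
        # step 4: Q-learning update; the greedy policy it induces is the
        # implicit policy update of step 5
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2                             # step 6: loop until the episode ends

print([round(max(q), 2) for q in Q])       # state values grow toward state 4
```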


### Types of Reinforcement Learning:


1. **Model-Free vs. Model-Based**: In model-free RL, the agent learns directly from the rewards without understanding the environment dynamics. In model-based RL, the agent tries to learn the model of the environment.


2. **Value-Based vs. Policy-Based**: In value-based methods, the focus is on finding the optimal value function, and the policy is implicitly defined by it. In policy-based methods, the focus is directly on finding the optimal policy.


3. **Off-Policy vs. On-Policy**: In off-policy learning, the learning and behavior policies are different. In on-policy learning, the same policy is used for both.


4. **Single-Agent vs. Multi-Agent**: In single-agent RL, there's only one agent learning to interact with the environment. In multi-agent RL, multiple agents learn to interact either cooperatively or competitively.


5. **Tabular vs. Function Approximation**: In tabular RL, the value functions are represented in a tabular form. In function approximation methods like Deep RL, neural networks are used to approximate the value functions.


Reinforcement Learning has been successfully applied in various domains such as game playing (e.g., AlphaGo), robotics, natural language processing, healthcare, and finance, among others.

GPT: dynamic network and attention matrices

 The concept of using attention mechanisms, including attention matrices, for modeling dynamic networks is an area of active research. However, the search did not yield any papers specifically focused on using "flash attention matrices" for this purpose. Below are some relevant papers that discuss the use of attention mechanisms in modeling dynamic networks:


### Relevant Papers


1. **[Temporal motif-based attentional graph convolutional network for dynamic link prediction](https://dblp.org/rec/journals/ida/WuCZPH23) (2023)**

   - **Summary**: The paper proposes a temporal motif-based attentional graph convolutional network model (TMAGCN) that is superior to state-of-the-art baselines on the dynamic link prediction task. It suggests that temporal motifs can manifest the essential dynamic mechanism of the network.

   - **Citation Count**: 0


2. **[DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks](https://dblp.org/rec/conf/wsdm/SankarWGZY20) (2020)**

   - **Summary**: DySAT is a neural architecture that learns node representations to capture dynamic graph structural evolution. The paper validates the effectiveness of jointly modeling structural and temporal self-attention.

   - **Citation Count**: 279


3. **[Foundations and Modeling of Dynamic Networks Using Dynamic Graph Neural Networks: A Survey](https://arxiv.org/pdf/2005.07496) (2020)**

   - **Summary**: This work provides a comprehensive survey of dynamic graph neural network models and establishes a foundation for dynamic networks with consistent, detailed terminology and notation.

   - **Citation Count**: 130


4. **[DMGCRN: Dynamic Multi-Graph Convolution Recurrent Network for Traffic Forecasting](https://arxiv.org/pdf/2112.02264) (2021)**

   - **Summary**: The paper proposes a dynamic multi-graph convolution recurrent network (DMGCRN) that can model spatial and temporal correlations simultaneously.

   - **Citation Count**: 4


5. **[Global Spatiotemporal Graph Attention Network for Sea Surface Temperature Prediction](https://dblp.org/rec/journals/lgrs/GaoLYX23) (2023)**

   - **Summary**: The paper proposes a global spatiotemporal graph attention network (GSTGAT) that combines graph neural networks for modeling spatial correlations and achieves superior performance.

   - **Citation Count**: 0


While these papers do not specifically mention "flash attention matrices," they do indicate that attention mechanisms, including attention matrices, can be effectively used for modeling dynamic networks. Would you like to know more about any of these papers?

Saturday, September 9, 2023

SFS background check email

Dear SFS applicant,

To further consider your application to the SFS program, we need to verify that you can pass a background check. According to the Tennessee Bureau of Investigation, “The FBI allows members of the general public to obtain their own criminal history for record or review. However, these background checks cannot be used for employment or licensing. Click the vendor link below for more information or call 1 (877) 783-4187.” See https://www.tn.gov/tbi/divisions/cjis-division/background-checks.html

 

We therefore ask that you voluntarily obtain your own criminal history, for a $50 fee, through the TBI’s IdentoGo link at

https://www.identogo.com/services/history-check/fbi-history-check?filter=consumer-services

We also ask that you voluntarily share your background check report with the SFS committee. Please be aware that you must pay the $50 fee yourself; UTC will not reimburse it.

 

Thanks, 

 

XYZ

Principal Investigator, SFS at UTC

Wednesday, September 6, 2023

basic quantum system, 4.2 Observable, and others, IBM quantum experience, hello world example

Chapter 4

4.1 quantum states

particle and waves

basic quantum system: a particle at discrete positions on a line; single particle spin system

state

complex amplitudes

superposition

normalized state

spin

Transition amplitudes: In quantum mechanics, the transition amplitude is a complex number that encodes the probability amplitude for a system to evolve from one state to another.  Imagine you're flipping through radio stations. Each station you might land on can be thought of as a "state." Now, the transition amplitude tells you how likely it is for your radio to transition from one specific station to another as you flip through the channels. It gives you a complex number, and the square of its magnitude tells you the probability of making that specific transition.
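
A small numpy version of this (my example, not the book's): the transition amplitude from |psi> to |phi> is the inner product <phi|psi>, and its squared magnitude is the transition probability.

```python
import numpy as np

psi = np.array([1, 1]) / np.sqrt(2)    # start state (|0> + |1>)/sqrt(2)
phi = np.array([1, 0])                 # end state |0>

amplitude = np.vdot(phi, psi)          # <phi|psi>; vdot conjugates phi
print(amplitude, abs(amplitude) ** 2)  # 0.7071..., transition probability 0.5
```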


Start here on 9/7

4.2 Observable

hermitian operator 

expected value of observing \Omega repeatedly on the same state \psi

https://docs.quantum-computing.ibm.com/start/hello-world

variance

summary: observables are represented by hermitian operators. 

expected value of observing \Omega on |\psi>
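
A small numpy sketch of both quantities (my own example; the observable \Omega here is just the Pauli X, chosen arbitrarily):

```python
import numpy as np

Omega = np.array([[0, 1], [1, 0]])  # a hermitian operator: Omega == Omega^dagger
psi = np.array([1, 0])              # state |0>

exp_val = np.vdot(psi, Omega @ psi).real                        # <psi|Omega|psi>
variance = np.vdot(psi, Omega @ Omega @ psi).real - exp_val**2  # <Omega^2> - <Omega>^2
print(exp_val, variance)            # 0.0 and 1.0 for X measured on |0>
```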


Tuesday, September 5, 2023

CPEN3700 UG hourly rate justification

 

The success of CPEN 3700 greatly relies on the expertise of a competent lab assistant, particularly to guide students through intricate circuit-building exercises. Such a role requires prior exposure to the nuances and challenges of the subject matter. Unfortunately, none of our current graduate assistants possess the specialized background in circuits and computer engineering needed for this role. Therefore, it would be highly beneficial to engage an exceptional undergraduate student who has previously excelled in both the lecture and lab components of this course. This individual would require minimal training and could provide immediate, effective assistance, making a compelling case for offering a pay rate of $20 per hour for this critical function.

Chapter 4, basic quantum theory, 4.1 and 4.2.

 9/5 Tue


Chapter 4

4.1 quantum states

particle and waves

basic quantum system: a particle at discrete positions on a line; single particle spin system

state

complex amplitudes

superposition

normalized state

spin

Transition amplitudes: In quantum mechanics, the transition amplitude is a complex number that encodes the probability amplitude for a system to evolve from one state to another.  Imagine you're flipping through radio stations. Each station you might land on can be thought of as a "state." Now, the transition amplitude tells you how likely it is for your radio to transition from one specific station to another as you flip through the channels. It gives you a complex number, and the square of its magnitude tells you the probability of making that specific transition.

4.2 Observable

hermitian operator // stop here. 

expected value of observing \Omega repeatedly on the same state \psi

https://docs.quantum-computing.ibm.com/start/hello-world

variance

summary: observables are represented by hermitian operators. 

expected value of observing \Omega on |\psi>