Friday, December 20, 2024

2026 NIAID Omnibus Broad Agency Announcement HHS-NIH-NIAID-BAA2025-1

2026 NIAID Omnibus Broad Agency Announcement HHS-NIH-NIAID-BAA2025-1 Now Available

Notice Number:
NOT-AI-25-017

Key Dates

Release Date:

December 19, 2024

Related Announcements

None

Issued by

National Institute of Allergy and Infectious Diseases (NIAID)

Purpose

The National Institute of Allergy and Infectious Diseases (NIAID), one of 27 institutes of the National Institutes of Health, an agency within the Department of Health and Human Services (DHHS), conducts and supports research to understand, treat, and ultimately prevent the myriad infectious, immunologic, and allergic diseases that threaten millions of human lives. Through a variety of research grants and contracts, NIAID’s Division of Microbiology and Infectious Diseases (DMID) specifically supports extramural research to develop new medical countermeasures (MCMs) against potential agents of bioterrorism, drug-resistant pathogens, and emerging and re-emerging infectious diseases. This Broad Agency Announcement (BAA) is soliciting proposals to advance the research and development of promising candidate therapeutics, vaccines, and diagnostics for biodefense and emerging infectious diseases.

The Omnibus BAA is governed by Federal Acquisition Regulation (FAR) 6.102(d)(2) and FAR 35.016, as well as the NIH Policy Manual, Manual Chapter 6035, Broad Agency Announcements. A BAA may be used as a solicitation mechanism for basic and applied research directed toward advancing the state-of-the-art or increasing knowledge or understanding and that part of development not related to the development of a specific system or hardware procurement. BAAs are general in nature, identifying areas of research interest, and shall only be used when meaningful proposals with varying technical/scientific approaches can be reasonably anticipated.

Offerors responding to this BAA will be required to submit separate, detailed technical and business proposals designed to meet the Technical Objectives described for each Research Area and/or Topic proposed. The Statement of Work (SOW), including the specific technical requirements and performance specifications, shall be developed and proposed by the Offeror, not the Government.

Proposals received in response to this BAA are NOT evaluated against each other since they are not submitted in accordance with a common SOW issued by the Government. Instead, Research and Technical Objectives will be provided in the BAA that describe individual Research Areas in which the Government is interested. Proposals received in response to the BAA will be evaluated in accordance with the Evaluation Factors for Award specified in the announcement. The Government reserves the right to conduct discussions with all, some, one, or none of the proposals received in response to this BAA. If discussions are conducted, the Government reserves the right to suggest modifying, adding or deleting milestones, decision points, research plans, processes, schedules, budget or product. The Government also reserves the right to make awards without discussions. Additionally, the Government reserves the right to accept proposals in their entirety or to select only portions of proposals for award. Multiple awards are anticipated. Selection for award under this BAA will be based upon the evaluation factors, importance to the agency programs, and the availability of funds.

The Research Areas included in this NIAID OMNIBUS BROAD AGENCY ANNOUNCEMENT No. HHS-NIH-NIAID-BAA2025-1, as well as the projected amounts of available funding, are discussed below. Dates for receipt of proposals are identified separately for EACH Research Area within the solicitation.

Description:

Research Area 001 – Development of Candidate Therapeutics, Vaccines, and In Vitro Diagnostics for Antimicrobial-Resistant (AMR) Bacterial or Fungal Pathogens

For Research Area 001, there are three (3) separate Topics – A, B, and C. Offerors may submit a proposal in response to Topics A, B, and/or C. If proposing to multiple Topics, Offerors must submit separate technical and business proposals for each Topic. 

Topic A: Therapeutics for AMR Bacterial or Fungal Pathogens

The objective of Topic A is to develop new therapeutic products against severe infections and/or drug-resistant strains of the following bacterial and fungal pathogens:

a. Pseudomonas aeruginosa and/or Acinetobacter baumannii; OR
b. Candida auris, Cryptococcus spp., Aspergillus fumigatus, and/or Mucorales.

For the purpose of this Topic, “therapeutic” activity refers to the cure of disease, by elimination or substantial reduction of infective pathogens, through administration of a pharmaceutical agent after symptoms of disease are clinically observable. An antimicrobial therapeutic candidate refers to an advanced lead series, optimized leads, or a product candidate that is a new chemical entity and either a small molecule (e.g., natural products, nucleosides, or peptides of ≤40 amino acids), a monoclonal antibody or nanobody conjugate/fusion product, or a bacteriophage product. The following are not included: proteins, other biological entities, and conjugates of such entities (except monoclonal antibodies, nanobodies, and bacteriophages).

This Topic will support lead optimization, pre-clinical Investigational New Drug (IND) enabling studies, and clinical Phase I trials of lead candidates with demonstrated therapeutic activities. For some pathogens, the development of a therapeutic product under the U.S. Food and Drug Administration’s (FDA) Animal Rule will be supported.

Topic B: Vaccines for AMR Bacterial Pathogens

The objective of Topic B is to protect human health and well-being by advancing vaccine candidates for the following ESKAPE bacterial pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species.

For the purpose of this Topic, the definition of a lead vaccine candidate is a candidate in which the antigen(s), adjuvant (if applicable), vaccine platform (e.g., mRNA, viral vector, subunit, etc.), and delivery route have been selected and are clinically relevant (i.e., intended for the final clinical product), for which proof-of-concept immunogenicity in relevant animal model(s) has already been demonstrated.

This Topic will support the advancement of a promising lead candidate from pre-clinical testing through IND submission to the FDA, as well as Phase I clinical trial conduct.

Topic C: In Vitro Diagnostics for AMR Fungal Pathogens

The objective of Topic C is to develop innovative platform technologies that speed the identification of infection from among a broad panel of fungi and profile phenotypic antifungal susceptibility. This emphasis aligns with NIAID’s goal of addressing persistent challenges in the clinical management of mycological infections and alleviating the burden of antifungal resistance.

The diagnostic test system must detect analytes from at least one, and preferably several, of the following agents and markers:

  • Candida spp. and associated resistance markers
  • Aspergillus fumigatus and associated resistance markers
  • Coccidioides spp.
  • Mucorales

Funding for Research Area 001: NIAID estimates that one to two awards may be issued for this Research Area for a total cost of up to $8.5 million for the non-severable base period across all contracts (direct and indirect costs combined). The total duration of a proposed contract should be consistent with the nature and complexity of the offeror’s proposed research. The total performance period comprised of the base and any options proposed by an Offeror should not exceed five (5) years.

Proposals Due Date and Time: February 21, 2025, 3:00 PM Eastern Time

Research Area 002 – Development of Direct Acting Antivirals (DAA) for Viral Families of Pandemic Potential

This Research Area aims to develop safe and effective antivirals to combat viruses of pandemic potential, as well as to build sustainable platforms for targeted drug discovery and the development of a robust pipeline of candidates. Proposals MUST focus on antivirals that:      

  • Directly modify viral target function (not through the modulation of the host responses); AND
  • Act by reducing viral burden in early stages of disease; AND
  • Act against viruses of pandemic potential (i.e., Bunyaviridae, Coronaviridae, Filoviridae, Flaviviridae, Orthopoxviridae, Paramyxoviridae, Picornaviridae, and Togaviridae); AND
  • Are new chemical entities limited to small molecules (e.g., natural products, nucleosides, or peptides of ≤40 amino acids) and nanobody conjugates/fusion products that act directly on viral targets and functions (not through the modulation of the host responses); AND
  • Have safety profiles and suitable routes of administration for broad outpatient use.

For the purpose of this Topic, “therapeutic” activity refers to the elimination or substantial reduction of infective pathogens by administration of a pharmaceutical agent after viral challenge. A “therapeutic” candidate refers to an advanced lead series, optimized leads, or a product candidate that is a new chemical entity and either a small molecule (e.g., natural products, nucleosides, or peptides of ≤40 amino acids) or a nanobody conjugate/fusion product. The following are not included: proteins, monoclonal antibodies, other biological entities, and conjugates of such entities.

Research Area 002 will support lead optimization, pre-clinical (IND enabling) studies, and/or Phase I clinical trials. Proposed products are not required to be narrow-spectrum and may include other pathogens in their spectrum of activity, provided one of the listed pathogens is in the primary indication of the proposed Target Product Profile (TPP). Product development under the FDA’s Animal Rule (21 CFR 314 subpart I) will be supported if appropriate to the proposed pathogen target.

Funding for Research Area 002: NIAID estimates that three to four awards may be issued for this Research Area for a total cost of up to $20 million for the non-severable base period across all contracts (direct and indirect costs combined). The total duration of a proposed contract should be consistent with the nature and complexity of the offeror’s proposed research. The total performance period comprised of the base and any options proposed by an Offeror should not exceed five (5) years.

Proposals Due Date and Time: January 21, 2025, 3:00 PM Eastern Time

Any responsible offeror may submit a proposal, which shall be considered by the Agency. This BAA can be accessed through SAM.gov: https://sam.gov/opp/e1e43a392c2449e6805b9300906222a2/view. This notice does not commit the Government to award a contract.

For this solicitation, the NIAID requires proposals to be submitted online via the NIAID electronic Contract Proposal Submission (eCPS) website. Submission of proposals by facsimile or e-mail is not acceptable. For directions on using eCPS, go to the website: https://ecps.nih.gov and then click on "How to Submit."

Inquiries

Please direct all inquiries to:

Swee L. Teo
Contracting Officer
National Institute of Allergy and Infectious Diseases (NIAID) 
Telephone: 240-669-5173
Email: teosl@niaid.nih.gov 

Wednesday, December 18, 2024

DASC new courses


  1. DASC 728/828, “Deep Learning Fundamentals and Applications” (Deep Learning Fund & App) (Frank)
    “This course covers key components of a deep learning framework, including loss functions, regularization, training, and batch normalization. The course also covers several fundamental deep learning architectures such as multilayer perceptrons, convolutional neural networks, recurrent neural networks, and transformers, as well as some advanced topics such as graph neural networks and deep reinforcement learning. The class activities include traditional lectures, paper reading and presentation, and projects.”
    Prerequisites: CS 422 or CS 522 or CS 480 or CS 580 or CS 722 or CS 822 or CS 733 or CS 833 or CS 620, or other equivalent courses at the discretion of the instructor.
  2. DASC 605, “Statistical Inference and Experimental Design for Data Science” (Stat Inf & Exp Design for Data Sci) (Trent)
    “please update description”
    Prerequisites: STAT 603 and instructor approval
  3. DASC 715/815 Generative AI (3 credits)
    Course Description: This course provides a deep dive into the foundations and current advancements in generative AI. It covers key concepts such as transformer models, GANs, VAEs, and LLMs, and their applications across various fields, emphasizing both theory and hands-on learning, including ethical considerations such as fairness and bias mitigation. Students will develop a comprehensive understanding of generative AI and gain practical experience.
    Grading: Normal/Letter, Pass/Fail, Audit allowed.
    Prerequisites: Prior programming experience is expected.
  4. DASC 717/817 AI for Health Sciences (3 credits)
    Course Description: This course explores the application of AI in health sciences, focusing on machine learning, NLP, computer vision, and generative AI techniques for diagnostics, treatment planning, patient monitoring, and biomedical research. It covers precision medicine, ethical AI, and the integration of AI into practice. Students will gain a deep understanding and practical skills to develop innovative AI solutions that address real-world challenges in health sciences.
    Grading: Normal/Letter, Pass/Fail, Audit allowed.
    Prerequisites: Prior programming experience is expected.
  5. DASC 7xx/8xx, “Data-Driven Computational Imaging” (Dushan)
    “please update course number, title and description after coordination with CS”
  6. DASC 600 (Sampath)
    “please update title and description”
  7. DASC 699 Thesis Research (3 Credit Hours)
    Prerequisites: Departmental permission required
  8. DASC 697 Independent Study in Data Science (1-3 Credit Hours)
    Independent study under the direction of an instructor.
    Prerequisites: permission of the instructor
  9. DASC 668 Internship (1-3 credits) (P/F only)
    Requirements will be established by the School of Data Science and Career Development Services and will vary with the amount of credit desired. Allows students an opportunity to gain a short-duration, career-related experience.

Actually submitted 
CS 781 AI for Health Science, 

Cross-listed and/or Equivalent Courses

CS 881, DASC 781, DASC 881



CS 782 Generative AI, cross-listed with
CS 882, DASC 782, DASC 882


Monday, December 16, 2024

Alzheimer’s Disease Sequencing Project (ADSP)

 The Alzheimer’s Disease Sequencing Project (ADSP) is a comprehensive, multi-phase national consortium aimed at understanding the genetic basis of Alzheimer’s disease and related dementias. Here are the key aspects of the ADSP:

https://www.nia.nih.gov/research/dn/alzheimers-disease-sequencing-project-consortia

NO gene expression?!

## Genomic Data

- The ADSP involves whole-genome sequencing (WGS) and whole exome sequencing (WES) of samples from various cohorts.

  - **Discovery Phase**: Includes WGS for 584 samples from 113 multiplex families, WES for 5,096 AD cases and 4,965 controls, and WES of an enriched sample set comprising 853 AD cases from multiply affected families and 171 Hispanic controls[2][5][6].

  - **Follow-Up Study Phases**: The project has progressed through several phases, including the Discovery Extension Phase, Follow-Up Study Phase, and Follow-Up Study 2.0 Diversity Initiative Phase, which focus on expanding the genetic data to include more diverse populations, such as African Americans, Hispanics, and Asians[1][5].


## Phenotypic Data

- While the primary focus of the ADSP is on genomic data, it also incorporates rich phenotypic data.

  - **Clinical and Cognitive Data**: The project includes clinical cognitive data such as memory, language, and executive function scores. However, it does not directly collect neuroimaging data like T1 MRI, Amyloid-beta, or tau PET scans as part of its core sequencing efforts. Instead, these data are often integrated from other studies and consortia[1][3][6].

  - **Longitudinal and Autopsy-Confirmed Data**: The project emphasizes the use of well-phenotyped participants with autopsy-confirmed diagnoses and longitudinal data[2][5].


## Harmonized Data

- The ADSP Phenotype Harmonization Consortium (ADSP-PHC) plays a crucial role in harmonizing phenotypic data across different cohorts.

  - **ADSP-PHC**: Established to harmonize endophenotype data, including cognitive, imaging, longitudinal clinical, neuropathological, cardiovascular risk, and biomarker data. This harmonization enables modern genomic analyses and generates a perpetually curated and shared legacy dataset[3][6].


## Study Design and Objectives

- The ADSP uses both case-control and family-based study designs.

  - **Objectives**: The overarching goals include identifying new genes involved in Alzheimer’s disease, identifying gene alleles contributing to increased risk or protection against the disease, understanding why individuals with known risk factor genes do not develop AD, and identifying potential therapeutic approaches and prevention strategies[1][4][5].


## Diversity and Global Collaboration

- The ADSP places a high priority on racial/ethnic diversity, recognizing that most genetic studies have been conducted in non-Hispanic white populations.

  - **Diverse Population Initiative**: The Follow-Up Study 2.0 phase aims to conduct whole-genome sequencing on 18,500 AD cases and 18,500 controls from African American, Hispanic, and Asian populations, ensuring a more diverse sample set[1][2][5].


The ADSP is a collaborative effort involving over 350 investigators from global institutions, funded under several cooperative agreements and research grant awards, and is part of the NIA Alzheimer’s Disease Genetics Portfolio.


Citations:

[1] https://www.nia.nih.gov/research/dn/alzheimers-disease-sequencing-project-consortia

[2] https://dss.niagads.org/studies/sa000001/

[3] https://www.vumc.org/cnt/harmonization-initiative

[4] https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000572.v1.p1

[5] https://adsp.niagads.org/about/adsp-phases/

[6] https://adsp.niagads.org/funded-programs/phenotype-harmonization/

[7] https://alz-journals.onlinelibrary.wiley.com/doi/10.1002/alz.13705

[8] https://adsp.niagads.org/adsp-and-affiliates-whole-genome-sequencing-report/

Sunday, December 15, 2024

free tools that can detect AI-generated content

Here are some free tools that can detect AI-generated content, each with input limits of up to roughly 5,000 words or characters:


1. **SEO.ai**:

   - **Word Limit**: 5,000 characters.

   - **Features**: Provides a probability score indicating whether the content is AI-generated. It uses an ensemble model for more stable results, making it a reliable choice for detecting AI content[2].


2. **Smodin**:

   - **Word Limit**: No specific character limit mentioned, but it supports document uploads (PDF, DOC, DOCX).

   - **Features**: Offers a simple interface for pasting text or uploading documents. It can handle multiple languages and provides a probability score for AI detection[1][3].


3. **QuillBot**:

   - **Word Limit**: No strict limit mentioned; however, it typically processes smaller texts effectively.

   - **Features**: Offers an overall percentage likelihood of AI generation and categorizes the text into different classifications (AI-generated, human-written, etc.). It does not require sign-up for use[1][4].


4. **GPTZero**:

   - **Word Limit**: Up to 5,000 characters.

   - **Features**: Allows users to input text directly or upload documents. It analyzes the text quickly and provides insights on whether the content is likely human or AI-generated[3][7].


5. **Leap AI**:

   - **Word Limit**: Not specified, but allows document uploads.

   - **Features**: Provides a percentage score estimating AI involvement and highlights sentences with high scores for AI generation[1].


These tools vary in their specific capabilities and user interfaces, but they all provide free options for detecting potential AI-generated content effectively.


Citations:

[1] https://surferseo.com/blog/best-ai-content-detection-tools/

[2] https://seo.ai/blog/free-ai-content-detectors

[3] https://zapier.com/blog/ai-content-detector/

[4] https://www.scribbr.com/ai-tools/best-ai-detector/

[5] https://originality.ai/blog/best-ai-content-detection-tools-reviewed

[6] https://contentdetector.ai

[7] https://www.twixify.com/post/best-ai-content-detectors

Friday, December 13, 2024

EU software legislations

 

What the EU’s new software legislation means for developers

https://github.blog/open-source/maintainers/what-the-eus-new-software-legislation-means-for-developers/


Everything you never wanted to know about the R vulnerability ...but shouldn't be afraid to ask

 

Everything you never wanted to know about the R vulnerability

...but shouldn't be afraid to ask

https://aitap.github.io/2024/05/02/unserialize.html


R-bitrary Code Execution: Vulnerability in R’s Deserialization

  R-bitrary Code Execution: Vulnerability in R’s Deserialization

https://hiddenlayer.com/innovation-hub/r-bitrary-code-execution/


NIST National Vulnerability Database

 

https://nvd.nist.gov/

NIST National Vulnerability Database

CWE-502: Deserialization of Untrusted Data

CWE-502: Deserialization of Untrusted Data
https://cwe.mitre.org/data/definitions/502.html

Thursday, December 12, 2024

New R programming vulnerability exposes projects to supply chain attacks:

 

https://thehackernews.com/2024/04/new-r-programming-vulnerability-exposes.html

How about citing the following R risk issue, which affected versions up through R 4.3.1?

 

New R programming vulnerability exposes projects to supply chain attacks: 

https://thehackernews.com/2024/04/new-r-programming-vulnerability-exposes.html

A critical security vulnerability, CVE-2024-27322, has been identified in R versions 1.4.0 through 4.3.1. This flaw allows attackers to execute arbitrary code by exploiting the deserialization process of untrusted data, particularly through maliciously crafted RDS (R Data Serialization) files or R packages. The issue stems from R's handling of promise objects and lazy evaluation, enabling an attacker to embed arbitrary R code within an RDS file that executes upon loading and accessing the associated object. This vulnerability poses significant risks in environments where R packages are shared, potentially leading to widespread supply chain attacks. 

 

This issue was fixed in R 4.4.0.
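
The mechanism above is R-specific, but the underlying weakness (CWE-502, deserialization of untrusted data) is language-agnostic. As a hedged analogy only (not R code), Python's `pickle` shows the same pattern: merely loading attacker-controlled serialized data executes attacker-chosen logic. The `record` function is a hypothetical stand-in for an arbitrary payload.

```python
import pickle

# CWE-502 in a nutshell: with pickle, as with R's RDS files, loading
# attacker-controlled serialized data can run attacker-chosen code.
# The "payload" here just appends to a list so the effect is visible.

log = []

def record(msg):          # stands in for arbitrary attacker code
    log.append(msg)

class Malicious:
    def __reduce__(self):
        # pickle calls this at dump time; the returned callable runs
        # automatically at load time, before the caller sees anything.
        return (record, ("payload executed",))

blob = pickle.dumps(Malicious())
pickle.loads(blob)        # merely deserializing triggers record(...)
print(log)                # the payload ran without ever being called
```

This is why both the pickle documentation and the R advisory boil down to the same advice: never deserialize data from untrusted sources.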

Genome-wide association study between SARS-CoV-2 single nucleotide polymorphisms and virus copies during infections

 

https://journals.plos.org/ploscompbiol/article?id=10.1371%2Fjournal.pcbi.1012469&utm_source=chatgpt.com#sec008


https://doi.org/10.1371/journal.pcbi.1012469


  • Published: September 17, 2024
  • https://doi.org/10.1371/journal.pcbi.1012469

Monday, December 9, 2024

Attention-based fusion methods

 Attention-based fusion methods are sophisticated techniques used to combine information from multiple modalities or features, and they do not necessarily require token and vocabulary matching in the traditional sense. Here’s a detailed explanation based on the provided sources:


## Attention Mechanism

The core of attention-based fusion is the attention mechanism, which dynamically adjusts the relative importance of different modalities or features based on the context. This is achieved by computing attention weights that reflect how relevant each modality or feature is to the current task or state.


## Multimodal Attention Fusion

In the context of multimodal fusion, such as in video description or vision-language tasks, attention-based methods allow the model to selectively focus on different modalities (e.g., image, audio, text) when generating outputs. For example:

- The method proposed by Hori et al. uses an attention model to handle the fusion of multiple modalities, where each modality has its own sequence of feature vectors. The attention weights are computed based on the decoder state and the feature vectors, allowing the model to dynamically adjust the importance of each modality[1].
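
As a rough illustration of the idea above (a simplified sketch, not Hori et al.'s exact formulation), the following computes one attention weight per modality from a decoder state and fuses the per-modality feature vectors by weighted sum. The random features and plain dot-product scoring are illustrative assumptions standing in for learned projections.

```python
import math, random

# One feature vector per modality; one scalar attention weight per
# modality, conditioned on the decoder state.

random.seed(0)
d = 8

def vec():
    return [random.gauss(0, 1) for _ in range(d)]

modalities = {"image": vec(), "audio": vec(), "text": vec()}
decoder_state = vec()

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Relevance of each modality to the current decoder state.
scores = [dot(decoder_state, v) for v in modalities.values()]
weights = softmax(scores)              # sums to 1 across modalities

# Fused representation: attention-weighted sum of modality features.
fused = [sum(w * v[i] for w, v in zip(weights, modalities.values()))
         for i in range(d)]
```

Because the weights are recomputed at every decoding step, the model can lean on audio for one output word and on image features for the next.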


## Channel Fusion and Compound Tokens

In vision-language tasks, methods like Compound Tokens fusion use cross-attention to align visual and text tokens. Here, the model does not require exact token matching but instead uses cross-attention to retrieve compatible tokens from different modalities. The visual and text tokens are then concatenated along the channel dimension to form compound tokens, which are fed into a transformer encoder. This approach does not necessitate a direct match between tokens but rather aligns them through cross-attention[2].
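
A toy sketch of that idea, using made-up 2-channel tokens: each visual token cross-attends over all text tokens, and the retrieved text summary is concatenated to it along the channel dimension. This is illustrative only and omits the learned query/key/value projections a real model would use.

```python
import math

# 2 visual tokens and 3 text tokens, each with 2 channels (made up).
visual = [[1.0, 0.0], [0.0, 1.0]]
text = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

compound = []
for v in visual:
    # Cross-attention: score this visual token against every text token.
    weights = softmax([sum(a * b for a, b in zip(v, t)) for t in text])
    retrieved = [sum(w * t[c] for w, t in zip(weights, text))
                 for c in range(len(v))]
    compound.append(v + retrieved)    # channel concatenation: 2 + 2 = 4

# Each compound token now carries both visual and retrieved text channels.
```

The key point matches the text above: no token-to-token identity match is needed; alignment comes entirely from the attention weights.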


## Attentional Feature Fusion

For feature fusion within neural networks, attention-based methods can be applied across different layers and scales. For instance, the Attentional Feature Fusion (AFF) framework generalizes attention-based feature fusion to cross-layer scenarios, including short and long skip connections. This method uses multi-scale channel attention to address issues arising from feature inconsistency across different scales, without requiring token or vocabulary matching[3].


## Multi-criteria Token Fusion

In the context of vision transformers, Multi-criteria Token Fusion (MCTF) optimizes token fusion by considering multiple criteria such as similarity, informativeness, and size. This method uses one-step-ahead attention to measure the informativeness of tokens and does not require a direct match between tokens. Instead, it aggregates tokens based on their relevance and informativeness, minimizing information loss[4].


## Conclusion

Attention-based fusion methods are highly flexible and do not require explicit token or vocabulary matching. They dynamically adjust the importance of different modalities or features based on the context, allowing for more effective and adaptive fusion of information. These methods are applicable across various domains, including multimodal fusion, vision-language tasks, and feature fusion within neural networks.


Citations:

[1] https://openaccess.thecvf.com/content_ICCV_2017/papers/Hori_Attention-Based_Multimodal_Fusion_ICCV_2017_paper.pdf

[2] https://openreview.net/pdf?id=J9Z3MlnPU_f

[3] https://openaccess.thecvf.com/content/WACV2021/papers/Dai_Attentional_Feature_Fusion_WACV_2021_paper.pdf

[4] https://openaccess.thecvf.com/content/CVPR2024/papers/Lee_Multi-criteria_Token_Fusion_with_One-step-ahead_Attention_for_Efficient_Vision_Transformers_CVPR_2024_paper.pdf

[5] https://www.nature.com/articles/s41598-023-50408-6

[6] https://pmc.ncbi.nlm.nih.gov/articles/PMC9462790/

[7] https://openaccess.thecvf.com/content/CVPR2024/papers/Marcos-Manchon_Open-Vocabulary_Attention_Maps_with_Token_Optimization_for_Semantic_Segmentation_in_CVPR_2024_paper.pdf

newer AI techniques and trends that have emerged or gained significant traction after 2023:

 Here are some of the newer AI techniques and trends that have emerged or gained significant traction after 2023:


## Multimodal AI

Multimodal AI combines multiple modalities such as text, images, audio, and video to create more versatile and effective AI models. This approach allows models like GPT-4 to generate text from various inputs, including images and audio, and to convert between different modalities seamlessly. This trend is expected to enhance applications in fields like financial services, customer analytics, and marketing[1][5][7].


## Small Language Models (SLMs)

SLMs are smaller versions of large language models (LLMs) that can operate efficiently on fewer computing resources, making them accessible on devices like smartphones. These models, such as Microsoft's Phi and Orca, offer similar or sometimes better performance than LLMs in certain areas, democratizing AI use and reducing the need for significant financial investments[1].


## Customizable Generative AI

Customizable AI models are designed to cater to specific industries and user needs, offering more personalization and control over data. This is particularly beneficial in sectors like healthcare, legal, and financial services, where specialized terminology and practices are crucial. Customizable models also enhance privacy and security by reducing reliance on third-party data processing[1].


## Decoupled Contrastive Learning (DCL)

DCL is a new approach to contrastive learning that improves learning efficiency by removing the negative-positive-coupling (NPC) effect present in traditional contrastive learning methods like InfoNCE. This method requires fewer computational resources, smaller batch sizes, and shorter training epochs, yet achieves competitive performance with state-of-the-art models[2][4].
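
The core change can be shown numerically for a single anchor: standard InfoNCE keeps the positive pair's similarity in the normalizing denominator, while DCL drops it. The similarity values and temperature below are made-up numbers, not outputs of a trained encoder.

```python
import math

# Single-anchor numerical sketch of InfoNCE vs. the decoupled loss.
tau = 0.5
pos_sim = 0.9                    # anchor vs. its positive view
neg_sims = [0.1, -0.2, 0.4]      # anchor vs. negatives in the batch

pos = math.exp(pos_sim / tau)
negs = sum(math.exp(s / tau) for s in neg_sims)

# InfoNCE keeps the positive term in the denominator...
infonce = -math.log(pos / (pos + negs))
# ...while the decoupled loss removes it, eliminating the
# negative-positive coupling (NPC) between the gradient terms.
dcl = -math.log(pos / negs)
```

Removing `pos` from the denominator is what decouples the positive and negative gradient contributions, which is why DCL tolerates small batch sizes better than the coupled form.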


## Explainable AI (XAI)

XAI focuses on making AI models more transparent and interpretable by providing insights into how the models arrive at their decisions. Techniques such as decision trees, linear models, and rule-based systems are used to ensure that AI-driven decisions align with human values and expectations. This trend is gaining popularity as it builds trust and understanding in AI-generated outcomes[3].


## Agentic AI

Agentic AI represents a shift from reactive to proactive AI systems. These AI agents exhibit autonomy, proactivity, and the ability to act independently, setting goals and taking actions without direct human intervention. Applications include environmental monitoring, financial portfolio management, and other areas where autonomous decision-making is beneficial[7].


## Multi-view Graph Contrastive Learning

This approach adapts contrastive learning to recommendation systems by incorporating multiple views of user data. Techniques such as node dropout, edge dropout, and random walks are used to generate diverse views, enhancing the model's ability to capture underlying preferences and behaviors[6].


## Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning (CLEFT)

CLEFT is a novel method that combines efficient large language models with prompt fine-tuning for language-image contrastive learning. This approach reduces the need for extensive GPU resources and prolonged training times, making it suitable for applications with limited datasets, such as medical imaging[9].


## Retrieval-Augmented Generation

This trend involves combining generative AI models with retrieval systems to enhance the accuracy and relevance of generated content. By retrieving relevant information from a database and integrating it into the generation process, models can produce more informed and contextually accurate outputs[7].
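
A minimal sketch of that pipeline, with a toy corpus and crude word-overlap scoring standing in for an embedding-based retriever, and a prompt template standing in for the actual LLM call; all names and contents here are illustrative.

```python
# Toy end-to-end shape of retrieval-augmented generation: retrieve the
# most relevant document for a query, then build the augmented prompt a
# generator would receive.

corpus = [
    "Multimodal AI combines text, images, audio, and video.",
    "Small language models run efficiently on edge devices.",
    "Explainable AI makes model decisions interpretable.",
]

def score(query, doc):
    # Crude relevance: number of shared lowercase words.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query):
    return max(corpus, key=lambda d: score(query, d))

query = "How do small language models work?"
context = retrieve(query)
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
```

The generator then answers from the retrieved context rather than from parametric memory alone, which is the source of the accuracy gain described above.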


These techniques and trends highlight the rapid evolution and diversification of AI, enabling more efficient, versatile, and interpretable AI applications across various domains.


Citations:

[1] https://khoros.com/blog/ai-trends

[2] https://ai.meta.com/research/publications/decoupled-contrastive-learning/

[3] https://devabit.com/blog/top-11-new-technologies-in-ai-exploring-the-latest-trends/

[4] https://www.amazon.science/blog/new-contrastive-learning-methods-for-better-data-representation

[5] https://www.ibm.com/think/insights/artificial-intelligence-trends

[6] https://www.nature.com/articles/s41598-024-73336-5

[7] https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends

[8] https://gram-blogposts.github.io/blog/2024/contrast-learning/

[9] https://arxiv.org/abs/2407.21011

Tuesday, November 26, 2024

scRNA seq on detecting early Alzheimer's disease (AD).

 

Single-cell RNA sequencing of blood cells shows promise in detecting early Alzheimer's disease (AD). Several studies have demonstrated that this technique can identify specific cellular and molecular changes associated with AD:


1. A study using single-cell RNA sequencing on peripheral blood mononuclear cells (PBMCs) from AD patients and controls revealed significant alterations in immune cell subsets, particularly a decrease in B cells that correlated with disease progression[1].


2. Another study profiling 36,849 PBMCs from AD patients with amyloid-positive status identified changes in cell subset proportions and gene expression patterns, as well as alterations in T and B cell receptor diversity[5].


3. Researchers found 31 cell type-specific key genes, including human leukocyte antigen genes, and multiple immune-related pathways that were altered in AD patients' blood cells[5].


4. Single-cell sequencing has uncovered B cell-related molecular biomarkers in PBMCs that could potentially be used for early AD detection[1].


5. The technique has also revealed changes in adaptive immune responses, particularly those mediated by T cells, which may play a role in AD pathogenesis[5].


While these findings are promising, it's important to note that further research and validation are needed before single-cell sequencing of blood cells can be used as a reliable early detection method for Alzheimer's disease.


Citations:

[1] https://www.nature.com/articles/s12276-021-00714-8

[2] https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2021.645666/full

[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC9955959/

[4] https://www.nature.com/articles/s41467-020-19737-2

[5] https://pubmed.ncbi.nlm.nih.gov/34447367/

[6] https://alz-journals.onlinelibrary.wiley.com/doi/abs/10.1002/alz.060629

[7] https://molecularneurodegeneration.biomedcentral.com/articles/10.1186/s13024-022-00517-z

[8] https://alz-journals.onlinelibrary.wiley.com/doi/full/10.1002/alz.13691


Yes, there are relevant single-cell datasets available for early Alzheimer's disease (AD) detection. Several studies and databases have been developed to facilitate research in this area:


1. ssREAD (single-cell and spatial RNA-seq database for Alzheimer's disease) is a comprehensive repository that includes:

   - 1,053 samples from 67 AD-related scRNA-seq & snRNA-seq studies, totaling 7,332,202 cells

   - 381 spatial transcriptomics datasets from 18 human and mouse brain studies

   - Samples annotated with details such as species, gender, brain region, disease/control status, age, and AD Braak stages[2][3]


2. A study profiling 36,849 peripheral blood mononuclear cells (PBMCs) from AD patients with amyloid-positive status and normal controls using single-cell transcriptome and immune repertoire sequencing[5]


3. A large-scale single-cell transcriptomic atlas covering 1.3 million cells from 283 post-mortem human brain samples across 48 individuals with and without Alzheimer's disease, examining six different brain regions[6]


4. A dataset (GEO GSE157827) composed of single-nucleus RNA-sequencing of prefrontal cortex from AD patients and matched healthy controls, including 169,496 nuclei from 12 AD patients and 9 neurological control subjects[4]


These datasets provide valuable resources for researchers to investigate transcriptomic alterations in AD compared to controls at various resolutions: sub-cellular, cellular, and spatial levels. They enable the exploration of early molecular changes associated with AD, potentially leading to the development of early detection methods.


Citations:

[1] https://elifesciences.org/articles/90214

[2] https://www.biorxiv.org/content/10.1101/2023.09.08.556944v2

[3] https://www.nature.com/articles/s41467-024-49133-z

[4] https://pmc.ncbi.nlm.nih.gov/articles/PMC9955959/

[5] https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2021.645666/full

[6] https://www.nature.com/articles/s41586-024-07606-7

[7] https://www.frontiersin.org/journals/aging-neuroscience/articles/10.3389/fnagi.2023.1157792/full

[8] https://www.nature.com/articles/s12276-021-00714-8



Friday, November 22, 2024

Multi-view cost functions

 Multi-view cost functions are used in multi-view stereo (MVS) and multi-view learning algorithms to aggregate information from multiple views or data sources. The cost function typically aims to optimize the matching or reconstruction process across different views.

Key Components of Multi-view Cost Functions

  1. Photo-consistency: measures the similarity of corresponding pixels or features across different views[1][2].
  2. Spatial regularization: enforces smoothness and coherence in the reconstructed 3D geometry or learned representations[2].
  3. View aggregation: costs from individual views are combined, often using weighted sums or other fusion strategies[1][3].
Examples of Multi-view Cost Functions

  1. MVS cost aggregation:

     $$S(p, j) = \sum_i w_i(p)\, S_i(p, j)$$

     where $S(p, j)$ is the aggregated cost for pixel $p$ at depth hypothesis $j$, $w_i(p)$ is the weight for view $i$, and $S_i(p, j)$ is the matching cost from view $i$[1].

  2. Multi-view tracking cost:

     $$c_{i_1 i_2 \ldots i_N} = \ln \frac{p(Z_{i_1 i_2 \ldots i_N} \mid a)}{p(Z_{i_1 i_2 \ldots i_N})}$$

     This cost compares the likelihood of the measurements $Z_{i_1 i_2 \ldots i_N}$ belonging to an object $a$ against the likelihood that they are false positives[3].

  3. Multi-view learning reconstruction error:

     $$\sum_{i=1}^{n} \lVert x_i - g(v_i) \rVert_F^2 + \sum_{i=1}^{n} \lVert v_i - f(x_i) \rVert_F^2$$

     used in auto-encoders to minimize the error of reconstructing each view $x_i$ from its latent representation $v_i$, and vice versa[4].

Multi-view cost functions often incorporate terms for data fidelity, regularization, and view consistency. They are designed to leverage complementary information from different views while handling challenges such as occlusions and varying viewpoints.
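The weighted-sum aggregation used in MVS maps directly to array code. A minimal NumPy sketch (function names and array layout are illustrative assumptions):

```python
import numpy as np

def aggregate_cost(per_view_costs, weights):
    """Weighted sum of per-view matching costs: S(p, j) = sum_i w_i(p) * S_i(p, j).

    per_view_costs: shape (num_views, H, W, num_depths)
    weights:        shape (num_views, H, W), e.g. derived from photo-consistency
    """
    return np.einsum('ihw,ihwj->hwj', weights, per_view_costs)

def best_depth(per_view_costs, weights):
    """Pick, per pixel, the depth hypothesis with the lowest aggregated cost."""
    return aggregate_cost(per_view_costs, weights).argmin(axis=-1)
```

Down-weighting views with poor photo-consistency (e.g. occluded views) before the sum is one common way such cost functions handle occlusion.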