ChatGPT output, un-edited. Some wrong links and errors are apparent.
1. Fine-Tuning
- Definition: Adjusting pre-trained models by retraining them on a specific dataset to tailor them to a particular task or domain.
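As a rough illustration, fine-tuning amounts to continuing gradient descent from pretrained weights on a new, smaller dataset. A minimal NumPy sketch, where the "pretrained" weights and task data are invented stand-ins for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

w_pretrained = rng.normal(size=3)          # stands in for pretrained weights
X = rng.normal(size=(32, 3))               # small task-specific dataset
y = X @ np.array([1.0, -2.0, 0.5])         # desired behaviour on the new task

w = w_pretrained.copy()                    # fine-tuning starts from the
lr = 0.05                                  # pretrained weights, not from zero
initial_loss = np.mean((X @ w - y) ** 2)

for _ in range(200):                       # continue gradient descent on the
    grad = 2 * X.T @ (X @ w - y) / len(X)  # new data (MSE gradient)
    w -= lr * grad

final_loss = np.mean((X @ w - y) ** 2)
```

Real fine-tuning applies the same idea to a full network, typically with a small learning rate and often with only some layers unfrozen.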
2. Prompt Engineering
- Definition: Crafting specific inputs (prompts) to guide the behavior of large language models without altering their parameters.
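Since only the input changes, prompt engineering is essentially careful string construction. A minimal few-shot sketch (the task, examples, and template are invented, and the actual model call is out of scope):

```python
def build_prompt(examples, query):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")          # the model completes from here
    return "\n".join(lines)

examples = [("Loved it.", "positive"), ("Total waste of time.", "negative")]
prompt = build_prompt(examples, "Surprisingly good.")
```

The in-context examples steer the model's completion without any parameter updates.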
3. Model Editing via Retrieval-Augmented Generation (RAG)
- Definition: Integrating external databases or retrieval systems to improve or adapt the model's outputs without direct parameter changes.
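A minimal sketch of the retrieve-then-augment loop, using an invented toy corpus and bag-of-words similarity in place of learned embeddings (the LLM call on the augmented prompt is omitted):

```python
import numpy as np

docs = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
    "The Great Wall is in China.",
]

def tokens(text):
    return {w.lower().strip(".,?") for w in text.split()}

vocab = sorted(set().union(*(tokens(d) for d in docs)))

def embed(text):
    t = tokens(text)                       # crude bag-of-words vector
    return np.array([1.0 if w in t else 0.0 for w in vocab])

def retrieve(query):
    q = embed(query)
    scores = [q @ embed(d) / (np.linalg.norm(embed(d)) + 1e-9) for d in docs]
    return docs[int(np.argmax(scores))]

query = "Who created Python?"
context = retrieve(query)
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
```

The model's parameters never change; its behaviour adapts because the retrieved context is prepended to the input.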
4. Knowledge Injection
- Definition: Incorporating domain-specific knowledge into a model post-training.
5. Soft Prompt Tuning
- Definition: Learning a set of prompt tokens that adjust the behavior of pre-trained models without altering core weights.
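A minimal sketch of the idea, treating a frozen random linear map as the "pretrained model" and training only the prompt vector prepended to the input (all dimensions and data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 4, 5, 3                          # prompt, input, output dimensions
W = rng.normal(size=(k, m + n))            # frozen "pretrained" weights
x = rng.normal(size=n)                     # one fixed input
y = rng.normal(size=k)                     # desired output on this task

prompt = np.zeros(m)                       # the only trainable parameters
lr = 1.0 / (2 * np.linalg.norm(W[:, :m], 2) ** 2)  # step size from the
                                                   # spectral norm, for stability
def forward(p):
    return W @ np.concatenate([p, x])      # model sees [soft prompt; input]

initial_loss = np.sum((forward(prompt) - y) ** 2)
for _ in range(300):
    err = forward(prompt) - y
    prompt -= lr * 2 * W[:, :m].T @ err    # gradient w.r.t. the prompt only

final_loss = np.sum((forward(prompt) - y) ** 2)
```

In a real system the soft prompt is a sequence of learned embedding vectors prepended to the token embeddings, while every transformer weight stays frozen.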
6. Modular Transfer Learning
- Definition: Dividing models into modules (e.g., embeddings, encoders, decoders) and only updating or replacing specific components.
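A minimal sketch: an invented frozen "encoder" module paired with a swappable linear "head", where adapting to a new task trains only the head:

```python
import numpy as np

rng = np.random.default_rng(0)

W_enc = rng.normal(size=(8, 4))            # frozen pretrained encoder module

def encode(X):
    return np.tanh(X @ W_enc.T)            # shared representation (never updated)

head_b = np.zeros((2, 8))                  # fresh head module for the new task

X = rng.normal(size=(16, 4))               # invented new-task data
Y = rng.normal(size=(16, 2))

H = encode(X)
loss_initial = np.mean((H @ head_b.T - Y) ** 2)
lr = 0.1
for _ in range(200):                       # update only the head component
    grad = (H @ head_b.T - Y).T @ H / len(X)
    head_b -= lr * grad
loss_final = np.mean((H @ head_b.T - Y) ** 2)
```

Swapping in a different head for each task reuses the encoder while keeping per-task parameters small.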
7. Dynamic Reweighting
- Definition: Adjusting the influence of certain parts of the model during inference based on specific tasks or inputs.
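A minimal sketch with two invented "experts" whose outputs are mixed by an input-dependent softmax gate at inference time, so no weights change between queries:

```python
import numpy as np

def expert_short(x):
    return x * 2.0                 # imagined specialist for small inputs

def expert_long(x):
    return x + 10.0                # imagined specialist for large inputs

def gate(x):
    logits = np.array([-x, x])     # input-dependent gating scores
    e = np.exp(logits - logits.max())
    return e / e.sum()             # softmax mixing weights

def model(x):
    w = gate(x)
    return w[0] * expert_short(x) + w[1] * expert_long(x)

w_small = gate(-5.0)               # gate favours expert_short here
w_large = gate(5.0)                # and expert_long here
out_small = model(-5.0)
out_large = model(5.0)
```

Mixture-of-experts layers apply the same reweighting idea inside a network.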
8. Model Surgery
- Definition: Directly modifying neural network weights, layers, or architectures post-training.
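A minimal sketch: ablating one hidden unit of an invented toy MLP by zeroing its outgoing weights directly, with no retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(6, 4))               # input -> hidden weights
W2 = rng.normal(size=(3, 6))               # hidden -> output weights

def forward(x, w_out):
    return w_out @ np.maximum(W1 @ x, 0.0) # tiny ReLU MLP, no biases

W2_patched = W2.copy()
W2_patched[:, 2] = 0.0                     # surgically sever hidden unit 2

x = rng.normal(size=4)
before = forward(x, W2)
after = forward(x, W2_patched)             # output minus unit 2's contribution
```

Real model surgery ranges from this kind of neuron ablation to splicing in whole layers, but the mechanism is the same: editing weights post-training.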
9. Continual Learning
- Definition: Allowing a model to learn new information over time without forgetting prior knowledge.
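One family of approaches adds a penalty anchoring parameters that mattered for earlier tasks, as in elastic weight consolidation (EWC). A minimal one-parameter sketch with invented task optima: task A prefers theta = 2, task B prefers theta = 5, and the penalty keeps theta from drifting all the way to B's optimum:

```python
theta_a_opt, theta_b_opt = 2.0, 5.0
lam = 1.0                                  # penalty strength

def loss_b(theta):
    return (theta - theta_b_opt) ** 2      # new task's loss

def ewc_loss(theta):
    # New-task loss plus a quadratic anchor to the old task's solution.
    return loss_b(theta) + lam * (theta - theta_a_opt) ** 2

theta = theta_a_opt                        # start at task A's solution
lr = 0.1
for _ in range(500):
    grad = 2 * (theta - theta_b_opt) + 2 * lam * (theta - theta_a_opt)
    theta -= lr * grad                     # settles at the compromise 3.5
```

With lam = 1 the minimiser is the midpoint (theta_b_opt + theta_a_opt) / 2 = 3.5, trading new-task fit against retention. Full EWC weights each parameter's anchor by its Fisher information.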
10. Gradient Editing
- Definition: Directly modifying gradients during training to induce specific behaviors or rectify known issues.
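A minimal sketch: before each update, project the gradient to remove its component along a "protected" parameter direction, so training cannot move the model that way (the direction and numbers are invented):

```python
import numpy as np

d = np.array([1.0, 0.0, 0.0])              # protected parameter direction
d /= np.linalg.norm(d)

def edit_gradient(g):
    return g - (g @ d) * d                 # project out the forbidden component

params = np.array([0.5, 0.5, 0.5])
g = np.array([3.0, -1.0, 2.0])             # raw gradient from backprop
params_after = params - 0.1 * edit_gradient(g)
```

Gradient clipping and gradient masking are other common edits applied at the same point in the training loop.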
11. Reinforcement Learning from Human Feedback (RLHF)
- Definition: Using human evaluations to fine-tune models, particularly for aligning AI with desired ethical or stylistic outcomes.
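A minimal sketch of one ingredient of RLHF: fitting a linear reward model to pairwise human preferences with a Bradley-Terry-style loss. The preference data here is synthetic, and the subsequent policy-optimization stage (e.g. PPO against this reward) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic preference pairs: feature vectors of the human-preferred response
# and the rejected one (in practice these come from labeled comparisons).
chosen = rng.normal(loc=1.0, size=(20, 4))
rejected = rng.normal(loc=-1.0, size=(20, 4))

w = np.zeros(4)                            # linear reward model parameters
lr = 0.1
for _ in range(200):
    margin = (chosen - rejected) @ w       # r(chosen) - r(rejected)
    p = 1.0 / (1.0 + np.exp(-margin))      # Bradley-Terry preference prob.
    grad = -((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad                         # maximise log-likelihood of prefs

accuracy = np.mean((chosen - rejected) @ w > 0)
```

The learned reward then scores preferred responses higher than rejected ones, which is what the fine-tuning stage optimises against.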
12. Model Patching
- Definition: Adding or replacing specific components in a model with updated or improved modules.
13. Parameter-Free Updating
- Definition: Techniques like black-box optimization or external decision systems that modify behavior without changing core parameters.
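A minimal sketch: black-box random search over a decoding setting, guided only by an external scorer, with no gradients and no parameter updates. The scorer here is an invented stand-in for whatever external evaluation judges the model's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def external_score(temperature):
    # Invented stand-in for scoring outputs produced at this setting;
    # it simply prefers temperatures near 0.7.
    return -(temperature - 0.7) ** 2

best_t, best_score = 1.0, external_score(1.0)
for _ in range(100):                       # random search: no gradients,
    t = rng.uniform(0.0, 2.0)              # no model-parameter updates
    s = external_score(t)
    if s > best_score:
        best_t, best_score = t, s
```

The same pattern covers best-of-N sampling and other external decision layers wrapped around a fixed model.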
14. Memory Editing
- Definition: Directly modifying or updating specific "memories" in a model, allowing it to adjust responses to certain inputs or queries without retraining.
- Techniques:
- MEMIT (Mass-Editing Memory in a Transformer)
- ROME (Rank-One Model Editing)
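The core of ROME is a rank-one weight update that rewrites one key-to-value association inside a layer. A minimal linear-algebra sketch of that idea with invented matrices: after the edit, the key vector k maps to the new value v_star, while directions orthogonal to k are untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(5, 4))                # a layer's weight matrix
k = rng.normal(size=4)                     # key encoding the fact to edit
v_star = rng.normal(size=5)                # desired new associated value

# Rank-one edit: W_edited @ k == v_star, and W_edited @ x == W @ x
# for any x orthogonal to k.
W_edited = W + np.outer(v_star - W @ k, k) / (k @ k)

# A direction orthogonal to k, to check the edit is localised.
x_orth = rng.normal(size=4)
x_orth -= (x_orth @ k) / (k @ k) * k
```

The actual ROME and MEMIT methods additionally locate which layer stores the fact and solve for k and v_star from the model's activations; this sketch only shows the closed-form update itself.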
15. Multi-Modal Model Editing
- Definition: Modifying models trained on multi-modal data (e.g., text and images) for domain-specific applications.
16. Federated Learning Adjustments
- Definition: Decentralized training in which clients compute model updates on their local data and share only those updates, never the raw datasets.
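A minimal sketch of one FedAvg-style round: each client trains locally on its own (here synthetic) data, and the server averages the resulting weights in proportion to client data size:

```python
import numpy as np

rng = np.random.default_rng(0)

global_w = np.zeros(3)                     # server's current model

def local_update(w, X, y, lr=0.1, steps=50):
    w = w.copy()                           # client trains on private data
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(X)
    return w

clients = []
for n in (10, 30, 60):                     # clients with different data sizes
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, -1.0, 2.0]) + 0.1 * rng.normal(size=n)
    clients.append((n, local_update(global_w, X, y)))

# Server aggregates: weighted average of client weights; raw data never moves.
total = sum(n for n, _ in clients)
global_w = sum(n * w for n, w in clients) / total
```

Only the weight vectors cross the network; each client's dataset stays local.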
17. Meta-Learning (Learning to Learn)
- Definition: Training models to quickly adapt to new tasks with minimal data by leveraging meta-learning algorithms.
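A minimal first-order MAML-style sketch: each invented task is a scalar quadratic with its own optimum, and the meta-parameter is trained so that a single inner gradient step adapts well to any of them:

```python
import numpy as np

task_optima = [1.0, 3.0, 5.0]              # invented per-task optima
inner_lr, outer_lr = 0.1, 0.05

def task_loss(theta, c):
    return (theta - c) ** 2

def adapt(theta, c):
    return theta - inner_lr * 2 * (theta - c)   # one inner gradient step

meta_theta = 0.0
for _ in range(500):
    # First-order MAML: outer gradient evaluated at the adapted parameters.
    outer_grad = np.mean([2 * (adapt(meta_theta, c) - c) for c in task_optima])
    meta_theta -= outer_lr * outer_grad

post_adapt_loss = np.mean([task_loss(adapt(meta_theta, c), c) for c in task_optima])
initial_loss = np.mean([task_loss(adapt(0.0, c), c) for c in task_optima])
```

The meta-parameter converges to a point (here the mean of the task optima) from which one adaptation step works well across tasks, which is the "learning to learn" objective.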
This version now includes Memory Editing, covering emerging techniques such as MEMIT and ROME for direct manipulation of model-internal knowledge.