Monday, May 1, 2023

Prompt-based training and fine-tuning


Prompt-based training and fine-tuning are two different approaches to customizing a pre-trained language model for a specific task.


Fine-tuning involves training a pre-trained model on a new dataset to improve its performance on a specific task. This customization step can provide higher-quality results than prompt design alone, the ability to train on more examples than can fit into a prompt, lower-latency requests, and token savings from shorter prompts². Notably, while prompts for base models often contain multiple examples (few-shot learning), each fine-tuning training example generally consists of a single input and its associated output, with no need for detailed instructions or multiple examples in the same prompt¹.
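As a minimal sketch of that single-input, single-output format, here is how fine-tuning data is commonly prepared as a JSONL file (one JSON object per line, as accepted by services like the OpenAI API). The sentiment-classification task and the example texts are hypothetical:

```python
import json

# Hypothetical sentiment-classification task: for fine-tuning, each training
# example is one input and its expected output -- no instructions or
# few-shot examples are packed into the prompt.
training_examples = [
    {"prompt": "Great product, works perfectly. ->", "completion": " positive"},
    {"prompt": "Broke after two days, very disappointed. ->", "completion": " negative"},
    {"prompt": "Does the job, nothing special. ->", "completion": " neutral"},
]

# Fine-tuning services typically accept this data as a JSONL file:
# one JSON object per line.
with open("train.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```

The file would then be uploaded to the fine-tuning service; the short `prompt` strings here are what yield the token savings mentioned above.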


Prompt-based training, on the other hand, involves designing prompts that elicit the desired behavior from a pre-trained model without updating its weights. The main difference between the two approaches is that fine-tuning fits the model to the downstream task, while prompting elicits knowledge the model already holds³.
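For contrast, the same hypothetical sentiment task handled purely through prompt design might look like this: instructions and a few labeled examples (few-shot) are assembled into a single prompt string, and the model's weights are never touched.

```python
# Few-shot prompt design for the same hypothetical sentiment task.
# The labeled examples live inside the prompt itself, not in a training set.
few_shot_examples = [
    ("Great product, works perfectly.", "positive"),
    ("Broke after two days, very disappointed.", "negative"),
]

def build_prompt(new_review: str) -> str:
    """Assemble an instruction, the few-shot examples, and the new input."""
    lines = [
        "Classify the sentiment of each review as positive, negative, or neutral.",
        "",
    ]
    for review, label in few_shot_examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = build_prompt("Does the job, nothing special.")
print(prompt)
```

This prompt would be sent to the model as-is on every request, which is why it is longer (and costlier per call) than the fine-tuned equivalent, but requires no training step.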


Source: Conversation with Bing, 5/1/2023

(1) How to customize a model with Azure OpenAI Service - Azure OpenAI. https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning.

(2) Fine-tuning - OpenAI API. https://platform.openai.com/docs/guides/fine-tuning.

(3) Brief Introduction to NLP Prompting | Finisky Garden. https://finisky.github.io/briefintrotoprompt.en/.

(4) Can prompt engineering methods surpass fine-tuning performance ... - Medium. https://medium.com/@lucalila/can-prompt-engineering-surpass-fine-tuning-performance-with-pre-trained-large-language-models-eefe107fb60e.
