Large Language Model Adaptation with Bayesian Low-Rank Approximation (arXiv:2308.13111v1 [cs.LG])
by instadatahelp | Aug 28, 2023 | AI Blogs

Parameter-efficient fine-tuning (PEFT) enables cost-effective fine-tuning of large language models (LLMs), and low-rank adaptation (LoRA) is one of its most popular methods. However, fine-tuned LLMs often become overconfident, especially when trained on small datasets. Bayesian methods, which estimate uncertainty, can address this overconfidence and improve calibration. In this study, we propose Laplace-LoRA, a simple yet effective Bayesian method that applies the Laplace approximation to the LoRA parameters and significantly improves the calibration of fine-tuned LLMs.
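The post is only the paper's abstract, so no implementation details are given. Below is a minimal, hypothetical PyTorch sketch of the general recipe the abstract describes: fine-tune only low-rank (LoRA-style) adapter parameters with the base weights frozen, then fit a post-hoc Gaussian (Laplace) approximation over those adapter parameters and average predictions over posterior samples. The toy model, the diagonal squared-gradient curvature estimate, and all function names here are illustrative assumptions, not the authors' implementation (which operates on a pretrained transformer and would typically use a more careful curvature approximation).

```python
# Illustrative sketch of the Laplace-over-LoRA idea on a toy classifier.
# Not the authors' code: the model, curvature estimate, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, d_in, d_out, r=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # LoRA down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))         # LoRA up-projection
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())


def lora_params(model):
    # Only the LoRA adapter parameters are trainable, so only they get a posterior.
    return [p for p in model.parameters() if p.requires_grad]


def fit_diagonal_laplace(model, loader, prior_precision=1.0):
    """Crude diagonal curvature estimate (squared mini-batch gradients plus the
    prior precision), used as the posterior precision over the LoRA parameters."""
    params = lora_params(model)
    precision = [torch.full_like(p, prior_precision) for p in params]
    for x, y in loader:
        loss = F.cross_entropy(model(x), y, reduction="sum")
        grads = torch.autograd.grad(loss, params)
        for h, g in zip(precision, grads):
            h += g ** 2  # squared-gradient proxy for the Fisher/GGN diagonal
    return precision


@torch.no_grad()
def mc_predict(model, x, precision, n_samples=20):
    """Monte Carlo predictive: sample LoRA parameters from the Laplace posterior
    N(theta_MAP, diag(precision)^-1) and average the resulting softmax outputs."""
    params = lora_params(model)
    means = [p.clone() for p in params]
    probs = torch.zeros(x.shape[0], model(x).shape[-1])
    for _ in range(n_samples):
        for p, mu, h in zip(params, means, precision):
            p.copy_(mu + torch.randn_like(mu) / h.sqrt())
        probs += F.softmax(model(x), dim=-1)
    for p, mu in zip(params, means):
        p.copy_(mu)  # restore the MAP (fine-tuned) parameters
    return probs / n_samples


if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(256, 16)
    y = torch.randint(0, 3, (256,))
    loader = DataLoader(TensorDataset(X, y), batch_size=32)

    model = nn.Sequential(LoRALinear(16, 32), nn.ReLU(), LoRALinear(32, 3))
    opt = torch.optim.Adam(lora_params(model), lr=1e-2)
    for _ in range(20):  # standard LoRA-style fine-tuning gives the MAP estimate
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()

    precision = fit_diagonal_laplace(model, loader)
    print(mc_predict(model, X[:4], precision))  # averaged predictive probabilities
```

The point of this structure, as the abstract suggests, is that the Gaussian posterior is placed only over the small set of LoRA parameters, so the Laplace step stays roughly as parameter-efficient as the fine-tuning itself.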