Please use this identifier to cite or link to this item:
http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16336
Title: | Impact of Decoding Methods on Human Alignment of Conversational LLMs |
Authors: | Sharma, Yashvardhan |
Keywords: | Computer Science; Chatbot Systems; Large Language Models (LLMs); Human Alignment |
Issue Date: | Jul-2024 |
Abstract: | To be incorporated into chatbot systems, large language models (LLMs) must be aligned with human conversational conventions. However, because they are trained mainly on web-scraped data, existing LLMs sound closer to informational text than to actual human speech. In this paper, we examine the effect of decoding methods, including Beam Search, Top-K Sampling, and Nucleus Sampling, on the alignment between LLM-generated and human conversations. We present new measures of alignment in substance, style, and psychometric orientation, and experiment with two conversation datasets. Our results provide nuanced insights: better alignment is attributed to fewer beams in Beam Search and to lower values of P in Nucleus Sampling. We also find that task-oriented and open-ended datasets differ in their alignment, underscoring the importance of accounting for the context of the interaction. |
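The abstract's finding that lower values of P improve alignment refers to the filtering step of nucleus (top-p) sampling: a lower P keeps a smaller set of candidate tokens. A minimal sketch of that step (the function name and the toy distribution are illustrative, not from the paper):

```python
import numpy as np

def nucleus_filter(probs, p=0.9):
    """Top-p (nucleus) filter: keep the smallest set of tokens whose
    cumulative probability reaches p, zero out the rest, renormalize."""
    order = np.argsort(probs)[::-1]       # token ids, most probable first
    cum = np.cumsum(probs[order])         # running probability mass
    cutoff = np.searchsorted(cum, p) + 1  # size of smallest nucleus with mass >= p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# A lower p shrinks the candidate set, making generation more deterministic:
toy = np.array([0.5, 0.3, 0.15, 0.05])   # illustrative next-token distribution
print(nucleus_filter(toy, p=0.9))        # three tokens survive the filter
print(nucleus_filter(toy, p=0.5))        # only the single top token survives
```

Sampling then draws the next token from the renormalized distribution; with p=0.5 on this toy distribution the choice collapses to the most probable token, mirroring the low-P regime the paper associates with better alignment.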
URI: | https://arxiv.org/abs/2407.19526 http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16336 |
Appears in Collections: | Department of Computer Science and Information Systems |
Files in This Item:
There are no files associated with this item.