
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16336
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Sharma, Yashvardhan
dc.date.accessioned: 2024-11-12T07:05:14Z
dc.date.available: 2024-11-12T07:05:14Z
dc.date.issued: 2024-07
dc.identifier.uri: https://arxiv.org/abs/2407.19526
dc.identifier.uri: http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/16336
dc.description.abstract: To be incorporated into chatbot systems, large language models (LLMs) must be aligned with human conversational conventions. However, because they are trained mainly on web-scraped data, existing LLMs have a voice closer to informational text than to actual human speech. In this paper, we examine the effect of decoding methods on the alignment between LLM-generated and human conversations, including Beam Search, Top-K Sampling, and Nucleus Sampling. We present new measures of alignment in substance, style, and psychometric orientation, and experiment with two conversation datasets. Our results provide subtle insights: better alignment is attributed to fewer beams in Beam Search and to lower values of P in Nucleus Sampling. We also find that task-oriented and open-ended datasets perform differently in terms of alignment, indicating the importance of taking the context of the interaction into account. (en_US)
dc.language.iso: en (en_US)
dc.subject: Computer Science (en_US)
dc.subject: Chatbot systems (en_US)
dc.subject: Large Language Models (LLMs) (en_US)
dc.subject: Human Alignment (en_US)
dc.title: Impact of Decoding Methods on Human Alignment of Conversational LLMs (en_US)
dc.type: Preprint (en_US)
Appears in Collections:Department of Computer Science and Information Systems

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
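For readers unfamiliar with the decoding methods named in the abstract, the following is a minimal sketch of nucleus (top-p) sampling in plain Python. The toy vocabulary and probability values are hypothetical illustration data, not taken from the paper; production systems would apply this over a model's logits rather than a hand-written distribution.

```python
import random

def nucleus_sample(probs, p, rng=random):
    """Sample a token index from the smallest set of top tokens whose
    cumulative probability mass reaches at least p (the 'nucleus')."""
    # Order token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, mass = [], 0.0
    for i in order:
        nucleus.append(i)
        mass += probs[i]
        if mass >= p:
            break
    # Renormalize within the nucleus and sample from it.
    weights = [probs[i] / mass for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]

# Hypothetical next-token distribution over a toy vocabulary.
vocab = ["the", "a", "hello", "ok", "zzz"]
probs = [0.5, 0.3, 0.1, 0.07, 0.03]

# With a low p (here 0.5) the nucleus collapses to the single most
# probable token; the abstract reports that lower values of P
# aligned better with human conversation.
token = vocab[nucleus_sample(probs, p=0.5)]
```

Lowering `p` shrinks the candidate set and makes generation more conservative; raising it toward 1.0 admits the long tail of unlikely tokens, which is the trade-off the paper measures against human conversational style.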