Please use this identifier to cite or link to this item:
http://dspace.bits-pilani.ac.in:8080/jspui/handle/123456789/14955
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Mitra, Satanik | - |
dc.date.accessioned | 2024-05-21T09:09:13Z | - |
dc.date.available | 2024-05-21T09:09:13Z | - |
dc.date.issued | 2022 | - |
dc.identifier.uri | https://ieeexplore.ieee.org/abstract/document/10037677 | - |
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/jspui/xmlui/handle/123456789/14955 | - |
dc.description.abstract | Suicidal intention or ideation detection is an evolving research field in social media analysis. People use these platforms to share their thoughts, tendencies, opinions, and feelings about suicide, which makes the task challenging because the texts are unstructured and noisy. In this paper, we propose five BERT-based pre-trained transformer models, namely BERT, DistilBERT, ALBERT, RoBERTa, and DistilRoBERTa, for the task of suicidal intention detection. The performance of these models is evaluated using standard classification metrics, and all models are trained with the one-cycle learning rate policy. Our results show that the RoBERTa model outperforms the other BERT-based models, achieving 99.23%, 96.35%, and 95.39% accuracy on training, validation, and testing, respectively. | en_US |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.subject | Management | en_US |
dc.subject | Suicidal intention | en_US |
dc.subject | Suicidal ideation | en_US |
dc.subject | Transformers | en_US |
dc.subject | Pre-trained models | en_US |
dc.title | Suicidal Intention Detection in Tweets Using BERT-Based Transformers | en_US |
dc.type | Article | en_US |
Appears in Collections: Department of Management
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
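The abstract describes fine-tuning pre-trained BERT-family models with a one-cycle learning rate policy for binary suicidal-intention classification. The sketch below is a minimal, hypothetical illustration of such a setup using Hugging Face Transformers and PyTorch's `OneCycleLR`; the dataset wrapper, the `roberta-base` checkpoint, and the hyperparameters (`epochs`, `batch_size`, `max_lr`) are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch (not the paper's code): fine-tuning RoBERTa for
# suicidal-intention detection with a one-cycle learning rate schedule.
# Checkpoint choice and hyperparameters below are assumed, not from the paper.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, Dataset
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification


class TweetDataset(Dataset):
    """Wraps (text, label) pairs; labels: 1 = suicidal intent, 0 = not."""

    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


def train(texts, labels, epochs=3, batch_size=16, max_lr=2e-5):
    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained("roberta-base",
                                                             num_labels=2)
    loader = DataLoader(TweetDataset(texts, labels, tokenizer),
                        batch_size=batch_size, shuffle=True)

    optimizer = AdamW(model.parameters(), lr=max_lr)
    # One-cycle policy: the learning rate ramps up to max_lr, then anneals
    # back down over the full run of epochs * steps_per_epoch updates.
    scheduler = OneCycleLR(optimizer, max_lr=max_lr,
                           steps_per_epoch=len(loader), epochs=epochs)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss  # cross-entropy over the two classes
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    return model, tokenizer
```

The one-cycle schedule warms the learning rate up to its peak and then anneals it for the remainder of training, which is commonly used to stabilize short fine-tuning runs of pre-trained transformers; swapping `"roberta-base"` for `bert-base-uncased`, `distilbert-base-uncased`, `albert-base-v2`, or `distilroberta-base` (with the matching model/tokenizer classes or `AutoModel`/`AutoTokenizer`) would cover the other four models named in the abstract.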