Results 431-440 of 447 (Search time: 0.003 seconds).
Item hits:
Issue Date | Title | Author(s)
2022-07 | Multi-Period EOQ Model for Multi-Generation Technology Products With Short Product Life Cycles | Chanda, Udayan; Nagpal, Gaurav; Jasti, Naga Vamsi Krishna
2022 | Does Cross-Functional Pedagogy of Teaching a Course Help in Management Education?: Evidence From a Supply Chain Management Course | Nagpal, Gaurav; Jasti, Naga Vamsi Krishna
2023-01 | Challenges in Adoption of Business Analytics by Small Retailers: An Empirical Study in the Indian Context | Nagpal, Gaurav; Nagpal, Ankita; Jasti, Naga Vamsi Krishna
2023 | Drivers and the role of chief information officer (CIO) in digital transformation: An exploratory study | Chawla, Raghu Nandan
2021-11 | Experiments on Fraud Detection use case with QML and TDA Mapper | Mitra, Satanik
2020-06 | OBIM: A computational model to estimate brand image from online consumer review | Mitra, Satanik
2022 | Sarcasm Detection in News Headlines using Supervised Learning (Publisher: IEEE) | Mitra, Satanik
2022 | Suicidal Intention Detection in Tweets Using BERT-Based Transformers | Mitra, Satanik
2023 | A study of the mango supply chain with an emphasis on orchard operations | Manasvi, Jagarlapudi Krishna
2020-03 | In recent times, word embeddings have taken a significant role in sentiment analysis. As the generation of word embeddings needs huge corpora, many applications use pretrained embeddings. In spite of this success, word embeddings suffer from certain drawbacks: they do not capture the sentiment information of a word, contextual information in terms of part-of-speech tags, or domain-specific information. In this work we propose HIDE, a Hybrid Improved Document-level Embedding, which incorporates domain information, part-of-speech information and sentiment information into existing word embeddings such as GloVe and Word2Vec. It combines improved word embeddings into document-level embeddings. Further, Latent Semantic Analysis (LSA) has been used to represent documents as vectors. HIDE is generated by combining LSA with the document-level embeddings computed from the improved word embeddings. We test HIDE on six different datasets and show considerable improvement over the accuracy of existing pretrained word vectors such as GloVe and Word2Vec. We further compare our work with two existing document-level sentiment analysis approaches. HIDE performs better than these existing systems. | Mitra, Satanik
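The HIDE abstract above outlines a pipeline: append sentiment, part-of-speech and domain features to pretrained word vectors, aggregate them into document-level embeddings, and concatenate those with LSA document vectors. A minimal sketch of that idea follows; it is not the paper's implementation, and the random stand-in embeddings, the `sentiment` and `pos_onehot` lookups, and the mean-pooling choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random stand-ins for pretrained GloVe/Word2Vec vectors.
vocab = ["good", "bad", "movie", "plot"]
emb_dim = 8
word_emb = {w: rng.normal(size=emb_dim) for w in vocab}

# Hypothetical per-word sentiment scores and POS one-hot tags (ADJ, NOUN).
sentiment = {"good": 1.0, "bad": -1.0, "movie": 0.0, "plot": 0.0}
pos_onehot = {"good": [1, 0], "bad": [1, 0], "movie": [0, 1], "plot": [0, 1]}

def improved_word_vec(w):
    # "Improved" word embedding: pretrained vector + sentiment + POS features.
    return np.concatenate([word_emb[w], [sentiment[w]], pos_onehot[w]])

def doc_embedding(doc):
    # Document-level embedding: mean of the improved word vectors.
    return np.mean([improved_word_vec(w) for w in doc], axis=0)

docs = [["good", "movie"], ["bad", "plot"], ["good", "plot", "movie"]]

# LSA: truncated SVD of the term-document count matrix.
counts = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
lsa_vecs = U[:, :k] * S[:k]  # k-dimensional LSA vector per document

# Hybrid representation: LSA vector concatenated with the document embedding.
hybrid = [np.concatenate([lsa_vecs[i], doc_embedding(d)])
          for i, d in enumerate(docs)]
print(hybrid[0].shape)  # (2 + 8 + 1 + 2,) = (13,)
```

Each document ends up as a single vector that carries both corpus-level topical structure (the LSA part) and word-level sentiment/POS-enriched semantics, which is the combination the abstract credits for the accuracy gains.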