American Journal of Information Science and Technology


Group Emotion Recognition for Weibo Topics Based on BERT with TextCNN

Social media platforms such as Weibo have become an integral part of daily life: users discuss trending topics, share opinions, and express their emotions. As the volume of content grows, however, individuals struggle to access relevant information. To address this issue, this study applies sentiment analysis to group emotion recognition for Weibo topics. Because a single topic can involve multiple sentiment categories, the core algorithm combines BERT and TextCNN for multi-label text classification, aiming to predict the collective emotional reactions the public may have to a topic. Macro-F1 was chosen as the evaluation criterion: the baseline algorithm achieved a score of 0.3339, while the proposed model reached a modestly higher 0.3514, demonstrating its efficacy. The model exploits BERT's self-attention mechanism together with TextCNN's convolutional and pooling layers to extract local features, improving both generalization ability and sentiment classification accuracy. The multi-label classification results for group emotion recognition on Weibo topics confirm the advantage of the proposed algorithm. This study carries significant implications for understanding the public's emotional responses to popular topics on social media and provides valuable insights for further work on sentiment analysis in this domain.
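Macro-F1, the evaluation criterion quoted above, computes an F1 score per emotion label and then takes their unweighted average, so rare emotion categories weigh as much as frequent ones. The following is a minimal pure-Python sketch of macro-F1 for the multi-label setting; the toy label layout is an illustrative assumption, not the paper's actual emotion label set.

```python
def macro_f1(y_true, y_pred):
    """y_true, y_pred: equal-length lists of binary label vectors
    (one vector per sample; one 0/1 entry per emotion label)."""
    num_labels = len(y_true[0])
    f1s = []
    for j in range(num_labels):
        # Per-label counts across all samples.
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    # Macro average: unweighted mean over labels.
    return sum(f1s) / num_labels

# Toy example (hypothetical data): 3 Weibo topics, 2 emotion labels each.
truth = [[1, 0], [1, 1], [0, 1]]
preds = [[1, 0], [1, 0], [0, 1]]
print(round(macro_f1(truth, preds), 4))  # 0.8333
```

Here label 0 is predicted perfectly (F1 = 1.0) while label 1 misses one positive (F1 ≈ 0.667), giving a macro average of about 0.833; a frequency-weighted (micro) average would score the same predictions differently.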

BERT, TextCNN, Text Classification, NLP

Donghong Shan, Huili Li. (2023). Group Emotion Recognition for Weibo Topics Based on BERT with TextCNN. American Journal of Information Science and Technology, 7(3), 95-100.

Copyright © 2023 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
