NECOS: An annotated corpus to identify constructive news comments in Spanish

  1. Pilar López Úbeda
  2. Flor Miriam Plaza del Arco
  3. Manuel Carlos Díaz Galiano
  4. María Teresa Martín Valdivia
Journal: Procesamiento del lenguaje natural

ISSN: 1135-5948

Year of publication: 2021

Issue: 66

Pages: 41-51

Type: Article


Abstract

In this paper, we present the NEws and COmments in Spanish (NECOS) corpus, a collection of Spanish comments posted in response to newspaper articles. Following a robust annotation scheme, three annotators labeled each comment as constructive or non-constructive. The articles were published in the newspaper El Mundo between April 3rd and April 30th, 2018. The corpus comprises 10 news articles and 1,419 comments, manually labeled with an average Cohen's kappa of 78.97. Our current focus is the study of constructiveness and the evaluation of the Spanish NECOS corpus. To address this goal, we propose a benchmark comparing different machine learning systems based on Natural Language Processing: a traditional system and novel Transformer-based models. Specifically, we compare multilingual models with a monolingual model trained on Spanish in order to highlight the need to create resources trained on a specific language. The monolingual model fine-tuned on NECOS obtains the best result, achieving a macro-averaged F1 score of 77.24%.
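As a hedged illustration (not the authors' code), the sketch below shows the two kinds of measurement reported above using scikit-learn, which the paper cites: pairwise inter-annotator agreement via Cohen's kappa, and a traditional TF-IDF + linear SVM baseline scored with macro-averaged F1. The toy comments, labels, and split parameters are illustrative assumptions, not NECOS data.

```python
# Minimal sketch, assuming a binary constructive/non-constructive labeling task.
# Toy data only; not the NECOS corpus or the authors' pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for comments and labels (1 = constructive, 0 = non-constructive).
comments = [
    "El artículo aporta datos verificables y propone soluciones concretas.",
    "Menuda tontería, no pienso leer esto.",
    "Estoy de acuerdo con el autor, aunque falta citar la fuente del estudio.",
    "Qué pérdida de tiempo.",
] * 25
labels = [1, 0, 1, 0] * 25

# Pairwise agreement between two hypothetical annotators (Cohen's kappa).
annotator_a = labels
annotator_b = labels[:-1] + [1 - labels[-1]]  # simulate one disagreement
print("Cohen's kappa:", cohen_kappa_score(annotator_a, annotator_b))

# Traditional baseline: word n-gram TF-IDF features fed to a linear SVM,
# evaluated with the macro-averaged F1 score used in the benchmark.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.2, random_state=42, stratify=labels
)
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(X_train, y_train)
print("Macro-F1:", f1_score(y_test, baseline.predict(X_test), average="macro"))
```

The Transformer-based systems in the benchmark would replace the TF-IDF + SVM pipeline with a fine-tuned multilingual or Spanish monolingual model, but they would be scored with the same macro-averaged F1 metric.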

Funding information

This work has been partially supported by a grant from the European Regional Development Fund (ERDF), the LIVING-LANG project [RTI2018-094653-B-C21], and a scholarship [FPI-PRE2019-089310] from the Ministry of Science, Innovation and Universities of the Spanish Government.


References

  • Aulamo, M. and J. Tiedemann. 2019. The OPUS resource repository: An open package for creating parallel corpora and machine translation services. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 389–394, Turku, Finland, September–October. Linköping University Electronic Press.
  • Chatterjee, S., P. G. Jose, and D. Datta. 2019. Text classification using SVM enhanced by multithreading and CUDA. International Journal of Modern Education & Computer Science, 11(1).
  • Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37–46.
  • Conneau, A., K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. 2019. Unsupervised crosslingual representation learning at scale. arXiv preprint arXiv:1911.02116.
  • Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
  • Etim, B. 2017. The Times sharply increases articles open for comments, using Google's technology. The New York Times, 13.
  • Fujita, S., H. Kobayashi, and M. Okumura. 2019. Dataset creation for ranking constructive news comments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2619–2626.
  • Instituto Cervantes. 2018. El español: una lengua viva. https://cvc.cervantes.es/lengua/espanol_lengua_viva/pdf/espanol_lengua_viva_2018.pdf.
  • Kolhatkar, V. and M. Taboada. 2017a. Constructive language in news comments. In Proceedings of the First Workshop on Abusive Language Online, pages 11–17.
  • Kolhatkar, V. and M. Taboada. 2017b. Using New York Times Picks to identify constructive comments. In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, pages 100–105.
  • Kolhatkar, V., N. Thain, J. Sorensen, L. Dixon, and M. Taboada. 2020. Classifying constructive comments. arXiv preprint arXiv:2004.05476.
  • Kolhatkar, V., H. Wu, L. Cavasso, E. Francis, K. Shukla, and M. Taboada. 2019. The SFU Opinion and Comments Corpus: A corpus for the analysis of online news comments. Corpus Pragmatics, pages 1–36.
  • Lample, G. and A. Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
  • Liu, Y., M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
  • McHugh, M. L. 2012. Interrater reliability: the kappa statistic. Biochemia Medica, 22(3):276–282.
  • Napoles, C., A. Pappu, and J. Tetreault. 2017. Automatically identifying good conversations online (yes, they do exist!). In Eleventh International AAAI Conference on Web and Social Media.
  • Napoles, C., J. Tetreault, A. Pappu, E. Rosato, and B. Provenzale. 2017. Finding good conversations online: The Yahoo News annotated comments corpus. In Proceedings of the 11th Linguistic Annotation Workshop, pages 13–23.
  • Pedregosa, F., G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
  • Puri, S. and S. P. Singh. 2019. An efficient Hindi text classification model using SVM. In Computing and Network Sustainability. Springer, pages 227–237.
  • Swanson, R., B. Ecker, and M. Walker. 2015. Argument mining: Extracting arguments from online dialogue. In Proceedings of the 16th annual meeting of the special interest group on discourse and dialogue, pages 217–226.
  • Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008.