Text clustering based on the generation of Embeddings
Date
2022-08-19
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Pontificia Universidad Católica del Perú
DOI
Abstract
Nowadays, thanks to technological advances, mainly in the field of information technology, a large amount of information is available, most of it composed of computationally encoded signs that form units of meaning, such as texts. The variability and high volume of information navigable on the Internet make grouping reliable information a complicated task. Computational advances in natural language processing are growing every day to address these problems.
This research studies how texts are clustered through the generation of Embeddings. In particular, it focuses on applying supervised and unsupervised models with different methods in order to obtain efficient results in automatic clustering tasks.
Five datasets were used. From the implementation of the supervised models, it was determined that the best Embedding is FastText implemented with Gensim and applied to boosting-based models. For the unsupervised models, the best Embedding is GloVe applied to neural network models with an AutoEncoder and a K-means layer.
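The unsupervised pipeline (GloVe vectors compressed by an AutoEncoder, then clustered by a K-means stage) could be sketched as below. This is a minimal sketch under stated assumptions: random vectors stand in for pretrained GloVe document embeddings, and a tiny linear autoencoder trained by gradient descent replaces the thesis's neural network.

```python
# Sketch: autoencoder + K-means clustering over document embeddings.
# Random vectors stand in for pretrained GloVe embeddings (assumption).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 60 "documents" in 50 dimensions, drawn around 3 hidden group centers.
centers = rng.normal(size=(3, 50))
X = np.vstack([c + 0.1 * rng.normal(size=(20, 50)) for c in centers])

# Tiny linear autoencoder: encode 50-d -> 8-d -> decode back to 50-d,
# trained by gradient descent on the reconstruction error.
d_in, d_hid, lr = 50, 8, 0.01
W_enc = 0.1 * rng.normal(size=(d_in, d_hid))
W_dec = 0.1 * rng.normal(size=(d_hid, d_in))
for _ in range(500):
    H = X @ W_enc              # encoder: latent codes
    X_hat = H @ W_dec          # decoder: reconstruction
    err = X_hat - X
    W_dec -= lr * (H.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

# Cluster the latent codes: the "K-means layer" stage of the pipeline.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X @ W_enc)
print(len(set(labels)))  # 3 distinct cluster labels
```

Clustering in the compressed latent space, rather than on the raw embeddings, is the design point of the AutoEncoder + K-means combination the abstract refers to.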
Description
Keywords
Natural language processing (Computer science), Artificial intelligence, Embedded systems (Computers)
Citation
Creative Commons License
Unless otherwise indicated, the license of this item is described as info:eu-repo/semantics/openAccess