THUCNews

Overview

THUCTC (THU Chinese Text Classification) is a Chinese text classification toolkit released by the Natural Language Processing Laboratory of Tsinghua University. It can automatically and efficiently train, evaluate, and classify with user-defined text classification corpora. Text classification usually involves three steps: feature selection, feature dimensionality reduction, and classification model learning, and choosing appropriate text features and reducing their dimensionality is a challenging problem for Chinese text. Drawing on years of research experience in Chinese text classification, the laboratory chose character bigrams (two-character strings) as the feature unit in THUCTC; feature selection uses the chi-square statistic, term weighting uses tf-idf, and the classification model is LibSVM or LibLinear. THUCTC generalizes well to long texts in the open domain, does not depend on the performance of any Chinese word segmentation tool, and offers high accuracy and fast testing speed.
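
To illustrate the approach, below is a minimal sketch of a comparable pipeline in Python with scikit-learn (character bigrams, chi-square feature selection, tf-idf weighting, linear SVM). It is only an analogy for readers, not THUCTC's Java implementation, and the variable names are placeholders.

  # A minimal sketch of an analogous pipeline; not THUCTC's actual code.
  from sklearn.pipeline import Pipeline
  from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
  from sklearn.feature_selection import SelectKBest, chi2
  from sklearn.svm import LinearSVC

  pipeline = Pipeline([
      # character bigrams as feature units (no word segmentation required)
      ("bigram", CountVectorizer(analyzer="char", ngram_range=(2, 2))),
      # chi-square feature selection, keeping 5000 features (THUCTC's default -f)
      ("chi2", SelectKBest(chi2, k=5000)),
      # tf-idf term weighting
      ("tfidf", TfidfTransformer()),
      # linear SVM classifier, in the spirit of LibLinear
      ("svm", LinearSVC()),
  ])

  # train_texts is a list of raw news documents, train_labels their categories
  # pipeline.fit(train_texts, train_labels)
  # predictions = pipeline.predict(test_texts)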

Data Collection

THUCNews was generated by filtering historical data from the Sina News RSS subscription channels between 2005 and 2011. It contains 740,000 news documents (2.19 GB), all in UTF-8 plain text format. Based on the original Sina news classification system, the documents were re-organized into 14 candidate categories: finance, lottery, real estate, stocks, home furnishing, education, technology, society, fashion, current affairs, sports, horoscope, games, and entertainment. Evaluated with the THUCTC toolkit on this dataset, the accuracy reaches 88.6%.

Instructions

We provide two ways to run the toolkit:

  1. Use a Java IDE such as Eclipse to import the packages in the lib folder (including lib\THUCTC_java_v1.jar) into your own project, and then call the functions by following the Demo.java example program.

  2. Use THUCTC_java_v1_run.jar in the root directory to run the toolkit.

    Use the command: java -jar THUCTC_java_v1_run.jar [parameters] (see the example after the parameter list below)

Operating parameters

  • [-c CATEGORY_LIST_FILE_PATH] Read category information from the file. Each line in the file contains only one category name.
  • [-train TRAIN_PATH] Perform training and set the path to the training corpus folder. The name of each subfolder under this folder corresponds to a category name, and contains training corpus belonging to that category. If not set, no training will be performed.
  • [-test EVAL_PATH] Perform evaluation and set the path to the evaluation corpus folder. The name of each subfolder under this folder corresponds to a category name, and contains evaluation corpus belonging to that category. If not set, no evaluation will be performed. You can also use -eval.
  • [-classify FILE_PATH] Classify a file.
  • [-n topN] Set the number of returned candidate categories, sorted by score. The default is 1, which means that only the most probable category is returned.
  • [-svm libsvm or liblinear] Choose whether to use libsvm or liblinear for training and testing; liblinear is used by default.
  • [-l LOAD_MODEL_PATH] Set the path to read the model.
  • [-s SAVE_MODEL_PATH] Set the path to save the model.
  • [-f FEATURE_SIZE] Set the number of retained features; the default is 5000.
  • [-d1 RATIO] Set the proportion of the training set to the total number of files; the default is 0.8.
  • [-d2 RATIO] Set the proportion of the test set to the total number of files; the default is 0.2.
  • [-e ENCODING] Set the encoding of training and test files; the default is UTF-8.
  • [-filter SUFFIX] Set file suffix filtering. For example, if you set "-filter .txt", only files with a file name suffix of .txt will be considered during training and testing.
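
For example, a hypothetical run might first train a model on a corpus folder and save it, and then classify a single file with the saved model. The file and folder names below (category_list.txt, news_corpus, model.save, sample.txt) are placeholders, not files shipped with the toolkit:

    java -jar THUCTC_java_v1_run.jar -c category_list.txt -train news_corpus -svm liblinear -f 5000 -s model.save

    java -jar THUCTC_java_v1_run.jar -c category_list.txt -l model.save -classify sample.txt -n 3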

Citation

@inproceedings{chen2015joint,
  title={Joint learning of character and word embeddings},
  author={Chen, Xinxiong and Xu, Lei and Liu, Zhiyuan and Sun, Maosong and Luan, Huanbo},
  booktitle={Twenty-Fourth International Joint Conference on Artificial Intelligence},
  year={2015}
}
@inproceedings{li2006comparison,
  author = {Li, Jingyang and Sun, Maosong and Zhang, Xian},
  title = {A Comparison and Semi-Quantitative Analysis of Words and Character-Bigrams as Features in Chinese Text Categorization},
  year = {2006},
  month = {01},
  volume = {1},
  doi = {10.3115/1220175.1220244}
}

License

Custom

Dataset Information

  • Application scenario: NLP
  • Annotation type: Classification (Text)
  • License: Custom
  • Last updated: 2021-03-24 23:07:59

Data Summary

  • Data format: Text
  • Number of data items: 836.08k
  • File size: 2GB
  • Number of annotations: 836,075
  • Copyright holder: Tsinghua University
  • Annotator: Unknown