== Abstract ==
* '''Original title''': Is OpenAlex Suitable for Research Quality Evaluation and Which Citation Indicator is Best?
* '''Published''': 2025-02-25 18:21:30+00:00
* '''Authors''': Mike Thelwall, Xiaorui Jiang
* '''Category''': cs.DL
* '''Original link''': http://arxiv.org/abs/2502.18427v1

'''Abstract''': This article compares (1) citation analysis with [[OpenAlex]] and [[Scopus]], testing their citation counts, document types/coverage and subject classifications, and (2) three citation-based indicators: raw counts, (field- and year-)Normalised Citation Scores ([[NCS]]) and Normalised Log-transformed Citation Scores ([[NLCS]]). Methods (1&2): the indicators, calculated from 28.6 million articles, were compared through 8,704 correlations on two gold standards for 97,816 UK [[Research Excellence Framework]] ([[REF]]) 2021 articles. The primary gold standard is [[ChatGPT]] scores; the secondary is the average REF2021 expert review score for the department submitting the article. Results: (1) OpenAlex provides better citation counts than Scopus, and its inclusive document classification/scope does not seem to cause substantial field normalisation problems; the broadest OpenAlex classification scheme provides the best indicators. (2) Counterintuitively, raw citation counts are at least as good as nearly all field-normalised indicators, and better for single years, and NCS is better than NLCS. (1&2) There are substantial field differences. Thus, (1) OpenAlex is suitable for citation analysis in most fields, and (2) the major citation-based indicators work counterintuitively when compared against quality judgements. Field normalisation seems ineffective because more highly cited fields tend to produce higher-quality work, which affects interdisciplinary research and within-field topic differences.
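The two normalised indicators named above follow standard bibliometric definitions: NCS divides an article's citation count by the mean citation count of all articles in the same field and publication year, while NLCS applies the same field-and-year normalisation after a log transformation (typically ln(1 + c)). The sketch below is a minimal illustration under those assumed definitions only; the function name and toy records are hypothetical, and the paper's exact variants may differ.

<syntaxhighlight lang="python">
from math import log
from collections import defaultdict

def citation_indicators(articles):
    """Compute raw counts, NCS and NLCS for a list of articles.

    Each article is a dict with 'citations' (int), 'field' (str), 'year' (int).
    Assumed definitions (standard practice; not verified against the paper):
      NCS_i  = c_i / mean(c)            over the article's field-year group
      NLCS_i = ln(1 + c_i) / mean(ln(1 + c)) over the same group
    """
    # Group articles by (field, year): normalisation baselines are per group.
    groups = defaultdict(list)
    for a in articles:
        groups[(a['field'], a['year'])].append(a)

    results = []
    for (field, year), group in groups.items():
        mean_c = sum(a['citations'] for a in group) / len(group)
        mean_log = sum(log(1 + a['citations']) for a in group) / len(group)
        for a in group:
            results.append({
                'field': field, 'year': year,
                'raw': a['citations'],
                # Guard against groups where nothing has been cited yet.
                'ncs': a['citations'] / mean_c if mean_c else 0.0,
                'nlcs': log(1 + a['citations']) / mean_log if mean_log else 0.0,
            })
    return results

if __name__ == '__main__':
    # Hypothetical toy records, not data from the paper.
    sample = [
        {'citations': 10, 'field': 'cs.DL', 'year': 2021},
        {'citations': 2,  'field': 'cs.DL', 'year': 2021},
        {'citations': 0,  'field': 'cs.DL', 'year': 2021},
    ]
    for r in citation_indicators(sample):
        print(r)
</syntaxhighlight>

Grouping by (field, year) is what is supposed to make scores comparable across fields; the paper's counterintuitive finding is that this normalisation step adds little over the raw counts when predicting quality judgements.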