Promises and perils of using Transformer-based models for SE research.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Many Transformer-based pre-trained models for code have been developed and applied to code-related tasks. In this paper, we analyze 519 papers published on this topic during 2017-2023, examine the suitability of model architectures for different tasks, summarize their resource consumption, and look at the generalization ability of models on different datasets. We examine three representative pre-trained models for code: CodeBERT, CodeGPT, and CodeT5, and conduct experiments on the four topmost targeted software engineering tasks from the literature: Bug Fixing, Bug Detection, Code Summarization, and Code Search. We make four important empirical contributions to the field. First, we demonstrate that encoder-only models (CodeBERT) can outperform encoder-decoder models for general-purpose coding tasks, and showcase the capability of decoder-only models (CodeGPT) for certain generation tasks. Second, we study the most frequently used model-task combinations in the literature and find that less popular models can provide higher performance. Third, we find that CodeBERT is efficient in understanding tasks while CodeT5's efficiency is unreliable on generation tasks due to its high resource consumption. Fourth, we report on poor model generalization for the most popular benchmarks and datasets on Bug Fixing and Code Summarization tasks. We frame our contributions in terms of promises and perils, and document the numerous practical issues in advancing future research on transformer-based models for code-related tasks.
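
To make the kind of pipeline the paper evaluates concrete, here is a minimal sketch of using an encoder-only model such as CodeBERT to embed a code snippet for an understanding task like Code Search. It assumes the publicly released HuggingFace checkpoint "microsoft/codebert-base" and the transformers/torch libraries, which are not specified in the abstract; it illustrates the general setup, not the authors' exact experimental configuration.

import torch
from transformers import AutoTokenizer, AutoModel

# Load the publicly released CodeBERT checkpoint (encoder-only, RoBERTa-based).
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one vector per snippet; query and code
# vectors can then be ranked by cosine similarity for Code Search.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])

Encoder-decoder (CodeT5) and decoder-only (CodeGPT) models examined in the paper would instead be loaded as generation models and fine-tuned for tasks such as Bug Fixing or Code Summarization.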

Source: http://dx.doi.org/10.1016/j.neunet.2024.107067

Publication Analysis

Top Keywords

models: 9
tasks: 9
promises perils: 8
transformer-based models: 8
pre-trained models: 8
models code: 8
code-related tasks: 8
resource consumption: 8
bug fixing: 8
code summarization: 8