
When GitHub launched the code autocomplete tool Copilot in June 2021, many developers were in awe, saying it reads their minds and helps them write code faster. Copilot looks at the variable names and comments someone writes and suggests what should come next. It provides lines of code or even entire functions the developer might not know how to write.

However, accepting unfamiliar suggestions without verifying them can introduce security weaknesses. Researchers at New York University’s Tandon School of Engineering put Copilot to the test and found that roughly 40% of the code it generated in security-relevant scenarios contained vulnerabilities.

“Copilot’s response to our scenarios is mixed from a security standpoint, given the large number of generated vulnerabilities,” the researchers wrote in a paper. They checked the generated code with GitHub’s CodeQL, which automatically scans for known weaknesses, and found that it often contained SQL-injection vulnerabilities or flaws from the 2021 CWE Top 25 Most Dangerous Software Weaknesses list. The researchers also noted that for domain-specific languages such as Verilog, Copilot struggles to generate code that’s “syntactically correct and meaningful.”
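To illustrate the kind of SQL-injection flaw (CWE-89) the researchers flagged, here is a minimal sketch, not taken from the paper, of the pattern an autocomplete tool might suggest versus the parameterized form a developer should verify and use instead. The table, function names, and payload are illustrative assumptions:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: string interpolation lets input like
    # "x' OR '1'='1" rewrite the query (CWE-89, SQL injection)
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query treats the input purely as data
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database (hypothetical schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # returns every row: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

Both functions look similar at a glance, which is exactly why unverified suggestions are risky: the unsafe version works fine on benign input and only fails under attack.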

