[AccessD] Opinions?

Stuart McLachlan stuart at lexacorp.com.pg
Thu Apr 14 15:49:30 CDT 2022


Review from August last year:
https://www.theregister.com/2021/08/25/github_copilot_study/

Academics have put GitHub's Copilot to the test on the security front, and said they 
found that roughly 40 per cent of the time, code generated by the programming 
assistant is, at best, buggy, and at worst, potentially vulnerable to attack.
Copilot arrived with several caveats, such as its tendency to generate incorrect 
code, its proclivity for exposing secrets, and its problems judging software licenses. 
But the AI programming helper, based on OpenAI's Codex neural network, also 
has another shortcoming: just like humans, it may produce flimsy code.
That's perhaps unsurprising given that Copilot was trained on source code from 
GitHub and ingested all the bugs therein. Nonetheless, five boffins affiliated with 
New York University's Tandon School of Engineering felt it necessary to quantify 
the extent to which Copilot fulfills the dictum "garbage in, garbage out."
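For anyone curious what "potentially vulnerable" looks like in practice, the study's test scenarios included classic weakness classes such as SQL injection. A minimal hand-written sketch (not Copilot output, just an illustration of the pattern) in Python with sqlite3:

```python
import sqlite3

# Hypothetical illustration of the weakness class (SQL injection, CWE-89):
# splicing user input into a query string lets crafted input rewrite the SQL.
def find_user_unsafe(conn, username):
    # Vulnerable: username is concatenated directly into the statement.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats username strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# A classic injection payload dumps every row from the unsafe version,
# while the parameterized version correctly matches nothing.
payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # → [(1,), (2,)]
print(find_user_safe(conn, payload))    # → []
```

The point of the study was that an assistant trained on public repositories reproduces the first pattern about as readily as the second.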


On 14 Apr 2022 at 12:22, Rocky Smolin wrote:

> https://www.wired.com/story/openai-copilot-autocomplete-for-code/
> 
> -- AccessD mailing list AccessD at databaseadvisors.com
> https://databaseadvisors.com/mailman/listinfo/accessd Website:
> http://www.databaseadvisors.com
> 



