Both GitHub and OpenAI say that, on the contrary, their AI coding tools are only likely to become less error-prone. OpenAI says it vets projects and code both manually and with automated tools.

De Moor at GitHub says recent updates to Copilot should have reduced the frequency of security vulnerabilities. But he adds that his team is exploring other ways to improve Copilot's output. One is to remove bad examples that the underlying AI model learns from. Another may be to use reinforcement learning, an AI technique that has produced some impressive results in games and other areas, to automatically spot bad output, including previously unseen examples. “Enormous improvements are happening,” he says. “It’s almost unimaginable what it will look like in a year.”