A recent study conducted by cybersecurity startup Snyk Ltd. has shed light on the growing reliance on artificial intelligence coding assistants within the software development community, raising concerns about security and accuracy.
The report, which surveyed 537 software engineering and security practitioners, indicates that a staggering 96% of teams now use AI coding tools, with more than half relying on them most or all of the time. This widespread adoption, concentrated in the past year, has markedly accelerated both the pace at which code is written and the speed at which it is deployed.
However, the report points to a significant issue: misplaced trust in the security of AI-suggested code. Despite frequent concerns about its accuracy, many developers appear to have adopted a “herd mentality,” assuming that AI-generated code is inherently safe. The assumption persists even though 92% of respondents acknowledge that AI coding tools often generate insecure code; interestingly, 76% still believe AI-generated code is more secure than human-written code.
The study found that the rapid adoption of AI coding tools has not been matched by equivalent advances in security practice. Fewer than 10% of surveyed teams have implemented automated security checks. The gap is particularly concerning for open-source components, which appear frequently in AI-generated code: only a quarter of respondents reported using automated tools to check the security of those components.
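To make that gap concrete, the sketch below shows what an automated open-source security check in a CI pipeline could look like. It is a minimal, illustrative example rather than anything prescribed by the study: it assumes the Snyk CLI is installed and authenticated, and that its `--json` output carries a top-level `vulnerabilities` list with `severity`, `packageName`, and `title` fields, which should be confirmed against the CLI version in use.

```python
#!/usr/bin/env python3
"""CI gate: fail the build when open-source dependencies carry
high- or critical-severity vulnerabilities. A minimal sketch; the
JSON field names are assumptions about the Snyk CLI's output and
should be verified before use."""

import json
import subprocess
import sys


def audit_dependencies() -> int:
    # `snyk test --json` scans the project's open-source dependencies
    # and prints machine-readable results; it exits non-zero when it
    # finds issues, so capture the output instead of using check=True.
    result = subprocess.run(
        ["snyk", "test", "--json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)

    # Assumed output shape: a "vulnerabilities" list whose entries
    # include "severity", "packageName", and "title".
    issues = report.get("vulnerabilities", [])
    blocking = [v for v in issues if v.get("severity") in ("high", "critical")]
    for vuln in blocking:
        print(f"{vuln.get('packageName')}: {vuln.get('title')} ({vuln.get('severity')})")

    # A non-zero return code fails the CI step and blocks the merge.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(audit_dependencies())
```

Wired into a pipeline as a required step, a script along these lines turns the missing check the report describes into a routine gate rather than a manual afterthought.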
And although 86% of respondents expressed concern about the security implications of AI code-completion tools, a cognitive dissonance persists between those concerns and actual usage: many developers appear to equate the tools' widespread adoption with trustworthiness.
More than half of those surveyed view AI coding tools as an integral part of their software supply chain, yet that recognition has not meaningfully changed application security processes. The report notes the absence of a comprehensive strategy for integrating AI tools securely into the development pipeline.
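One way to start building such a strategy, sketched below, is to treat AI-assisted contributions like any other untrusted supply-chain input and gate merges on a static-analysis pass over the changed files. The example is illustrative and not drawn from the report: it assumes `git` and the open-source Bandit scanner are on the PATH, and the `origin/main...HEAD` diff range is a placeholder for a team's actual branch convention.

```python
#!/usr/bin/env python3
"""Pre-merge gate sketch: run a static analyzer over the Python
files changed on a branch, whether the code was written by a human
or suggested by an AI assistant."""

import subprocess
import sys


def changed_python_files() -> list[str]:
    # List files modified relative to the main branch; the diff range
    # here is an assumption about the repository's branch layout.
    out = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    return [path for path in out.splitlines() if path.endswith(".py")]


def main() -> int:
    files = changed_python_files()
    if not files:
        return 0
    # Bandit exits with a non-zero status when it finds issues,
    # which fails the pipeline step and blocks the merge.
    return subprocess.run(["bandit", "-q", *files]).returncode


if __name__ == "__main__":
    sys.exit(main())
```

Scanning only the changed files keeps the gate fast enough to run on every merge request, which is what lets it become part of the pipeline rather than an occasional audit.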
The report concludes with a stark observation: developers' perceptions of the security of AI coding suggestions are plainly contradictory. Most respondents, security professionals included, believe AI code suggestions are secure, yet they also admit that insecure suggestions are common. That contradiction underscores a critical tension within the field and highlights the need for more rigorous security measures, and greater awareness, in the use of AI coding assistants.