
Insecure coding practices are widespread across businesses, and they could make systems significantly harder to secure in the future.

Majority of businesses unknowingly deploy insecure software, researchers warn

Many businesses are unknowingly shipping insecure software code, creating problems that may be difficult to remediate later.


In a recent study of 1,500 CISOs, AppSec managers, and developers, Checkmarx highlights the growing use of AI in coding and the need for organizations to establish policies governing AI tool usage in order to secure AI-generated code.

The study found that roughly one in two respondents already use AI-assisted security code tooling, and 98% of companies experienced a breach caused by vulnerable code in the past year; many of these vulnerabilities could have been avoided with secure coding practices.

Checkmarx suggests utilizing agentic AI to analyze and fix issues across projects. Eran Kinsbruner, Checkmarx VP of Portfolio Marketing, predicts that AI-generated code will continue to proliferate, making secure software a competitive differentiator in the coming years.

To mitigate the new attack surfaces introduced by AI coding assistants, organizations should implement comprehensive security controls. These policies aim to address code privacy, vulnerability management, and compliance without hindering developer productivity.

Key policy components include:

  1. Zero-tolerance for secrets exposure: Ban the inclusion of secrets like API keys or passwords in AI prompts to prevent accidental leaks.
  2. Mandatory human review: Require manual code review on every AI-generated pull request to catch potential vulnerabilities or intellectual property risks.
  3. Security scanning integration: Embed automated static analysis and Software Composition Analysis (SCA) tools within CI/CD pipelines to detect flaws, vulnerabilities, and risky third-party dependencies early.
  4. Vendor agreements and data handling: Use enterprise contracts that explicitly forbid AI vendors from training models on proprietary code and ensure data privacy compliance.
  5. Controlled access and usage: Limit AI tool access to non-sensitive or isolated codebases initially, and use internal proxies or middle layers to sanitize or monitor input prompts.
  6. Continuous audit trails: Maintain records of AI usage and generated code for compliance and forensic analysis.
  7. AI-driven remediation: Incorporate AI-powered fixes and developer training to address security issues promptly and improve secure coding practices.
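Policies 1 and 5 above (banning secrets in prompts and routing requests through an internal proxy) could be sketched as a simple pre-flight check. The patterns and function names below are illustrative assumptions, not part of the Checkmarx report; a real deployment would rely on a dedicated secret scanner with patterns maintained by the security team.

```python
import re

# Hypothetical example patterns; real scanners maintain far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
]

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain a credential."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)

def sanitize_or_block(prompt: str) -> str:
    """Block a prompt before it reaches an external AI tool if it may leak a secret."""
    if contains_secret(prompt):
        raise ValueError("Prompt blocked: possible secret detected")
    return prompt

# A harmless prompt passes through; one embedding a key would raise.
safe = sanitize_or_block("Refactor this sorting function for readability")
```

A check like this fits naturally inside the "internal proxies or middle layers" the policy describes, and the same hook can write each decision to an audit log to support policy 6.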

Before widespread AI tool adoption, organizations should perform a comprehensive risk assessment covering data leakage, model transparency, telemetry concerns, and cost implications. These policies aim to balance mitigating new attack surfaces with preserving software development velocity and innovation advantages.

The trend toward AI-assisted coding, including vibe coding and editing AI-generated code, continues to expand. The report also suggests that generative AI has eroded developer ownership, making code less likely to be attributed to individual authors.

While Checkmarx does not prescribe specific internal guidance on using AI for coding, the company urges organizations to establish policies for AI tool usage to secure AI-generated code in software development. Fewer than half of respondents use foundational security tools such as DAST and IaC scanning, pointing to a need for more comprehensive security measures across the industry.

