Over the past year, generative artificial intelligence (AI) has rapidly emerged as a game-changing technology, similar to the disruptive force of cloud computing in the 2000s. As often happens during the initial phases of disruptive technologies, we marvel at the wide-ranging impact of its sudden popularity. Generative AI aligns with that narrative, igniting debates and fostering discussions both within the tech community and beyond.
In the landscape of software supply chains, every software component and code snippet plays a critical role in the delivery of secure and reliable software. The integration of generative AI holds immense potential to influence your unique software supply chain and software development life cycle (SDLC).
Within this context, we wanted to better understand how generative AI is affecting two distinct but interrelated groups:
- Software engineers and DevOps professionals who are utilizing generative AI to expedite coding, enhance security, and fortify supply chains.
- Security (SecOps) professionals who are leveraging generative AI for precise code analysis and swift issue detection in an era of heightened open source software (OSS) security concerns.
To explore the profound influence of generative AI in software development, Sonatype conducted a comprehensive survey involving 400 DevOps and 400 SecOps leaders. The survey delved into various facets of generative AI, and its results offer intriguing insights and reveal both promises and pitfalls.
What we learned: Key findings for navigating generative AI
The survey results tell a compelling narrative:
- An overwhelming majority (97%) of respondents are already actively leveraging generative AI in their workflows.
- However, this adoption is not without nuance: nearly three-quarters (74%) of these leaders reported feeling pressured to embrace generative AI despite reservations about its security.
- Notably, security risks emerged as the primary concern regarding generative AI among 52% of these professionals.
- In a close second, 48% of respondents expressed anxieties about job losses related to generative AI adoption.
Surprising sentiments: SecOps see more benefits with generative AI than DevOps
Our survey revealed distinct differences in how DevOps and SecOps leaders perceive generative AI. While both groups recognized its transformative potential, they varied in their degrees of optimism.
DevOps leaders maintain a more critical stance, while SecOps leaders exhibit greater enthusiasm. This divergence is possibly driven by the scalability benefits generative AI offers to security experts — a surprising result, considering that security issues related to generative AI were also among respondents' top concerns.
Harnessing generative AI: Diverse use cases
Generative AI's adoption has been extensive, demonstrating its widespread utility among the survey respondents. Engineers and security teams leverage generative AI for a range of tasks, including testing, analysis, and vulnerability identification.
Notably, SecOps leaders have emerged as early adopters, actively integrating this technology into their software development processes, highlighting their keen interest in leveraging its numerous advantages.
Balancing act: Impact and challenges
Generative AI is delivering tangible benefits, such as faster software development and increased security. However, it does present challenges, such as data sprawl and concerns about code governance.
According to our survey findings, respondents are experiencing significant time savings through the adoption of generative AI, with SecOps leaders benefiting the most from this transformative technology.
Highlighting security concerns
Security issues related to generative AI usage remain a top concern among software engineers. While DevOps leaders expressed more worry about security vulnerabilities and complexity, SecOps leaders were less concerned, possibly due to the efficiency gains generative tools offer to security practitioners.
Responsible integration: Charting a path with generative AI
Organizations are actively addressing concerns about generative AI through internal policies and are awaiting potential regulation. The copyright ownership of AI-generated content is a contentious issue, with both DevOps and SecOps groups advocating for creator ownership and compensation for developers whose code is used in large language models (LLMs).
These survey insights not only deepen our comprehension of the present landscape but also establish the groundwork for responsible AI integration that optimizes both software innovation and security.
Exploring the impact of generative AI
Although still in its early stages, generative AI has already sparked a remarkable transformation in software development and security, propelling operational changes across many industries.
Generative AI's overwhelming adoption despite security concerns underscores the growing demand for speed, efficiency, and automation in software development and security. The pressure to accelerate processes while maintaining security remains paramount, urging organizations to prioritize automation as a pivotal tool for navigating a landscape of growing software supply chain attacks.
Despite lingering concerns about security risks and job displacement, many industry leaders believe that this technology will complement the creativity and innovation of software engineers and security professionals. Generative AI represents a remarkable milestone, poised to enhance the capabilities available to software engineers and security practitioners in unprecedented ways.
We have only begun to scratch the surface of generative AI’s immense potential, and its impact on software development continues to unfold. For a more comprehensive understanding of the current landscape, check out the full report.