AI code security, reliability, and data privacy are the biggest challenges facing software development this year, a new survey of software development organizations has found.
In the sixth annual study by Infragistics/Reveal, more than half (51%) of the 250 respondents, all technology leaders, identified security threats as one of their biggest challenges of 2025. That's a significant increase over 2024, when 34.5% of the survey respondents cited security threats as one of their biggest challenges.
A key contributor to their concern about security in the coming year is artificial intelligence. Infragistics' senior vice president for developer tools, Jason Beres, said that recent advancements in AI, particularly the proliferation of large language models (LLMs), have resulted in escalating cybersecurity risks. The threats come from both generative AI tools used by the business and code in the software ecosystem generated by AI coding assistants.
Here are key takeaways from the new survey — and expert insights on application security in the AI age.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
Software trust is in doubt
Beres said new risks from LLMs are increasing as AI takes hold across the enterprise.
“These models, which can mimic human-like communication, have the potential to inadvertently expose sensitive data or be exploited to generate harmful code. As LLMs become more powerful and widespread, the danger of sensitive corporate or personal information being unintentionally or maliciously leaked to publicly accessible models increases. There is also a growing concern that these models could introduce unsafe code or compromised data into corporate systems.”
—Jason Beres
Another factor contributing to today's cybersecurity risks is the shortage of skilled professionals, Beres said. “As cybersecurity threats increase at a rapid pace, the workforce is struggling to keep up. The ever-evolving nature of these threats demands that cybersecurity experts constantly refresh their expertise, making it difficult to stay ahead,” he said.
“This growing demand for talent, coupled with the high-pressure environment of the job, leads to burnout, causing many professionals to exit the field. In fact, our Reveal survey found that cybersecurity engineer was the third-hardest technical job to fill.”
—Jason Beres
Akhil Mittal, a senior manager at Black Duck Software, said security challenges caused by the rise of AI require more than just stopping attacks. “Companies can’t even be sure their software is safe,” he said. Black Duck's "Open Source Security and Risk Analysis" report for 2025 revealed that most codebases include libraries that are four or more years old, which means a large number of software flaws go unpatched.
“Last year, security teams focused on patching vulnerabilities and keeping up with AI-driven attacks. Now, hackers have shifted tactics — they’re planting backdoors directly inside trusted software. The XZ Utils backdoor attack proved that even widely trusted open-source software can be compromised. If that can happen, how do developers know what’s actually secure?”
—Akhil Mittal
While some companies are using AI-powered security to fight back, "right now, hackers are outpacing defenders,” Mittal said. “If this keeps up, companies won’t have a choice — they’ll have to rethink how they vet open source or even start limiting its use.”
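One practical starting point for that vetting is measuring how old the components in use actually are. The sketch below is an illustration only, not part of the survey or any vendor's tooling: it looks up each pinned dependency's release date on PyPI's public JSON API and flags anything released four or more years ago, echoing the threshold the OSSRA report highlights. The pin list and cutoff are assumptions for the example.

```python
"""Flag pinned Python dependencies whose release is four or more years old.

Minimal sketch using PyPI's public JSON API; the pin list and cutoff are
illustrative assumptions, not output from the survey or the OSSRA report.
"""
import json
import urllib.request
from datetime import datetime, timedelta, timezone

CUTOFF = timedelta(days=4 * 365)               # "four or more years old"
PINS = {"requests": "2.20.0", "flask": "1.0"}  # hypothetical lockfile entries

def release_date(name: str, version: str) -> datetime | None:
    """Return the upload date of the oldest file in a given release."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        files = json.load(resp)["urls"]
    times = [datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
             for f in files]
    return min(times) if times else None

now = datetime.now(timezone.utc)
for name, version in PINS.items():
    released = release_date(name, version)
    if released is None:
        print(f"{name}=={version}: no file metadata found")
    elif now - released > CUTOFF:
        print(f"{name}=={version}: STALE, released {released.date()}")
    else:
        print(f"{name}=={version}: ok, released {released.date()}")
```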
Darren Guccione, chief executive of Keeper Security, said government actions on privacy are another factor causing security to percolate to the top of the challenges list. “Regulatory scrutiny is reaching unprecedented levels, with global privacy laws tightening compliance requirements and imposing steeper financial and legal penalties for security lapses,” he said.
“Organizations can no longer afford a reactive approach; security must be deeply embedded into every phase of the SDLC to mitigate risks at scale. This requires a zero-trust mindset, where privileged access management, least-privilege access, strong authentication, and continuous real-time threat detection form the foundation of a resilient security strategy.”
—Darren Guccione
Eric Schwake, director of cybersecurity strategy at Salt Security, said the growing complexity of software development — and the increasing dependence on APIs and microservices — is also elevating security concerns.
“While the swift adoption of cloud-native technologies and DevOps practices enhances agility, [those practices] also create new vulnerabilities and necessitate continuous adjustments to security protocols. To tackle these issues, organizations must focus on establishing a substantial governance strategy for API security posture, ensuring APIs are designed, implemented, and managed with security as a primary consideration."
—Eric Schwake
Schwake said the rise of AI and machine learning in software development, while promising agility, also introduces security challenges regarding the reliability and safety of AI-generated code, as indicated in the Infragistics/Reveal survey.
The reliability of AI-generated code
Some 45% of survey participants said AI code reliability would be among the biggest challenges in software development for 2025, Beres noted.
“One of the issues impacting AI code reliability is the complexity of AI models, especially deep learning systems, which often operate as black boxes. This makes it difficult for developers to predict or explain specific behaviors, complicating debugging and reliability.”
—Jason Beres
AI models also heavily depend on the quality of the data they are trained on. If the data is biased or incomplete, it can lead to unreliable outputs, Beres said. “AI systems must adapt to dynamic environments, often facing unforeseen situations that weren’t included in their training, further complicating decision making and reliability.”
Complexity is the enemy of secure software. And it highlights a problem facing organizations trying to minimize risk today: Legacy application security testing (AST) tools are not up to the challenge of AI code and the rise of software supply chain attacks.
“Traditional testing methods often fall short in assessing AI's reliability, especially when dealing with uncertainties and evolving environments. These concerns create a complex landscape that makes ensuring AI code reliability a difficult challenge.”
—Jason Beres
Iftach Ian Amit, founder and CEO of Gomboc.AI, said that as much as generative AI is a force multiplier that helps developers accomplish tasks faster, it also brings inaccuracy, data and intellectual property issues, and reliability concerns into play.
“AI lacks the context of most coding efforts — especially when dealing with an existing codebase. AI can be used as a shortcut for creating code from scratch that doesn’t need to interact or depend on other code, but when it comes to more complex, real-world challenges, developers find themselves still heavily needed to produce accurate and functional results.”
—Iftach Ian Amit
Amit said that when using AI in a development environment, there are no assurances that your code and data aren’t being used to train the LLM, which means they may surface in results for other organizations using the same AI.
A development shift means testing must shift with it
ML models often rely on vast amounts of data from a range of sources, making them potentially prone to errors and biases outside of a system implementor's expectations. That makes ensuring the accuracy and consistency of AI-generated code difficult, especially when the models are trained on diverse and sometimes unverified data sources.
“The problem with AI-generated code is that it looks right — even when it’s completely wrong. And because AI presents its suggestions so confidently, developers trust it more than they should.”
—Akhil Mittal
Andrew Bolster, senior R&D manager at Black Duck Software, said the continued use of jQuery — a big security risk — highlights the problem: even though modern JavaScript frameworks have largely replaced jQuery, AI coding assistants keep suggesting it. "AI coding assistants continue to recommend it because these models learned from billions of lines of public code — including a lot of outdated, insecure practices,” he said. "Instead of selecting the best option, AI tends to favor what appeared most often in older codebases, even if that code is no longer safe."
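To make that concrete, here is a minimal sketch of the kind of check a review pipeline could run over AI-suggested front-end code. It is my own illustration, not Black Duck tooling: a regex looks for script tags that pull in jQuery and flags versions older than 3.5.0, the release that addressed known cross-site scripting issues in earlier versions. The threshold and pattern are simplifications, not a substitute for software composition analysis.

```python
"""Flag <script> references to jQuery versions older than 3.5.0.

Illustrative sketch for reviewing AI-suggested front-end code; the version
threshold and regex are simplifications, not a full SCA replacement.
"""
import re

JQUERY_SRC = re.compile(r"jquery[-.](\d+)\.(\d+)\.(\d+)(?:\.min)?\.js", re.IGNORECASE)
MIN_SAFE = (3, 5, 0)  # assumption: treat anything older as needing review

def flag_old_jquery(snippet: str) -> list[str]:
    """Return a finding for every outdated jQuery reference in the snippet."""
    findings = []
    for match in JQUERY_SRC.finditer(snippet):
        version = tuple(int(part) for part in match.groups())
        if version < MIN_SAFE:
            findings.append(
                f"jQuery {'.'.join(map(str, version))} referenced; review before use"
            )
    return findings

# Example: a snippet an assistant might suggest
suggested = '<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>'
print(flag_old_jquery(suggested))
```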
Bolster said software development is shifting, and that means application security (AppSec) teams need to shift their approach as well.
“Developers have always been trained to write code, but now, AI is doing more of the writing. The real skill isn’t just coding anymore — it’s code review. Developers need to know how to spot AI mistakes, catch security risks, and optimize performance. But most developers haven’t been trained for that shift, and AI coding assistants don’t come with built-in accountability.”
—Andrew Bolster
If companies don’t rethink how they test AI-generated code, they will end up shipping unreliable software at scale, Bolster said. “They need to start training developers on AI oversight, improving validation processes, and making sure AI-generated code gets tested just as rigorously as human-written code,” he said.
Big data privacy challenges present risk
Data privacy is another worrisome issue for tech leaders, the survey found. More than 41% of those surveyed said it will be a big challenge in 2025, Beres said. “Stricter regulations, such as the GDPR in Europe and CCPA in California, require developers to design software that complies with increasingly complex data handling and transparency standards. This has made it harder for developers to balance functionality with compliance as laws demand greater scrutiny over how data is collected, stored, and used,” he said.
Beres added that the rise in sophisticated cyberattacks, including ransomware and data exfiltration, has escalated the risk of data breaches, forcing developers to implement stronger security measures to protect sensitive information. The expanding volume and complexity of data, driven by big data, IoT devices, and AI, layers on additional risk. “Managing and safeguarding such a vast and decentralized data ecosystem requires new strategies like advanced encryption, anonymization, and access control,” Beres said.
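As one small illustration of the anonymization piece, the sketch below pseudonymizes user identifiers with a keyed hash so records can still be joined for analytics without storing raw values. The key handling is a placeholder, and keyed hashing is pseudonymization rather than true anonymization, so it reduces exposure but does not remove regulatory obligations.

```python
"""Pseudonymize user identifiers with a keyed hash (HMAC-SHA256).

Minimal illustration: the secret key would live in a secrets manager in
practice, and keyed hashing is pseudonymization, not irreversible anonymization.
"""
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"  # placeholder

def pseudonymize(identifier: str) -> str:
    """Return a stable, keyed token in place of the raw identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase_total": 42.50}
safe_record = {
    "user_token": pseudonymize(record["email"]),  # joinable, but not the raw email
    "purchase_total": record["purchase_total"],
}
print(safe_record)
```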
And data privacy is a core issue with users, making tackling the data privacy issue key for the enterprise. “As users become more aware of their privacy rights, they demand more control over their personal data, requiring developers to offer clear options for privacy management,” Beres said. “Developers must continuously navigate these evolving challenges to maintain privacy and trust while meeting both regulatory and user expectations.”
Melody "MJ" Kaufmann, an author and instructor with O'Reilly Media, said that today many organizations don’t have as much control over their data as they used to.
“The growing reliance on cloud services and third-party APIs has made data privacy harder to control. Sensitive information is often processed and stored outside an organization’s direct oversight, expanding the risk of data exposure.”
—Melody "MJ" Kaufmann
Modern AI capabilities threaten today's privacy controls
Feng Li, a computer information technology professor at Purdue University in Indianapolis, said one of the biggest privacy challenges for developers is the capability of AI models — especially large-scale deep learning models — to reverse anonymization.
“Techniques like model inversion attacks and membership inference attacks allow AI to reconstruct sensitive details from supposedly anonymized data. This makes true anonymization an almost impossible task for developers, who now have to rethink data protection at a fundamental level.”
—Feng Li
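For readers unfamiliar with the attack class Li describes, here is a toy membership inference sketch built on scikit-learn; it is an illustration of the general idea, not Li's research code. The attacker exploits the gap in a model's confidence between records it was trained on and records it has never seen, and uses a simple threshold to guess membership.

```python
"""Toy membership inference: use a model's confidence gap to guess whether
a record was in its training set. Illustrative only; real attacks use
shadow models and calibrated thresholds."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An intentionally overfit target model leaks more membership signal.
target = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target.fit(X_train, y_train)

def confidence(model, X, y):
    """Probability the model assigns to the true label of each record."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

in_conf = confidence(target, X_train, y_train)   # records the model saw
out_conf = confidence(target, X_out, y_out)      # records it never saw

# Attacker's rule of thumb (assumption): high confidence => "was a member".
threshold = 0.9
flagged_members = (in_conf > threshold).mean()       # true positives
flagged_nonmembers = (out_conf > threshold).mean()   # false positives
print(f"members flagged: {flagged_members:.2f}, non-members flagged: {flagged_nonmembers:.2f}")
```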
At the same time as AI is advancing, privacy laws have become much stricter, Li noted. "The EU’s AI Act and new U.S. state regulations no longer accept simple compliance checklists. They now demand rigorous, end-to-end cryptographic proof that personal data remains private,” he said.
“Developers I’ve spoken with, especially those working with real-time health care applications, struggle to integrate methods like homomorphic encryption and zero-knowledge proofs without severely impacting system performance. These privacy-preserving techniques are powerful, but they come with significant computational costs and scalability challenges.”
—Feng Li
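To show both the appeal and the cost Li points to, the sketch below implements a toy Paillier-style additively homomorphic scheme: two values are summed while still encrypted, but every encryption and decryption is a modular exponentiation over n², which hints at why these techniques strain real-time systems. The tiny primes are for readability only and provide no actual security.

```python
"""Toy Paillier additively homomorphic encryption.

Illustration of why privacy-preserving computation is costly: each operation
involves modular exponentiation over n^2. The small primes here are for
readability and provide no real security.
"""
import math
import secrets

# Key generation (tiny primes for the example; real keys use ~2048-bit n)
p, q = 104729, 104723
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (((pow(c, lam, n_sq) - 1) // n) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
a, b = 1200, 34
total = decrypt((encrypt(a) * encrypt(b)) % n_sq)
print(total)  # 1234, computed without ever decrypting a or b individually
```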
One answer to AI’s threat to privacy may lie in localized solutions, said Timothy Bates, a clinical professor of cybersecurity at the University of Michigan-Flint’s College of Innovation and Technology. “AI doesn’t have to run in the cloud. It can be a personal thing," he said.
“OpenAI is trying to boil the ocean with 300, 400 billion parameters, but the average individual only needs 30 to 70 billion parameters, if that, to do normal daily administration-type stuff. Cheaper AI running on smaller hardware can address the privacy issue.”
—Timothy Bates
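A back-of-the-envelope estimate shows why Bates' parameter counts matter for running models locally: the weights-only memory footprint is roughly parameter count times bytes per parameter, so a 4-bit-quantized 30B-to-70B model can fit on workstation-class hardware while a 400B model cannot. The figures below ignore activations and cache overhead, so treat them as a floor.

```python
"""Rough memory estimates for hosting an LLM locally.

Approximation: weights-only footprint = parameters * bytes per parameter.
Ignores activations, KV cache, and runtime overhead, so treat as a floor.
"""
GIB = 1024 ** 3

def weights_gib(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / GIB

for params in (7, 30, 70, 400):
    fp16 = weights_gib(params, 2.0)   # 16-bit weights
    q4 = weights_gib(params, 0.5)     # ~4-bit quantization
    print(f"{params:>3}B params: ~{fp16:6.0f} GiB at fp16, ~{q4:6.0f} GiB at 4-bit")
```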
Fight back with AI — and advanced AppSec tooling
Black Duck's Mittal predicted that two major challenges will reshape software development in 2025: the legal uncertainty around AI-generated code and the growing speed of AI-powered cyberattacks. “AI-generated code is creating legal headaches that didn’t exist at this scale before. [More] than half of audited codebases had licensing conflicts, often because AI-generated code introduced dependencies with unclear or improper licensing,” he said.
If companies don’t get ahead of this, they could end up in legal battles over who owns their AI-generated code, Mittal said. “Some may even have to rewrite entire AI-assisted codebases just to avoid violating open-source licenses.”
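A low-cost first step toward spotting such conflicts is inventorying the licenses already declared in an environment. The sketch below is an illustration, not Black Duck's audit methodology: it reads self-reported license metadata for installed Python packages, which is a starting point for review rather than a substitute for a proper license audit.

```python
"""List declared licenses for installed Python packages.

Illustrative sketch: package metadata is self-reported and often incomplete,
so this is a starting point for review, not a substitute for a license audit.
"""
from collections import defaultdict
from importlib.metadata import distributions

by_license = defaultdict(list)
for dist in distributions():
    name = dist.metadata.get("Name", "unknown")
    license_field = dist.metadata.get("License") or "UNDECLARED"
    # Keep only the first line, truncated, since some packages embed full texts.
    by_license[license_field.splitlines()[0][:60]].append(name)

for license_name, packages in sorted(by_license.items()):
    print(f"{license_name}: {len(packages)} package(s)")
    for pkg in sorted(packages)[:5]:  # show a few examples per license
        print(f"  - {pkg}")
```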
At the same time, AI-powered cyberattacks are advancing faster than security teams can keep up. Mittal said attackers are using AI to automate vulnerability discovery, generate malware that adapts in real time, and launch deepfake-driven phishing attacks that are nearly impossible to detect.
“If companies aren’t using AI-driven security, they will fall behind. Attackers are already automating exploits and finding weaknesses faster than human teams can react. The only way to keep up is to fight AI with AI.”
—Akhil Mittal
Dhaval Shah, senior director of product management for ReversingLabs, said the next generation of AI, agentic AI, will present a whole new challenge that security leaders need to step up to.
The main challenge with agentic AI is that it will serve to abstract security problems even further. Organizations will need to build safeguards and governance around how the agents operate, the security of the code, and the security of the models that run them, all while maintaining and improving the traditional guardrails for the security and quality of code and logic that’s produced either by humans or AI.
"Securing AI in development is like playing chess where the pieces move by themselves. With AI in development, not everything that can be secured can be seen, and not everything that can be seen can be secured.”
—Dhaval Shah
In particular, agentic AI ratchets up the risks to software supply chain security, Shah said, explaining that the addition of AI agents to the development workflow challenges traditional models in two big ways.
"First, AI agents blur traditional trust boundaries by seamlessly mixing proprietary, open-source, and generated code, making traditional software composition analysis ineffective. Second, they introduce new dependencies we can't easily track or verify, from model weights to training data, creating blind spots in our security monitoring.”
—Dhaval Shah
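One simple control that shrinks part of that blind spot is pinning model artifacts by cryptographic hash, much as lockfiles pin package versions. The sketch below is a minimal illustration; the paths and digests are placeholders, and a hash match verifies integrity only, not the provenance or safety of the weights themselves.

```python
"""Verify a model artifact against a pinned SHA-256 digest before loading it.

Minimal illustration: paths and expected digests are placeholders, and a hash
match proves integrity, not that the weights or training data are trustworthy.
"""
import hashlib
from pathlib import Path

# Hypothetical lockfile of approved artifacts -> expected SHA-256 digests
PINNED = {
    "models/classifier-v3.bin": "0123456789abcdef" * 4,  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large weight files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path_str: str) -> bool:
    expected = PINNED.get(path_str)
    if expected is None:
        print(f"{path_str}: not in the approved-artifact list")
        return False
    ok = sha256_of(Path(path_str)) == expected
    print(f"{path_str}: {'verified' if ok else 'HASH MISMATCH - do not load'}")
    return ok

if __name__ == "__main__":
    verify("models/classifier-v3.bin")
```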
Shah said there are three major risks that AppSec pros will need to stay ahead of as agentic AI takes hold within their development organizations: dependency chain opacity, an expanded attack surface, and emergent behaviors.
Keep learning
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and join our Webinar to learn how RL discovered the novel threat.
- Upgrade your software security posture with RL's new essential guide, Software Supply Chain Security for Dummies.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.