Quote for the day:
"Leaders are more powerful role models
when they learn than when they teach." -- Rosabeth Moss Kanter

Software development teams face mounting challenges as security vulnerabilities pile up faster than they can be fixed. That's the key finding of Veracode's 15th annual State of Software Security (SoSS) report. ... According to Wysopal, several factors contribute to this prolonged remediation timeline:
- Growing Codebases and Complexity: As applications become larger and incorporate more third-party components, the scope for potential flaws increases, making it more time-consuming to isolate and remediate issues.
- Shifting Priorities: Many teams are under pressure to roll out new features rapidly. Security fixes are often deprioritized unless they are absolutely critical.
- Distributed Architectures: Modern microservices and container-based deployments can fragment responsibility and visibility. Coordinating fixes across multiple teams prolongs remediation.
- Shortage of Skilled AppSec Staff: Finding developers or security specialists with both security expertise and domain knowledge is challenging. Limited capacity can push out or delay fix timelines.
... "Many are using AI to speed up development processes and write code, which presents great risk," Wysopal said. "AI-generated code can introduce more flaws at greater velocity, unless they are thoroughly reviewed."

From a business perspective, generative AI cannot operate in a technical vacuum -- AI-savvy subject matter experts are needed to adapt the technology to specific business requirements -- that's the domain expertise career track. "As AI models become more commoditized, specialized domain knowledge becomes increasingly valuable," Challapally said. "What sets true experts apart is their deep understanding of their specific industry combined with the ability to identify where and how gen AI can be effectively applied within it." Often, he warned, bots alone cannot relay such specific knowledge. ... Business leaders say the most intense need at this time "is for professionals who bridge both worlds -- those who deeply understand business requirements while also grasping the technical fundamentals of AI," he said. Rather than pure technologists, they seek individuals who combine traditional business acumen with technical literacy. These are the types of people who can craft product visions, understand basic coding concepts, and gather sophisticated requirements that align technology capabilities with business goals. For those on the technical side, it's important "to master the art of prompting these tools to deliver accurate results," said Challapally.
Incident response has historically been a reactive process, often hampered by
time-consuming manual procedures and a lack of historical and real-time
visibility. When a breach is detected, security teams scramble to piece together
what happened, often working with fragmented information from multiple sources.
This approach is not only slow but also prone to errors, leading to extended
downtime, increased costs, and sometimes, the loss of crucial data. ... The
quicker an enterprise or MSSP organization can respond to an incident, the lower
the risk of disruption and the less damage it incurs. An innovative approach
that automates and streamlines the collection and analysis of data in near
real-time during a breach allows security teams to quickly understand the scope
and impact, enabling faster decision-making and minimizing downtime. ...
Automation reduces the risk of human error, which is often a significant factor
in traditional incident response processes built on fragmented methodologies. By
centralizing and correlating data from multiple sources, an automated
investigation system provides a more accurate, consistent and comprehensive view
of the incident, leading to better-informed, more effective containment and
remediation efforts.
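
To make the "centralize and correlate" idea concrete, here is a minimal sketch
in Swift; the SecurityEvent shape, the source names, and the per-host grouping
are illustrative assumptions rather than a description of any particular
product.

```swift
import Foundation

// Hypothetical normalized event record; the fields and source names are
// illustrative, not taken from any specific tool.
struct SecurityEvent {
    let timestamp: Date
    let source: String   // e.g. "edr", "firewall", "idp"
    let host: String
    let detail: String
}

// Centralize events from several sources into a single per-host,
// time-ordered view -- the kind of consolidated picture an automated
// investigation system can assemble far faster than manual collation.
func correlate(_ feeds: [[SecurityEvent]]) -> [String: [SecurityEvent]] {
    let merged = feeds.flatMap { $0 }
        .sorted { $0.timestamp < $1.timestamp }
    return Dictionary(grouping: merged, by: { $0.host })
}
```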

“Data governance” has become synonymous in some areas of academic study and
industry publication with the development of legislation, regulation, and
standards setting out rules and common requirements for how data should be
processed or put to use. It is also still considered synonymous with or a
sub-category of IT Governance in much of the academic literature. And let’s not
forget our friends in records and information management and their offshoot of
data governance. ... While there is extensive discussion in academia and in
practitioner literature about the need for people to lead on data and the
importance of people performing data stewardship-type roles, there is nothing
that has dug deeper to identify what we mean by “the right people.” ... In the
organizations of today, however, we are dealing with business leadership and
technology leadership for whom these topics simply did not exist when they were
engaged in study before entering the workforce. Therefore, they operate within
the modes of thinking, and apply the mental models, that were taught to them or
that have dominated the cultures of the organizations where they cut their teeth
and the roles they have held as they moved from entry-level positions to
management functions to leadership roles.

In 2025, resilience is the cornerstone of effective cybersecurity. The shift
from a defensive mindset to a proactive approach is evident in strategies such
as advanced attack surface analytics, continuous threat modeling and offensive
security testing. I’ve seen many penetration testing as a service (PTaaS)
providers place an emphasis on integrating continuous penetration testing with
attack surface management (ASM) as an example of how organizations can stay one
step ahead of adversaries. Organizations using continuous pentesting reported
30% fewer breaches in 2024 compared to those relying solely on annual
assessments, showcasing the value of a proactive approach. The adoption of
cybersecurity frameworks such as NIST and ISO 27001 provides a structured
approach to managing risks, but these frameworks must be tailored to the unique
needs of each enterprise. For example, enterprises operating in regulated
industries such as healthcare, finance and critical infrastructure must
prioritize compliance while addressing sector-specific vulnerabilities. CISOs
are focusing on data-driven decision making to quantify risks and justify
investments. By tying cybersecurity initiatives to financial outcomes, such as
reduced downtime and lower breach costs, CISOs can secure buy-in from
stakeholders and ensure long-term sustainability.
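
One common way to put a number on that kind of argument is an annualized loss
expectancy style calculation; the sketch below is purely illustrative, and the
figures, like the method itself, are assumptions rather than anything from the
article.

```swift
import Foundation

// Illustrative only: compare expected annual breach losses before and after a
// security initiative, then net out the initiative's cost. All figures are
// invented for the example.
let breachImpact = 2_000_000.0      // estimated cost of one incident ($)
let likelihoodBefore = 0.20         // expected incidents per year, today
let likelihoodAfter = 0.08          // expected incidents per year, with control
let initiativeCost = 150_000.0      // annual cost of the initiative ($)

let expectedLossBefore = breachImpact * likelihoodBefore   // $400,000
let expectedLossAfter = breachImpact * likelihoodAfter     // $160,000
let netAnnualBenefit = (expectedLossBefore - expectedLossAfter) - initiativeCost
print("Net annual benefit: $\(Int(netAnnualBenefit))")     // $90,000
```
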
AI is now moving from training to inference, helping you quickly make sense of
the information you have or create a plan from it. This is made possible by
improvements in how AI understands massive amounts of semi-structured data. New
AI can separate the signal from the noise, a critical step in framing the
cyber resilience problem. The power of AI as a programming language combined
with its ability to ingest semi-structured data opens up a new world of network
operations use cases. AI becomes an intelligent helpline, using the criteria you
feed it to provide guidance to troubleshoot, remediate, or resolve a network
security or availability problem. You get a resolution in hours or days – not
the weeks or months it would have taken to do it manually. ... AI is not the
same as automation; instead, it enhances automation by significantly speeding up
iteration, learning, and problem-solving processes. New AI allows you to
understand the entire scope of a problem before you automate and then automate
strategically. Instead of learning on the job – when you have a cyber resilience
challenge, and the clock is ticking – you improve your chances of getting it
right the first time. As the effectiveness of network automation increases, so
too will its adoption.
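
A rough sketch of that "intelligent helpline" pattern follows: bundle the
semi-structured evidence and your own criteria into a prompt and send it to
whatever model endpoint you use. The URL and JSON shape below are placeholders,
not any real provider's API.

```swift
import Foundation

// Sends semi-structured evidence plus your troubleshooting criteria to a
// model endpoint and returns its guidance. The endpoint and payload format
// are placeholders for illustration.
func requestTroubleshootingGuidance(logs: [String],
                                    criteria: String) async throws -> String {
    let prompt = """
    You are assisting with a network security or availability incident.
    Constraints and criteria: \(criteria)
    Recent semi-structured evidence:
    \(logs.joined(separator: "\n"))
    Suggest the most likely cause and the next remediation step.
    """

    var request = URLRequest(url: URL(string: "https://example.internal/llm")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: ["prompt": prompt])

    let (data, _) = try await URLSession.shared.data(for: request)
    return String(decoding: data, as: UTF8.self)
}
```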

We are led to consider ‘systems thinking’ to address cyber risk. This approach
examines how all the systems we oversee interact on a larger scale, uncovering
valuable insights to quantify and mitigate cyber risk. This perspective
encourages a paradigm shift and rethinking of traditional risk management
practices, emphasizing the need for a more integrated and holistic approach. The
evolving and increasingly sophisticated cyber risk landscape has heightened both
awareness and expectations around cybersecurity. Nowadays, businesses are being evaluated
based on their preparedness, resilience and how effectively they respond to
cyber risk. Moreover, it's crucial for companies to understand their disclosure
obligations across market and industry levels. Consequently, regulators and
investors demand that boards prioritize cybersecurity through strong governance.
... The CISO's role has evolved to include viewing cybersecurity not merely as
an IT issue but as a strategic and business risk. This shift demands that CISOs
possess a combination of technical expertise and strong communication skills,
enabling them to bridge the gap between technology and business leaders. They
should leverage predictive analytics or AI-based threat detection tools to
proactively manage emerging cyber risks.

Mobile apps run on specific devices and operating systems, which means that
certain operations are standard across every app instance. For example, in an
iOS app built on UIKit, the didFinishLaunchingWithOptions method informs the app
developer that a freshly launched app is almost ready to run. Listening for this
method in any app would in turn let you observe and learn more about the
completion of app launch automatically. Quick, out-of-the-box instrumentation
like this is easy to use. By importing an auto-instrumentation library to your
app, you can hook into the activity of your application without writing custom
code. Using auto-instrumentation provides standardized signals for actions that
should be recognized in a prescribed way. You could listen for app launch, as
described above, but also for the loading of views, for the beginning and ends
of network requests, crashes and so on. Observability would be great if imported
libraries did all the work. ... However, making sense of your mobile app
requires more than just monitoring the ubiquitous signals of mobile app
development. For one, mobile telemetry collection and transmission can be
limited by the operating system that the app user chooses, which is not designed
to transmit every signal of its own.
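
As a rough sketch of what hooking the launch signal can look like, here is a
simplified Swift/UIKit example; the LaunchObserver type and the use of plain
notification observation, rather than a vendor's auto-instrumentation library,
are assumptions made for illustration.

```swift
import UIKit

/// Minimal sketch of "listen for app launch" instrumentation. Real
/// auto-instrumentation libraries typically hook much deeper (swizzling
/// around didFinishLaunchingWithOptions, view loads, URLSession tasks,
/// crash handlers); this version only observes public notifications.
final class LaunchObserver {
    static let shared = LaunchObserver()
    private var launchTime: Date?
    private var tokens: [NSObjectProtocol] = []

    func start() {
        let center = NotificationCenter.default

        // Posted once application(_:didFinishLaunchingWithOptions:) returns,
        // i.e. the freshly launched app is almost ready to run.
        tokens.append(center.addObserver(
            forName: UIApplication.didFinishLaunchingNotification,
            object: nil, queue: .main, using: { [weak self] _ in
                self?.launchTime = Date()
            }))

        // Posted when the app becomes active; the delta is a coarse
        // "time to interactive" signal that telemetry could export.
        tokens.append(center.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main, using: { [weak self] _ in
                guard let start = self?.launchTime else { return }
                let ms = Date().timeIntervalSince(start) * 1000
                print("App launch completed in \(Int(ms)) ms")
            }))
    }
}
```

Calling LaunchObserver.shared.start() at the top of the app delegate's launch
method is enough to capture the signal; nothing else in the app's own logic has
to change, which is the appeal of this style of instrumentation.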

Understanding the full inventory of components involved in the data migration is
crucial. However, it is equally essential to have a clearly defined target and
to communicate this target to all stakeholders. This includes outlining the
potential implications of the migration for each stakeholder. The impact of the
migration will vary significantly depending on the nature of the project. For
example, a simple infrastructure refresh will have a much smaller impact than a
complete overhaul of the database technology. In the case of an infrastructure
refresh, the primary impact might be a brief period of downtime while the new
hardware is installed and the data is transferred. Stakeholders may need to
adjust their workflows to accommodate this downtime, but the overall impact on
their day-to-day operations should be minimal. On the other hand, a complete
change of database technology could have far-reaching implications. Stakeholders
may need to learn new skills to interact with the new database, and existing
applications may need to be modified or even completely rewritten to be
compatible with the new technology. This could result in a significant
investment of time and resources, and there may be a period of adjustment while
everyone gets used to the new system.

With AI, this problem gets exponentially worse. Let’s say a machine writes a
million lines of code – it can hold all of that in its head and figure things
out. But a human? Even if you wanted to address a problem, you couldn’t do so.
It’s impossible to sift through that amount of code you’ve never seen before
just to find where the problem might be. In our case, what made it particularly
tricky was that the AI-generated code had these very subtle logical flaws: not
even syntactic issues, just small problems in the execution logic that you
wouldn’t notice at a glance. The volume of technical debt increases not just
because of complexity, but simply because of the sheer amount of code being
shipped. It’s a natural law. Even as humans, if you ship more code, you will
have more bugs and you will have more debt. If you are exponentially increasing
the amount of code you’re shipping with AI, then yes, maybe you catch some
issues during review, but what slips through just gets shipped. The volume
itself becomes the problem. ... the solution lies in far better communication
throughout the whole organisation, coupled with robust processes and tooling.
... The tooling side is equally important. We’ve customised our AI tools’
settings to align with our tech stack and standards. Things like prompt
templates that enforce our coding style, pre-configured with our preferred
libraries and frameworks.
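
As one way such tooling configuration might look, here is a small sketch in
Swift; the stack, the rules, and the CodeGenTemplate helper are illustrative
assumptions, not the team's actual setup.

```swift
import Foundation

// Illustrative prompt-template helper. The idea is that every AI request
// carries the same standards, so generated code lands closer to the house
// style and the preferred libraries.
struct CodeGenTemplate {
    let stack = ["Swift 5.9", "SwiftUI", "Swift Concurrency (async/await)"]
    let rules = [
        "Prefer value types and dependency injection.",
        "No force unwrapping; handle errors explicitly.",
        "Include unit tests for any new public function."
    ]

    func prompt(for task: String) -> String {
        """
        Target stack: \(stack.joined(separator: ", "))
        House rules:
        \(rules.map { "- \($0)" }.joined(separator: "\n"))
        Task: \(task)
        """
    }
}

// Example: CodeGenTemplate().prompt(for: "Add retry logic to the sync client")
```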