Quote for the day:
"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett
What does aligning security to the business really mean?
“Alignment to me means that information security supports the strategy of the
organization,” says Sattler, who also serves as a board director with the
governance association ISACA. ... “It’s not enough to say it; you actually
have to do it,” she explains. “There is a contingent of cybersecurity that
sees itself as an island, implementing defense in depth in every corner of the
organization, adopting all these frameworks and standards, but there are
diminishing returns in doing that. So instead of saying, ‘This is our
cybersecurity discipline and we’re doing all these things because the
benchmarks tell us to,’ CISOs have to align their efforts to their
organization’s business model.” ... To align, she says, security leaders must
“know the objectives the business has and use those to shape strategy, whether
it’s cost containment, going into new markets, adopting cloud. The playbook
starts from understanding the organizational priorities and then layering in
what threat actors are doing in that industry and what could go wrong, what is
the risk we can live with, and understanding and articulating the business
impact of security incidents.” ... “When security is not aligned, security is
reacting to changes rather than shaping changes,” says Matt Gorham. “But when
security isn’t chasing the business, it’s because it’s at the table from the
beginning and is saying, ‘Here’s how I can help the business grow and grow
securely.’”
CISO Burnout – Epidemic, Endemic, or Simply Inevitable?
“Burnout and PTSD are different conditions, though they can coexist and share
some symptoms,” says Ventura. “The constant hypervigilance required in our
roles can mirror PTSD symptoms, and some cybersecurity professionals do
experience what could be considered secondary trauma from constantly dealing
with the aftermath of cyber-attacks.” Experiencing trauma can make you more
susceptible to burnout, and burnout can exacerbate existing trauma responses.
“Both conditions are serious and treatable, but they require different
approaches,” she suggests. And both are further complicated by
neurodivergence, a characteristic that is particularly prevalent in
cybersecurity, and especially among CISOs. ... “From my experience working
with senior cybersecurity leaders,” she continues, “burnout also affects
their ability to lead their teams effectively. They become less empathetic,
more prone to micromanaging, and, ironically, more likely to create the very
conditions that lead to burnout in their staff. The strategic thinking that
makes a great CISO (the ability to see the big picture, anticipate threats,
and balance risk with business needs) gets clouded by exhaustion and cynicism.
Perhaps most dangerously, burned-out CISOs often develop tunnel vision,
focusing obsessively on certain threats while missing others entirely. When
the person responsible for an organization’s entire security posture is
running on empty, everyone is at risk.”
Uncovering the risks of unmanaged identities
Unmanaged AI agents often operate independently, making it difficult to track
and monitor their activities without a centralized management system. These
agents can adapt and change their behavior autonomously, which complicates
efforts to predict and control their actions. While performing their duties,
AI agents can even spin up other models and agents that have access to
valuable data. ... Unmanaged identities significantly expand the attack
surface, providing more entry points for attackers. They are prime targets for
credential theft, which can lead to lateral movement within an organization’s
network. Forgotten or over-permissioned accounts can facilitate privilege
escalation, allowing attackers to gain unauthorized access to sensitive data.
Real-world breaches have been linked to unmanaged identities, underscoring the
critical need for effective identity management. ... Inefficient access
management due to unmanaged identities increases IT overhead and complexity.
Unauthorized access or accidental deletions can disrupt business operations,
leading to breaches, financial losses, and diminished customer trust. ...
Unmanaged identities present a clear and present danger to organizations. They
increase the risk of security breaches, compliance failures, and operational
disruptions. It is imperative for organizations to prioritize identity
discovery and management as a core security practice.
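Identity discovery lends itself to automation. Below is a minimal sketch, assuming an AWS/boto3 environment and a hypothetical 90-day staleness threshold, that flags IAM identities whose access keys have gone unused, one common signal of a forgotten account:

```python
# Minimal sketch: surface IAM users with long-unused access keys, a common
# signal of forgotten (unmanaged) identities. AWS/boto3 and the 90-day
# threshold are illustrative assumptions, not a prescribed toolchain.
from datetime import datetime, timedelta, timezone

import boto3

STALE_AFTER = timedelta(days=90)  # hypothetical review threshold

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if last_used is None or now - last_used > STALE_AFTER:
                print(f"Review identity: {name} (key {key['AccessKeyId']}, "
                      f"last used: {last_used or 'never'})")
```

The same inventory-then-review loop applies to service principals, API tokens, and the agent-spawned identities described above.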
Empowering Teams: Decentralizing Architectural Decision-Making
The Fractured Cloud: How CIOs Can Navigate Geopolitical and Regulatory Complexity
Initially, cloud environments were largely interchangeable from a governance,
compliance, and security perspective. It didn't really matter exactly which
cloud data center hosted an organization's workloads, or which jurisdiction
the data center was located in. IT leaders had the luxury of choosing cloud
platforms and regions based primarily on factors such as pricing and latency,
without having to consider geopolitics or the global regulatory environment.
Fast forward to the present, however, and planning a cloud architecture -- let
alone evolving an existing cloud strategy in response to changing needs -- has
become much more complex. ... During the past decade or so, a host of
regulations have emerged that apply to specific jurisdictions, including the
GDPR and the California Privacy Rights Act (CPRA). Regulations dealing with AI,
which are just now coming online, are likely to add even more diversity as
different states or countries introduce varying laws. ... A related issue is
the increasing pressure organizations face surrounding data localization,
which refers to the practice of keeping data within a certain country or
jurisdiction. Regulations require this in some cases. Even if they don't,
businesses may voluntarily choose to ensure data localization for the purposes
of improving workload performance, or to assure customers that their data
never leaves their home region.
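Localization requirements can also be audited directly. A minimal sketch, assuming S3 via boto3 and a hypothetical EU-only allow-list, that reports buckets residing outside the permitted regions:

```python
# Minimal sketch: compare where cloud storage actually resides against a
# data-localization allow-list. S3/boto3 and the EU-only region set are
# illustrative assumptions.
import boto3

ALLOWED_REGIONS = {"eu-central-1", "eu-west-1"}  # hypothetical policy

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # get_bucket_location returns None for us-east-1 (a legacy API quirk)
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    if region not in ALLOWED_REGIONS:
        print(f"Localization violation: bucket {name} resides in {region}")
```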
Let's Get Physical: A New Convergence for Electrical Grid Security
Power plants and transmission/distribution system operators (TSOs and DSOs) have
long focused on maintaining uptime and enhancing the resilience of their
services; keeping the lights on is always the goal. That's especially true as
the past few years have seen the rise of IT/OT convergence, wherein formerly
siloed equipment that runs physical processes for critical infrastructure
(operational technology, or OT) has been hooked up to the IT network and the
Internet in some cases, exposing it to more cyberthreats. Now, another type of
convergence has been forcing a new conversation. ... In this new world, both
industry regulators and analysts, like those at Black & Veatch, are arguing
the same point: that where once keeping the lights on might have just meant
maintaining equipment and avoiding fallen trees, today's grid operators need a
robust, integrated physical and cybersecurity strategy to maintain continuous
service. ... an IT operation might primarily concern itself with
firewalls or network monitoring; but “in many cases, cyberattacks can often
involve physical access to sites, whether by malicious insiders or unwitting
employees and contractors. Understanding who is present on-site, when and why,
is critical to investigating and mitigating attacks on operations," Bramson
explains.
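The correlation Bramson describes is straightforward to prototype. The record shapes, site names, and 15-minute window in this sketch are illustrative assumptions rather than any particular vendor's integration:

```python
# Minimal sketch: flag OT alerts that coincide with badge-access events at
# the same site within a short window. All data shapes are hypothetical.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)  # hypothetical correlation window

badge_events = [  # (who, site, when) from the physical access system
    ("contractor-17", "substation-A", datetime(2024, 5, 2, 9, 42)),
]
ot_alerts = [  # (alert, site, when) from OT monitoring
    ("unexpected PLC write", "substation-A", datetime(2024, 5, 2, 9, 50)),
]

for alert, alert_site, alert_time in ot_alerts:
    for who, badge_site, badge_time in badge_events:
        if alert_site == badge_site and abs(alert_time - badge_time) <= WINDOW:
            print(f"OT alert '{alert}' at {alert_site} coincides with "
                  f"on-site presence of {who} ({badge_time:%H:%M})")
```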
Was data mesh just a fad?
Data mesh architecture promised to solve these problems. A polar opposite of
the data lake approach, a data mesh gives the source team ownership of the
data and the responsibility to distribute the dataset. Other teams access the
data from the source system directly, rather than from a centralized data lake.
The data mesh was designed to be everything that the data lake system wasn’t.
... But the excitement around data mesh didn’t last. Many users became
frustrated. Beneath the surface, almost every bottleneck between data providers
and data consumers became an implementation challenge. The thing is, the data
mesh approach isn’t a one-and-done change, but a long-term commitment to
prepare a data schema in a certain way. Although every source team owns their
dataset, they must maintain a schema that allows downstream systems to read the
data, rather than replicating it. ... No, data mesh is not a fad, nor is it the
next big thing that will solve all of your data challenges. But data mesh can
dramatically reduce data management overhead, and at the same time improve data
quality, for many companies. In essence, data mesh is a shift in mindset, one
that completely changes the way you view data. Teams must envision data as a
product, with each source team making an ongoing commitment to own its dataset
and to discourage duplication.
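One way to picture that "data as a product" commitment is a versioned schema contract that the source team owns and downstream consumers read directly. Everything in this sketch (the team, fields, and versioning rule) is hypothetical:

```python
# Minimal sketch: a source team exposes its dataset as a product behind a
# versioned schema contract instead of letting consumers replicate it.
from dataclasses import dataclass
from datetime import date

SCHEMA_VERSION = "2.1"  # bumped only with a documented migration path

@dataclass(frozen=True)
class OrderRecord:
    """The contract the source (orders) team commits to maintaining."""
    order_id: str
    customer_id: str
    order_date: date
    total_cents: int  # integer cents avoids float rounding drift

def publish(records: list[OrderRecord]) -> dict:
    """Serve the dataset with its contract attached; ownership stays here."""
    return {
        "schema_version": SCHEMA_VERSION,
        "owner": "orders-team",
        "records": records,
    }

# A consumer reads from the source product rather than a private replica,
# pinning the major version it was built against.
product = publish([OrderRecord("o-1", "c-9", date(2024, 5, 2), 1999)])
assert product["schema_version"].startswith("2.")
```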
8 ways to make responsible AI part of your company's DNA
"Responsible AI is a team sport," the report's authors explain. "Clear roles and
tight hand-offs are now essential to scale safely and confidently as AI adoption
accelerates." To leverage the advantages of responsible AI, PwC recommends
rolling out AI applications within an operating structure with three "lines of
defense." First line: Builds and operates responsibly. Second line: Reviews and
governs. Third line: Assures and audits. ... "For tech leaders and managers,
making sure AI is responsible starts with how it's built," said Rohan Sen,
principal for cyber, data, and tech risk with PwC US. "To build trust and scale AI safely,
focus on embedding responsible AI into every stage of the AI development
lifecycle, and involve key functions like cyber, data governance, privacy, and
regulatory compliance," said Sen. ... "Start with a value statement around
ethical use," said Logan. "From here, prioritize periodic audits and consider a
steering committee that spans privacy, security, legal, IT, and procurement.
Ongoing transparency and open communication are paramount so users know what's
approved, what's pending, and what's prohibited. Additionally, investing in
training can help reinforce compliance and ethical usage." ... Make it a
priority to "continually discuss how to responsibly use AI to increase value for
clients while ensuring that both data security and IP concerns are addressed,"
said Tony Morgan, senior engineer at Priority Designs.
Context Engineering: The Next Frontier in AI-Driven DevOps
Context Engineering represents a significant evolution from the early days of
prompt engineering, which focused on crafting the perfect, isolated instruction
for an AI model. Context engineering, in contrast, is about orchestrating the
entire information ecosystem around the AI. It’s the difference between giving
someone a map (prompt engineering) and providing them with a real-time GPS that
shows traffic updates and road closures, and understands your personal driving
preferences. ... The core components of context engineering in a DevOps
environment include: Dynamic Information Assembly: Aggregating data from a
multitude of DevOps tools, including monitoring platforms, CI/CD pipelines, and
infrastructure as code (IaC) repositories. Multi-Source Integration: Connecting
to APIs, databases, and internal documentation to create a comprehensive view
of the entire system. Temporal Awareness: Understanding the history of changes,
incidents, and performance to identify patterns and predict future outcomes.
... In a traditional setup, the CI/CD pipeline would run a standard set of
tests. But with context engineering, a context-aware AI agent analyzes the
change. It recognizes the high-risk nature of the code, cross-references it
with a recent security audit that flagged a related library, and automatically
triggers an extended security testing suite. It also notifies the security team
for a priority review. This is a far cry from the old days of one-size-fits-all
pipelines.
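A minimal sketch of that risk-aware gate, with stubbed context sources and made-up risk rules standing in for real monitoring, audit, and CI integrations:

```python
# Minimal sketch: assemble context from several (stubbed) sources, then
# escalate the pipeline for high-risk changes. Every function and rule
# here is an illustrative assumption.
def changed_paths() -> list[str]:        # stub for the CI diff
    return ["services/payments/auth.py"]

def recent_audit_flags() -> set[str]:    # stub for security-audit findings
    return {"payments"}

def assemble_context() -> dict:
    paths = changed_paths()
    flags = recent_audit_flags()
    return {
        "touches_auth": any("auth" in p for p in paths),
        "flagged_area": any(area in p for p in paths for area in flags),
    }

def notify(team: str, message: str) -> None:
    print(f"[notify:{team}] {message}")

def plan_pipeline(ctx: dict) -> list[str]:
    stages = ["lint", "unit-tests"]
    if ctx["touches_auth"] or ctx["flagged_area"]:
        stages.append("extended-security-suite")  # escalate, not one-size-fits-all
        notify("security-team", "high-risk change queued for priority review")
    return stages

print(plan_pipeline(assemble_context()))
```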
Drowning in Data? Here’s Why You Need to Ditch the Rowboat for an Aircraft Carrier
In an effort to stay afloat, many enterprises are trying to patch their systems
with incremental upgrades. They add more cloud instances. They layer on external
tools. They spin up new teams to manage increasingly fragmented stacks. But
scaling up a fragile system doesn’t make it strong. It just makes the cracks
bigger. ... The deeper issue is this: the dominant architecture most
enterprises still rely on was designed over a decade ago. It served a world
where workloads operated in gigabytes or single-digit terabytes. Today,
companies are navigating hundreds of petabytes, yet many are still using
infrastructure built for a far smaller scale. It’s no wonder the systems are
buckling under the weight. ... As organizations reevaluate their data
architectures, several priorities are coming into sharper focus: Reducing
fragmentation by moving toward more unified environments, where systems work in
concert rather than in silos. Improving performance and cost-efficiency not just
through hardware, but through smarter architecture and workload optimization.
Lowering latency for high-demand workloads like geospatial, AI, and real-time
analytics, where speed directly impacts decision-making. Managing the energy
consumption bottleneck in ways that align with both financial and sustainability
goals. Ultimately, this shift is about enabling teams to go from playing defense
(maintaining systems and containing cost) to playing offense with faster, more
actionable insights.