Quote for the day:
"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox
The culture you can’t see is running your security operations
Non-observable culture is everything happening inside people’s heads. Their
beliefs about cyber risk. Their attitudes toward security. Their values and
priorities when security conflicts with convenience or speed. This is where
the real decisions get made. You can’t see someone’s belief that “we’re too
small to be targeted” or “security is IT’s job, not mine.” You can’t measure
their assumption that compliance equals security. You can’t audit their gut
feeling that reporting a mistake will hurt their career. But these invisible
forces shape every security decision your people make. Non-observable culture
includes beliefs about the likelihood and severity of threats. It includes how
people weigh security against productivity. It includes their trust in
leadership and their willingness to admit mistakes. It includes all the
cognitive biases that distort risk perception. ... Implicit culture is the
stuff nobody talks about because nobody even realizes it’s there. The unspoken
assumptions. The invisible norms. The “way things are done here” that everyone
knows but nobody questions. This is the most powerful layer because it
operates below conscious awareness. People don’t choose to follow implicit
norms. They just follow them. Automatically. Without thinking. Implicit
culture includes
unspoken beliefs like “security slows us down” or “leadership doesn’t really
care about this.” It contains hidden power dynamics that determine who can
challenge security decisions and who can’t.
The top 6 project management mistakes — and what to do instead
Project managers are trained to solve project problems. Scope creep. Missed
deadlines. Resource bottlenecks. ... Start by helping your teams understand
the business context behind the work. What problem are we trying to solve? Why
does this project matter to the organization? What outcome are we aiming for?
Your teams can’t answer those questions unless you bring them into the
strategy conversation. When they understand the business goals, not just the
project goals, they can start making decisions differently. Their
conversations change to ensure everyone knows why their work matters. ...
Right from the start of the project, you need to define not just the business
goal but how you’ll measure whether it was successful in business terms. Did the
project reduce cost, increase revenue, improve the customer experience? That’s
what you and your peers care about, but often that’s not the focus you ask the
project people to drive toward. ... People don’t resist because they’re lazy
or difficult. They resist because they don’t understand why it’s happening or
what it means for them. And no amount of process will fix that. With an
accelerated delivery plan designed to drive business value, your project teams
can now turn their attention to bringing people with them through the change
process. ... To keep people engaged in the project and help it keep
accelerating toward business goals, you need purpose-driven communication
designed to drive actions and decisions.
AI has static identity verification in its crosshairs. Now what?
Identity models based on “joiner–mover–leaver” workflows and static permission
assignments cannot keep pace with the fluid and temporary nature of AI agents.
These systems assume identities are created carefully, permissions are assigned
deliberately, and changes rarely happen. AI changes all of that. An agent can be
created, perform sensitive tasks, and terminate within seconds. If your
verification model only checks identity at login, you’re leaving the entire
session vulnerable. ... Securing AI-driven enterprises requires a shift similar
to what we saw in the move from traditional firewalls to zero-trust
architectures. We didn’t eliminate networks; we elevated policy and verification
to operate continuously at runtime. Identity verification for AI must follow the
same path. This means building a system that can: Assign verifiable identities
to every human and machine actor; Evaluate permissions dynamically based on
context and intent; Enforce least privilege at high velocity; Verify
actions, not just entry points; ... This is why frameworks like SPIFFE and
modern workload identity systems are receiving so much attention. They treat
identity as a short-lived, cryptographically verifiable construct that can be
created, used, and retired in seconds, exactly the model AI agents require.
Human activity is becoming the minority: autonomous systems that act faster
than we do are spun up and terminated before governance can keep up. That’s
why identity verification must shift from a checkpoint to a real-time
trust engine that evaluates every action from every actor, human or AI.
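What that model implies in practice is easy to sketch. The following Python
snippet is a minimal illustration, not a production design: the HMAC-signed
token stands in for a real SPIFFE-style SVID, and the names (issue_identity,
verify_action, report-agent-7) and the POLICY table are all hypothetical. It
demonstrates the pattern described above: identities that expire in seconds,
and a policy check that runs on every action rather than only at login.

```python
import hashlib
import hmac
import secrets
import time
from dataclasses import dataclass

# Hypothetical stand-in for a real issuer (e.g., a SPIFFE workload identity
# system). In production this key would live in an HSM or CA, not in memory.
SIGNING_KEY = secrets.token_bytes(32)

# Hypothetical least-privilege policy: each agent may do exactly what its
# task requires, nothing more.
POLICY = {"report-agent-7": {("read", "billing-db")}}

@dataclass
class AgentIdentity:
    agent_id: str
    issued_at: float
    ttl_seconds: float
    token: bytes  # HMAC over (agent_id, issued_at, ttl_seconds)

def issue_identity(agent_id: str, ttl_seconds: float = 30.0) -> AgentIdentity:
    """Mint a short-lived, verifiable identity for a just-created agent."""
    issued_at = time.time()
    msg = f"{agent_id}|{issued_at}|{ttl_seconds}".encode()
    token = hmac.new(SIGNING_KEY, msg, hashlib.sha256).digest()
    return AgentIdentity(agent_id, issued_at, ttl_seconds, token)

def verify_action(identity: AgentIdentity, action: str, resource: str) -> bool:
    """Re-verify the actor and evaluate policy on every action, not at login."""
    msg = f"{identity.agent_id}|{identity.issued_at}|{identity.ttl_seconds}".encode()
    expected = hmac.new(SIGNING_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(identity.token, expected):
        return False  # forged or tampered identity
    if time.time() > identity.issued_at + identity.ttl_seconds:
        return False  # expired: ephemeral agents get ephemeral identities
    return (action, resource) in POLICY.get(identity.agent_id, set())

agent = issue_identity("report-agent-7", ttl_seconds=5.0)
print(verify_action(agent, "read", "billing-db"))    # True
print(verify_action(agent, "delete", "billing-db"))  # False: outside least privilege
time.sleep(6)
print(verify_action(agent, "read", "billing-db"))    # False: identity expired
```

In a real deployment the token would be an X.509 or JWT SVID and the policy
table a context-aware engine, but the control flow is the point: mint a
short-lived identity, then verify every action against it.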
AWS European cloud service launch raises questions over sovereignty
AWS established a new legal entity to operate the European Sovereign Cloud under
a separate governance and operational model. The new company is incorporated in
Germany and run exclusively by EU residents, AWS said. ... “This is the elephant
in the room,” said Rene Buest, senior director analyst at Gartner. There are two
main concerns regarding the operation of AWS’s European Sovereign Cloud for
businesses in Europe. The first relates to the 2018 US Cloud Act, which could
require AWS to disclose customer data stored in Europe to the United States, if
requested by US authorities. The second involves the possibility of US
government sanctions: If a business that uses AWS services is subject to such
sanctions, AWS may be compelled to block that company’s access to its cloud
services, even if its data and operations are based in Europe. ... It’s an open
question at this stage, said Dario Maisto, senior analyst at Forrester. “Cases
will have to be tested in court before we can have a definite answer,” he said.
“The legal ownership does matter, and this is one of the points that may not be
addressed by the current setup of the AWS sovereign cloud.” AWS’s European
Sovereign Cloud represents one of several ways that European businesses can
approach the challenge of digital sovereignty. Gartner identifies a spectrum
that ranges from global hyperscaler public cloud services through to regional
cloud services that are based on non-hyperscaler technology.
Why peripheral automation is the missing link in end-to-end digital transformation
While organisations have successfully modernized their digital cores, the “last
mile” of business operations often remains fragmented, manual, and surprisingly
analogue. This gap is why Peripheral Automation is emerging not merely as a
tactical correction but as the critical missing link in achieving true,
end-to-end digital transformation. ... Peripheral Automation offers a strategic
resolution to this paradox. It’s an architectural philosophy that advocates
“differential innovation.” Rather than disrupting stable cores to accommodate
fleeting business needs, organisations build agile, tailored applications and
workflows that sit on top of the core systems. This approach treats the
enterprise as a layered ecosystem. The core remains the single source of truth,
but the periphery becomes the “system of engagement”. By leveraging modern
low-code platforms and composable architecture, leaders can deploy lightweight,
purpose-built automation tools that address specific friction points without
altering the underlying infrastructure. ... Peripheral automation reduces
process latency, manual effort, and rework. By addressing specific pain points
rather than attempting broad, multi-year system redesigns, companies unlock
measurable efficiency in weeks. This precision improves throughput, reduces
cycle times, and frees teams to focus on high-value work.
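As a concrete illustration of this layered pattern, here is a minimal Python
sketch. Everything in it is hypothetical: CORE_DB stands in for an ERP-style
system of record, get_invoice/update_status for its stable interface, and the
ApprovalStep rules for a peripheral workflow. The shape is what matters: the
periphery reads core state, applies its own lightweight rules, and writes
back only through the core's existing interface, never by modifying the core
itself.

```python
from dataclasses import dataclass
from typing import Callable

# The stable core: the single source of truth. In practice this would be an
# ERP or CRM reached over its published API; this dict is a stand-in.
CORE_DB = {"INV-1001": {"amount": 18500, "status": "received"}}

def get_invoice(invoice_id: str) -> dict:
    return dict(CORE_DB[invoice_id])        # periphery only reads core state

def update_status(invoice_id: str, status: str) -> None:
    CORE_DB[invoice_id]["status"] = status  # the one narrow write-back path

# The periphery: a lightweight, purpose-built workflow layered on top,
# the kind of thing a low-code platform would let a team assemble quickly.
@dataclass
class ApprovalStep:
    name: str
    applies: Callable[[dict], bool]  # does this step fire for this invoice?
    action: Callable[[str], None]    # what it does (approve, route, escalate)

def route_invoice(invoice_id: str, steps: list[ApprovalStep]) -> None:
    """Read core data, apply local rules, write back via the core's API."""
    invoice = get_invoice(invoice_id)
    for step in steps:
        if step.applies(invoice):
            step.action(invoice_id)

steps = [
    ApprovalStep("auto-approve small invoices",
                 lambda inv: inv["amount"] < 1000,
                 lambda iid: update_status(iid, "approved")),
    ApprovalStep("route large invoices to finance",
                 lambda inv: inv["amount"] >= 1000,
                 lambda iid: update_status(iid, "pending-finance-review")),
]

route_invoice("INV-1001", steps)
print(CORE_DB["INV-1001"]["status"])  # pending-finance-review
```

Swapping or adding a step changes the periphery in minutes; the core's
schema, logic, and data remain untouched, which is the "differential
innovation" the approach advocates.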
How does agentic ops transform IT troubleshooting?
AI Canvas introduces a fundamentally different user experience for network
troubleshooting. Rather than navigating through multiple dashboards and CLI
interfaces, engineers interact with a dynamic canvas that populates with
relevant widgets as troubleshooting progresses. You could say that the
“canvas” part of the name AI Canvas is the most important part. That is, AI
Canvas starts as a blank canvas every time you begin troubleshooting, then
fills it with boxes, on-the-fly widgets, and other elements as the
troubleshooting proceeds. Sampath confirms this: “When you ask a question,
it’s using and picking the right types of tools that it can go and execute on
a specific task and calls agents to be able to effectively take a task to
completion and returns a response back.” The system can spin up monitoring
agents that continuously provide updated information, creating a living
troubleshooting environment rather than static reports. ... AI Canvas doesn’t
exist in isolation. It builds on Cisco’s existing automation foundation. The
company previously launched Workflows, a no-code network automation engine,
and AI assistants with specific skills for network operations. “All of the
automations that are already baked into the workflows, the skills that were
built inside of the assistants, now manifest themselves inside of the
canvas,” Sampath details. This creates a continuum from deterministic
workflows to semi-autonomous assistants to fully autonomous agentic
operations.
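The tool-selection loop Sampath describes can be sketched in a few lines. The
Python below is a hypothetical illustration, not Cisco's implementation: the
tool names (check_interface, fetch_logs), the keyword-based pick_tools
selector standing in for the model's actual tool choice, and the widget dicts
are all invented for the example.

```python
from typing import Callable

# Hypothetical diagnostic tools; each returns a widget for the canvas.
def check_interface(target: str) -> dict:
    return {"widget": "interface-status", "target": target, "state": "up"}

def fetch_logs(target: str) -> dict:
    return {"widget": "log-viewer", "target": target, "lines": ["..."]}

TOOLS: dict[str, Callable[[str], dict]] = {
    "interface": check_interface,
    "logs": fetch_logs,
}

def pick_tools(question: str) -> list[str]:
    """Stand-in for the model's tool selection; in the real product an LLM
    chooses, while this sketch keyword-matches to stay self-contained."""
    return [name for name in TOOLS if name in question.lower()]

def ask(question: str, target: str) -> list[dict]:
    """Start from a blank canvas, run the chosen tools against the target,
    and return the widgets that should populate the canvas."""
    return [TOOLS[name](target) for name in pick_tools(question)]

print(ask("Why is the interface flapping? Show me the logs.", "switch-03"))
```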
"By acting as ambassadors, signatories are committing to a process of
transparency, development and continuous improvement. The implementation of this
code of practice will take time and, in doing so, may bring to light issues that
need to be addressed," DSIT said in a statement confirming the announcement.
"Signatories and policymakers will learn from these issues as well as the
successes and challenges for each organization and, where appropriate, will
share information to help develop and strengthen this government policy." ...
The Software Security Code of Practice was unveiled by the NCSC in May last
year, setting out a series of voluntary principles defining what good software
security looks like across the entire software lifecycle. Aimed at technology
providers and organizations that develop, sell, or procure software, the code
offers best practices for secure design and development, build-environment
security, and secure deployment and maintenance. The code also emphasizes the
importance of transparent communication with customers on potential security
risks and vulnerabilities. ... “The code moves software security beyond
narrow compliance and elevates it to a board-level resilience priority. As
supply chain attacks continue to grow in scale and impact, a shared baseline is
essential and through our global community and expertise, ISC2 is committed to
helping professionals build the skills needed to put secure-by-design principles
into practice.”
Privacy teams feel the strain as AI, breaches, and budgets collide
Where boards prioritize privacy, AI use appears more frequently and follows
defined direction. Larger enterprises, particularly those with broader risk and
compliance functions, also report higher uptake. In smaller organizations, or
those where privacy has limited visibility at the leadership level, AI adoption
remains tentative. Teams that apply privacy principles throughout system
development report higher use of AI for privacy tasks. In these environments, AI
supports ongoing work rather than introducing new approaches. ... Respondents
working in organizations where privacy has active board backing report more
consistent use of privacy by design. Budget stability shows a similar pattern,
with better-funded teams reporting stronger integration of privacy into design
and engineering work. The study also shows that privacy by design on its own
does not stop breaches. Organizations that experienced breaches report similar
levels of design practice as those that did not. The data places privacy by
design mainly in a governance and compliance role, with limited connection to
incident prevention. ... Governance shapes how teams view that risk.
Professionals in organizations where privacy lacks board priority report higher
expectations of a breach in the coming year. Gaps between privacy strategy and
broader business goals also appear alongside higher breach expectations,
suggesting that structural alignment influences outlook as much as technical
controls. Confidence remains common, even among organizations that have
experienced breaches.
Cyber Insights 2026: Information Sharing
The sheer volume of cyber threat intelligence being generated today is
overwhelming. “Information sharing channels often help condense inputs and
highlight genuine signals amid industry noise,” says Caitlin Condon, VP of
security research at VulnCheck. “The very nature of cyber threat intelligence
demands validation, context, and comparison. Information sharing allows
cybersecurity professionals to more rigorously assess rising threats, identify
new trends and deviations, and develop technically comprehensive guidance.” ...
“The importance of the Cybersecurity Information Sharing Act of 2015 for U.S.
national security cannot be overstated,” says Crystal Morin, cybersecurity
strategist at Sysdig. “Without legal protections, many legal departments would
advise security teams to pull back from sharing threat intelligence, resulting
in slower, more cautious processes. ...” CISOs have developed their own closed
communities where they can discuss current incidents with other CISOs. This is
done via channels such as Slack, WhatsApp and Signal. Security of the channels
is a concern, but who better than multiple CISOs to monitor and control
security? ... “Much of today’s threat intelligence remains reactive, driven by
short-lived IoCs that do little to help agencies anticipate or disrupt
cyberattacks,” comments BeyondTrust’s Greene. “We need to modernize our
information-sharing framework to emphasize behavior-based analytics enriched
with identity-centric context,” he continues.
Edge AI: The future of AI inference is smarter local compute
The bump in edge AI goes hand in hand with a broader shift in focus from AI
training, the act of preparing machine learning (ML) models with the right data,
to inference, the practice of actively using models to apply knowledge or make
predictions in production. “Advancements in powerful, energy-efficient AI
processors and the proliferation of IoT (internet of things) devices are also
fueling this trend, enabling complex AI models to run directly on edge devices,”
says Sumeet Agrawal ... “The primary driver behind the edge AI boom is the
critical need for real-time data processing,” says David. The ability to analyze
data on the edge, rather than using centralized cloud-based AI workloads, helps
direct immediate decisions at the source. Others agree. “Interest in edge AI is
experiencing massive growth,” says Informatica’s Agrawal. For him, reduced
latency is a key factor, especially in industrial or automotive settings where
split-second decisions are critical. There is also the desire to feed ML models
personal or proprietary context without sending such data to the cloud. “Privacy
is one powerful driver,” says Johann Schleier-Smith ... A smaller footprint for
local AI is helpful for edge devices, where resources like processing capacity
and bandwidth are constrained. As such, techniques to optimize small
language models (SLMs) will be a key area for AI on the edge. One strategy is
quantization, a model
compression technique that reduces model size and processing requirements.
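The mechanism is simple enough to show directly. The NumPy sketch below
illustrates asymmetric post-training quantization: float32 weights are mapped
to int8 with a scale and zero-point computed from their observed range. Real
edge toolchains (TensorFlow Lite, ONNX Runtime, and others) refine this with
per-channel scales and calibration data, but the 4x size reduction comes from
exactly this step.

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float, int]:
    """Map float32 weights onto int8 using a scale and zero-point
    derived from the observed min/max range (asymmetric quantization)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0           # 256 int8 levels span the range
    zero_point = round(-w_min / scale) - 128  # int8 code where 0.0 lands
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float weights for use at inference time."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)

print(f"size: {w.nbytes} -> {q.nbytes} bytes")  # 4x smaller
print(f"max reconstruction error: {np.abs(w - w_hat).max():.5f}")
```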