Quote for the day:
“There is no failure except in no longer trying.” -- Chris Bradford
Google's DMARC Push Pays Off, but Email Security Challenges Remain

Large email senders are not the only groups quickening the pace of DMARC
adoption. The latest Payment Card Industry Data Security Standard (PCI DSS)
version 4.0 requires DMARC for all organizations that handle credit card
information, while the European Union's Digital Operational Resilience Act
(DORA) makes DMARC a necessity for its ability to report on and block email
impersonation, Red Sift's Costigan says. "Mandatory regulations and legislation
often serve as the tipping point for most organizations," he says. "Failures to
do reasonable, proactive cybersecurity — of which email security and DMARC is
obviously a part — are likely to meet with costly regulatory actions and the
prospect of class action lawsuits." Overall, the authentication specification is
working as intended, which explains its arguably rapid adoption, says Roger
Grimes, a data-driven-defense evangelist at security awareness and training firm
KnowBe4. Other cybersecurity standards, such as DNSSEC and IPsec, have been
around longer, but DMARC adoption has outpaced them, he maintains. "DMARC stands
alone as the singular success as the most widely implemented cybersecurity
standard introduced in the last decade," Grimes says.
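As a concrete illustration of the record behind all this, here is a minimal sketch of how a team might check the DMARC policy a domain actually publishes; it assumes the third-party dnspython package, and example.com stands in for a real domain.

```python
# Minimal sketch: read the DMARC policy a domain publishes in DNS.
# Assumes the third-party dnspython package (pip install dnspython);
# "example.com" is a placeholder domain.
import dns.resolver

def get_dmarc_policy(domain):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published at all
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            return record  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

print(get_dmarc_policy("example.com"))
```

A policy of p=none only monitors; p=quarantine or p=reject is what actually blocks impersonation, which is the capability the regulations cited above are pushing senders toward.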
Can Your Security Measures Be Turned Against You?
Over-reliance on certain security products might also allow attackers to extend
their reach across various organizations. For example, the recent failure of
CrowdStrike’s endpoint detection and response (EDR) tool, which caused
widespread global outages, highlights the risks associated with depending too
heavily on a single security solution. Although this incident wasn’t the result
of a cyber attack, it clearly demonstrates the potential issues that can arise
from such reliance. For years, the cybersecurity community has been aware of the
risks posed by vulnerabilities in security products. A notable example from 2015
involved a critical flaw in FireEye’s email protection system, which allowed
attackers to execute arbitrary commands and potentially take full control of the
device. More recently, a vulnerability in Proofpoint’s email security service
was exploited in a phishing campaign that impersonated major corporations like
IBM and Disney. ... Windows SmartScreen is designed to shield users from malicious
software, phishing attacks, and other online threats. Initially launched with
Internet Explorer, SmartScreen has been a core part of Windows since version
8.
Why Zero Trust Will See Alert Volumes Rocket

As the complexity of zero trust environments grows, so does the need for tools
to handle the data explosion. Hypergraphs and generative AI are emerging as
game-changers, enabling SOC teams to connect disparate events and uncover
hidden patterns. Telemetry collected in zero trust environments is a treasure
trove for analytics. Every interaction, whether permitted or denied, is
logged, providing the raw material for identifying anomalies. The
cybersecurity industry has set standards for exchanging and documenting
threat intelligence. By leveraging structured frameworks like MITRE
ATT&CK, MITRE D3FEND, and OCSF, activities can be enriched with contextual
information, enabling better detection and decision-making. Hypergraphs go
beyond traditional graphs by representing relationships between multiple
events or entities. They can correlate disparate events. For example, a
scheduled task combined with denied AnyDesk traffic and browsing to MegaUpload
might initially seem unrelated. However, hypergraphs can connect these dots,
revealing the signature of a ransomware attack like Akira. By analysing
historical data, hypergraphs can also predict attack patterns, allowing
SOC teams to anticipate the next steps of an attacker and defend
proactively.
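To make the hyperedge idea concrete, here is a minimal sketch; the event names, pattern labels, and simple set-containment matching are illustrative assumptions rather than how any particular SOC product works.

```python
# Minimal sketch: a hyperedge links a whole *set* of events to one label,
# where an ordinary graph edge links only two nodes. Event names and the
# pattern labels are illustrative, not real detection content.
known_hyperedges = {
    frozenset({"scheduled_task_created",
               "anydesk_traffic_denied",
               "megaupload_browsing"}): "Akira-like ransomware staging",
    frozenset({"mass_file_rename",
               "shadow_copy_deleted"}): "encryption in progress",
}

def match_patterns(observed_events):
    """Label every hyperedge wholly contained in the observed telemetry."""
    return [label for edge, label in known_hyperedges.items()
            if edge <= observed_events]

telemetry = {"scheduled_task_created", "anydesk_traffic_denied",
             "megaupload_browsing", "routine_dns_lookup"}
print(match_patterns(telemetry))  # ['Akira-like ransomware staging']
```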
Capable Protection: Enhancing Cloud-Native Security
Much like in a game of chess, anticipating your opponent’s moves and
strategizing accordingly is key to security. Understanding the value and
potential risks associated with non-human identities (NHIs) and Secrets is
the first step towards securing your digital environment. Remediation
prioritization plays a crucial role in managing NHIs. Identifying and
classifying NHIs enables businesses to react promptly and adequately to any
potential vulnerabilities. Furthermore, awareness and education are
fundamental to minimizing human-induced breaches. ... Cybersecurity must adapt. The
traditional, human-centric approach to cybersecurity is inadequate.
Integrating an NHI management strategy into your cybersecurity plan is
therefore a strategic move. Not only does it enhance an organization’s
security posture, but it also facilitates regulatory compliance. Coupled
with the potential for substantial cost savings, it’s clear that NHI
management is an investment with significant returns. For many
organizations, the challenge today lies in striking a balance between speed
and security. Rapid deployment of applications and digital services is
essential for maintaining competitive advantage, yet this can often be at
odds with the need for adequate cybersecurity.
Attackers Exploit Cryptographic Keys for Malware Deployment

Microsoft recommends developers avoid using machine keys copied from public
sources and rotate keys regularly to mitigate risks. The company also removed
key samples from its documentation and provided a script for security teams to
identify and replace publicly disclosed keys in their environments. Microsoft
Defender for Endpoint also includes an alert for publicly exposed ASP.NET
machine keys, though the alert itself does not indicate an active attack.
Organizations running ASP.NET applications, especially those deployed in web
farms, are urged to replace fixed machine keys with auto-generated values
stored in the system registry. If a web-facing server has been compromised,
rotating the machine keys alone may not eliminate persistent threats.
Microsoft recommends conducting a full forensic investigation to detect
potential backdoors or unauthorized access points. In high-risk cases,
security teams should consider reformatting and reinstalling affected systems
to prevent further exploitation, the report said. Organizations should also
implement best practices such as encrypting sensitive configuration files,
following secure DevOps procedures and upgrading applications to ASP.NET
4.8.
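As a rough sketch of the fixed-key audit described above (Microsoft's own script remains the authoritative tool), the following walks a directory tree and flags web.config files whose machineKey element carries explicit keys rather than the AutoGenerate default; the scan root and detection rule are assumptions.

```python
# Minimal sketch: flag web.config files whose <machineKey> element pins
# fixed validationKey/decryptionKey values instead of the AutoGenerate
# default. The scan root and detection rule are assumptions; Microsoft's
# own script remains the authoritative check for publicly disclosed keys.
import pathlib
import xml.etree.ElementTree as ET

def find_fixed_machine_keys(root):
    flagged = []
    for config in pathlib.Path(root).rglob("web.config"):
        try:
            tree = ET.parse(config)
        except ET.ParseError:
            continue  # skip malformed or non-XML configs
        for mk in tree.iter("machineKey"):
            vk = mk.get("validationKey", "AutoGenerate,IsolateApps")
            dk = mk.get("decryptionKey", "AutoGenerate,IsolateApps")
            # AutoGenerate keeps the key in the machine's registry; any
            # explicit hex value is a fixed key that may have been copied.
            if not vk.startswith("AutoGenerate") or not dk.startswith("AutoGenerate"):
                flagged.append(config)
    return flagged

for path in find_fixed_machine_keys(r"C:\inetpub"):
    print(path)
```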
The race to AI in 2025: How businesses can harness connectivity to pick up pace

When it comes to optimizing cloud workloads and migrating to available data
centers, connectivity is the “make or break” technology. This is why Internet
Exchanges (IXs) – physical platforms where multiple networks interconnect to
exchange traffic directly with one another via peering – have become
indispensable. An IX allows businesses to bypass the public Internet and find
the shortest and fastest network pathways for their data, dramatically
improving performance and reducing latency for all participants. Importantly,
smart use of an IX facility will enable businesses to connect seamlessly to
data centers outside of their “home” region, removing geography as a barrier
and easing the burden on data center hubs. This form of connectivity is
becoming increasingly popular, with the number of IXs in the US surging by
more than 350 percent in the past decade. The use of IXs itself is nothing
new, but what is relatively new is the neutral model they now employ. A
neutral IX isn’t tied to a specific carrier or data center, which means
businesses have more connectivity options open to them, increasing redundancy
and enhancing resilience. Our own research in 2024 revealed that more than 80
percent of IXs in the US are now data center and carrier-neutral, making it
the dominant interconnection model.
The hidden threat of neglected cloud infrastructure

Because this bucket was left unattended for over a decade, malicious actors
could have reregistered it to deliver malware or launch devastating supply
chain attacks.
Fortunately, researchers notified CISA, which promptly secured the vulnerable
resource. The incident illustrates how even organizations dedicated to
cybersecurity can fall prey to the dangers of neglected digital
infrastructure. This story is not an anomaly. It indicates a systemic issue
that spans industries, governments, and corporations. ... Entities attempting
to communicate with these abandoned assets include government organizations
(such as NASA and state agencies in the United States), military networks,
Fortune 100 companies, major banks, and universities. The fact that these
large organizations were still relying on mismanaged or forgotten resources is
a testament to the pervasive nature of this oversight. The researchers
emphasized that this issue isn’t specific to AWS, the organizations
responsible for these resources, or even a single industry. It reflects a
broader systemic failure to manage digital assets effectively in the cloud
computing age. The researchers noted the ease of acquiring internet
infrastructure—an S3 bucket, a domain name, or an IP address—and a
corresponding failure to institute strong governance and life-cycle management
for these resources.
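A minimal sketch of the kind of life-cycle check the researchers' findings call for, assuming boto3 and valid AWS credentials, with made-up bucket names: given bucket names still referenced in code or configuration, ask S3 whether each one still exists.

```python
# Minimal sketch: check whether S3 bucket names referenced in old code or
# configs still exist. A name that returns 404 has been deleted and could
# be re-registered by anyone, turning stale references into a supply chain
# risk. Assumes boto3 with valid AWS credentials; names are made up.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
referenced_buckets = ["legacy-install-artifacts", "old-agency-updates"]

for name in referenced_buckets:
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name}: still exists")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            print(f"{name}: GONE - claimable by anyone")
        elif code == "403":
            print(f"{name}: exists but owned by another account")
        else:
            raise
```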
DevOps Evolution: From Movement to Platform Engineering in the AI Era
After nearly 20 years of DevOps, Grabner sees an opportunity to address
historical confusion while preserving core principles. “We want to solve the
same problem – reduce friction while improving developer and operational
efficiency. We want to automate, monitor, and share.” Platform engineering
represents this evolution, enabling organizations to scale DevOps best
practices through self-service capabilities. “Platform engineering allows us
to scale DevOps best practices in an enterprise organization,” Grabner
explains. “What platform engineering does is provide self-services to
engineers so they can do everything we wanted DevOps to do for us.” At
Dynatrace Perform 2025, the company announced several innovations supporting
this evolution. The enhanced Davis AI engine now enables preventive
operations, moving beyond reactive monitoring to predict and prevent incidents
before they occur. This includes AI-powered generation of artifacts for
automated remediation workflows and natural language explanations with
contextual recommendations. The evolution is particularly evident in how
observability is implemented. “Traditionally, observability was always an
afterthought,” Grabner explains.
Bridging the IT Gap: Preparing for a Networking Workforce Evolution

People coming out of university today are far more likely to be experienced in
Amazon Web Services (AWS) and Azure than in Border Gateway Protocol (BGP) and
Ethernet virtual private network (EVPN). They have spent more time with
Kubernetes than with a router or switch command line. Sure, when pressed into
action and supported by senior staff or technical documentation, they can
perform. But the industry is notorious for its bespoke solutions, snowflake
workflows, and poor documentation. None of this ought to be a surprise. At
least part of the allure of the cloud for many is that it carries the illusion
of pushing problems to another team. Of course, this is hardly true. No
company should abdicate architectural and operational responsibility entirely.
But in our industry’s rush to new solutions, there are countless teams for
which this was an unspoken objective. Regardless, what happens to companies
when the people skilled enough to manage the complexity are no longer on call?
Perhaps you’re a pessimist and feel that the next generation of IT pros is
somehow less capable than in the past. The NASA engineers who landed a man on
the moon may have similar things to say about today’s rocket scientists who
rely heavily on tools to do the math for them.
A View on Understanding Non-Human Identities Governance
NHIs inherently require connections to other systems and services to fulfill
their purpose. This interconnectivity means every NHI becomes a node in a web
of interdependencies. From an NHI governance perspective, this necessitates
maintaining an accurate and dynamic inventory of these connections to manage
the associated risks. For example, if a single NHI is compromised, what does
it connect to, and what could an attacker access or move into laterally?
Proper NHI governance must include tools to map and monitor these
relationships. While there are many ways to go about this manually, what we
actually want is an automated way to tell what is connected to what, what is
used for what, and by whom. When thinking in terms of securing our systems, we
can leverage another important fact about all NHIs in a secured application to
build that map: they all, necessarily, have secrets. ... Essentially, two
risks make understanding the scope of a secret critical for enterprise
security. The first is that misconfigured or over-privileged secrets can
inadvertently grant access to sensitive data or critical systems,
significantly increasing the attack surface. Imagine accidentally granting
write privileges to a system that can access your customers' PII. That is a
ticking time bomb waiting for a threat actor to find and exploit.
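A minimal sketch of that blast-radius question, assuming a hand-maintained inventory with hypothetical names; a real deployment would derive the map from secret scanners and audit logs.

```python
# Minimal sketch: compute the "blast radius" of a compromised NHI from a
# connection inventory. The inventory here is hand-written and every name
# is hypothetical; in practice it would be generated from secret scanners
# and cloud audit logs.
from collections import deque

# Each identity maps to the systems its secrets grant access to.
nhi_connections = {
    "ci-deploy-bot": ["artifact-store", "prod-cluster"],
    "prod-cluster": ["customer-db", "payment-api"],
    "reporting-svc": ["customer-db"],
}

def blast_radius(compromised):
    """Every system reachable, directly or laterally, from one NHI."""
    reachable, queue = set(), deque([compromised])
    while queue:
        for target in nhi_connections.get(queue.popleft(), []):
            if target not in reachable:
                reachable.add(target)
                queue.append(target)
    return reachable

print(sorted(blast_radius("ci-deploy-bot")))
# ['artifact-store', 'customer-db', 'payment-api', 'prod-cluster']
```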