Quote for the day:
"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham
Cyber Insights 2025: APIs – The Threat Continues
APIs are easily written, often with low-code / no-code tools. Developers often
treat them as unimportant compared with the apps they connect, and assume they
are protected by the same tools that protect those apps. Bad call.
“API attacks will increase in 2025 due to this over-reliance on existing
application security and API management tools, but also due to organizations
dragging their heels when it comes to protecting APIs,” says James Sherlow,
systems engineering director of EMEA at Cequence Security. “While there was
plenty of motivation to roll out APIs to stand up new services and support
revenue streams, the same incentives are not there when it comes to protecting
them.” Meanwhile, attackers are becoming increasingly sophisticated. “In
contrast, threat actors are not resting on their laurels,” he
continued. “It’s now not uncommon for them to use multi-faceted attacks that
seek to evade detection and then dodge and feint when the attack is blocked, all
the time waiting until the last minute to target their end goal.” In short, he
says, “It’s not until the business is breached that it wakes up to the fact that
API protection and application protection are not one and the same thing. Web
Application Firewalls, Content Delivery Networks, and API Gateways do not
adequately protect APIs.”
Box-Checking or Behavior-Changing? Training That Matters
The pressure to meet these requirements is intense, and when a company finds an
“acceptable” solution, it too often just checks the box, knowing it is
compliant, and sticks with that solution in perpetuity, whether or not it
creates a more secure workplace and behavioral change. Training programs
designed purely
to meet regulations are rarely effective. These initiatives tend to rely on
generic content that employees skim through and forget. Organizations may meet
the legal standard, but they fail to address the root causes of risky behavior.
... To improve outcomes, training programs must connect with people on a more
practical level. Tailoring the content to fit specific roles within the
organization is one way to do this. The threats a finance team faces, for
example, are different from those encountered by IT professionals, so their
training should reflect those differences. When employees see the relevance of
the material, they are more likely to engage with it. Professionals in security
awareness roles can distinguish themselves by designing programs that meet these
needs. Equally important is embracing the concept of continuous learning. Annual
training sessions often fail to stick. Smaller, ongoing lessons delivered
throughout the year help employees retain information and incorporate it into
their daily routines.
OpenAI opposes data deletion demand in India citing US legal constraints
OpenAI has informed the Delhi High Court that any directive requiring it to
delete training data used for ChatGPT would conflict with its legal obligations
under US law. The statement came in response to a copyright lawsuit filed by the
Reuters-backed Indian news agency ANI, marking a pivotal development in one of
the first major AI-related legal battles in India. ... This case mirrors global
legal trends, as OpenAI faces similar lawsuits in the United States and beyond,
including from major organizations like The New York Times. OpenAI maintains its
position that it adheres to the “fair use” doctrine, leveraging publicly
available data to train its AI systems without infringing intellectual property
laws. In the case of Raw Story Media v. OpenAI, heard in the Southern District
of New York, the plaintiffs accused OpenAI of violating the Digital Millennium
Copyright Act (DMCA) by stripping copyright management information (CMI) from
their articles before using them to train ChatGPT. ... In the ANI v OpenAI case,
the Delhi High Court has framed four key issues for adjudication, including
whether using copyrighted material for training AI models constitutes
infringement and whether Indian courts have jurisdiction over a US-based
company. Nath’s view aligns with broader concerns over how existing legal
frameworks struggle to keep pace with AI advancements.
Defense strategies to counter escalating hybrid attacks
Threat actor profiling plays a pivotal role in uncovering hybrid operations by
going beyond surface-level indicators and examining deeper contextual
elements. Profiling involves a thorough analysis of the actor’s history, their
strategic objectives, and their operational behaviors across campaigns. For
example, understanding the geopolitical implications of a ransomware attack
targeting a defense contractor can reveal espionage motives cloaked in
financial crime. Profiling allows researchers to differentiate between purely
financial motivations and state-sponsored objectives masked as criminal
operations. Hybrid actors often leave “behavioral fingerprints” – unique
combinations of techniques and infrastructure reuse – that, when analyzed
within the context of their history, can expose their true intentions. ...
Threat intelligence feeds enriched with historical data can help correlate
real-time events with known threat actor profiles. Additionally, implementing
deception techniques, such as industry-specific honeypots, can reveal
operational objectives and distinguish between actors based on their response
to decoys. ... Organizations must adapt by adopting a defense-in-depth
strategy that combines proactive threat hunting, continuous monitoring, and
incident response preparedness.
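The "behavioral fingerprint" idea above can be illustrated as a simple overlap score between the techniques observed in an incident and the technique sets associated with known actor profiles. The sketch below is purely illustrative: the profile names, technique IDs, and Jaccard-style scoring are assumptions, not a real threat-intelligence schema or feed format.

```python
# Hypothetical sketch: ranking known actor profiles by how much their known
# technique sets overlap with the TTPs observed in an incident.
# Profile names and technique IDs below are made up for illustration.

KNOWN_PROFILES = {
    "FIN-X (financial)": {"T1486", "T1566", "T1078"},
    "APT-Y (espionage)": {"T1486", "T1071", "T1020", "T1005"},
}

def score_profiles(observed_ttps, profiles=KNOWN_PROFILES):
    """Rank profiles by Jaccard overlap between observed and known techniques."""
    observed = set(observed_ttps)
    scores = {}
    for name, fingerprint in profiles.items():
        union = observed | fingerprint
        scores[name] = len(observed & fingerprint) / len(union) if union else 0.0
    # Highest-overlap profile first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# An incident that looks like ransomware (T1486) but also shows data-staging
# and exfiltration techniques -- espionage cloaked in financial crime.
ranked = score_profiles({"T1486", "T1005", "T1020", "T1071"})
```

In this toy run the espionage profile outranks the financial one even though the headline technique (encryption for impact) is shared, which is the point of looking past surface-level indicators.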
4 Cybersecurity Misconceptions to Leave Behind in 2025
Workers need to avoid falling into a false sense of security, and
organizations must ensure that they are frequently updating advice and
strategies to reduce the likelihood of their employees falling victim. In
addition, we found that this confidence doesn’t necessarily translate into
action. A notable portion of those surveyed (29%) admit that they don’t
report suspicious messages even when they do identify a phishing scam,
despite the presence of convenient reporting tools like “report phishing”
buttons. ... Our second misconception stems from workers’ sense of
helplessness. This kind of cyber apathy can become a dangerous
self-fulfilling prophecy if left unaddressed. The key problem is that even
if it’s true that information is already online, this isn’t equivalent to
being directly under threat, and there are different levels of risk. It’s
one thing knowing someone has your home address; knowing they have your
front door key in their pocket is quite another. Even if it’s hard to keep
all of your data hidden, that doesn’t mean it’s not worth taking steps to
keep key information protected. While it can seem impossible to stay safe
when so much personal data is publicly available, this should be the impetus
to bolster cybersecurity practices, such as not including personal
information in passwords.
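The last recommendation above, keeping personal information out of passwords, is one that can be checked mechanically at password-set time. Below is a minimal sketch of such a check; the profile fields and the three-character minimum token length are illustrative assumptions, not a standard.

```python
# Illustrative sketch: flag passwords that contain a user's known personal
# details. The profile fields below are hypothetical examples.

def contains_personal_info(password, personal_tokens):
    """Return True if any personal token appears in the password (case-insensitive)."""
    pw = password.lower()
    for token in personal_tokens:
        token = token.lower()
        # Ignore very short tokens to avoid false positives on single letters
        if len(token) >= 3 and token in pw:
            return True
    return False

profile = ["Alice", "1984", "Elm Street", "Rex"]  # name, birth year, address, pet
contains_personal_info("Rex1984!", profile)    # True: pet name and birth year
contains_personal_info("t8#qLm?vZ2", profile)  # False
```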
Real datacenter emissions are a dirty secret
With legislation such as the EU's Corporate Sustainability Reporting
Directive (CSRD) now in force, customers and resellers alike are expecting
more detailed carbon emissions reporting across all three Scopes from
suppliers and vendors, according to Canalys. This expectation of
transparency is increasingly important in vendor selection processes because
customers need their vendors to share specific numbers to quantify the
environmental impact of their cloud usage. "AWS has continued to fall behind
its competitors here by not providing Scope 3 emissions data via its
Customer Carbon Footprint Tool, which is still unavailable," Caddy claimed.
"This issue has frustrated sustainability-focused customers and partners
alike for years now, but as companies prepare for CSRD disclosure, this lack
of granular emissions disclosure from AWS can create compliance challenges
for EU-based AWS customers." We asked Amazon why it doesn't break out the
emissions data for AWS separately from its other operations, but while the
company confirmed this is so, it declined to offer an explanation, as did
Microsoft and Google. In a statement, an AWS spokesperson told us: "We
continue to publish a detailed, transparent report of our year-on-year
progress decarbonizing our operations, including across our datacenters, in
our Sustainability Report."
5 hot network trends for 2025
AI will generate new levels of network traffic, new requirements for low
latency, and new layers of complexity. The saving grace, for network
operators, is AIOps – the use of AI to optimize and automate network
processes. “The integration of artificial intelligence (AI) into IT
operations (ITOps) is becoming indispensable,” says Forrester analyst Carlos
Casanova. “AIOps provides real-time contextualization and insights across
the IT estate, ensuring that network infrastructure operates at peak
efficiency in serving business needs.” ... AIOps can deliver proactive issue
resolution; it plays a crucial role in embedding zero trust into networks by
detecting and mitigating threats in real time; and it can help network execs
reach the Holy Grail of “self-managing, self-healing networks that could
adapt to changing conditions and demands with minimal human intervention.”
... Industry veteran Zeus Kerravala predicts that 2025 will be the year that
Ethernet becomes the protocol of choice for AI-based networking. “There is
currently a holy war regarding InfiniBand versus Ethernet for networking for
AI with InfiniBand having taken the early lead,” Kerravala says. Ethernet
has seen tremendous advancements over the last few years, and its
performance is now on par with InfiniBand, he says, citing a recent test
conducted by World Wide Technology.
Building the Backbone of AI: Why Infrastructure Matters in the Race for Adoption
One of the primary challenges facing businesses when it comes to AI is
having the foundational infrastructure to make it work. Depending on the use
case, AI can be an incredibly demanding technology. Some algorithmic AI
workloads use real-time inference, which will grossly underperform without a
direct, high-bandwidth, low-latency connection. ... An organization’s path
to the cloud is really the central pillar of any successful AI strategy. The
sheer scale at which organizations are harvesting and using data means that
storing every piece of information on-premises is simply no longer viable.
Instead, cloud-based data lakes and warehouses are now commonly used to
store data, and having streamlined access to this data is essential. But
this shift isn’t just about scale or storage – it’s about capability. AI
models, particularly those requiring intensive training, often reside in the
cloud, where hyperscalers can offer the power density and GPU capabilities
that on-premises data centers typically cannot support. Choosing the right
cloud provider in this context is of course vital, but the real game-changer
lies not in the who of connectivity, but the how. Relying on the public
internet for cloud access creates bottlenecks and risks, with unpredictable
routes, variable latency, and compromised security.
Why all developers should adopt a safety-critical mindset
Safety-critical industries don’t just rely on reactive measures; they also
invest heavily in proactive defenses. Defensive programming is a key
practice here, emphasizing robust input validation, error handling, and
preparation for edge cases. This same mindset can be invaluable in
non-critical software development. A simple input error could crash a
service if not properly handled; building systems with this in mind ensures
you’re always anticipating the unexpected. Rigorous testing should also be a
norm, and not just unit tests. While unit testing is valuable, it's
important to go beyond that, testing real-world edge cases and boundary
conditions. Consider fault injection testing, where specific failures are
introduced (e.g., dropped packets, corrupted data, or unavailable resources)
to observe how the system reacts. These methods complement stress testing
under maximum load and simulations of network outages, offering a clearer
picture of system resilience. Validating how your software handles external
failures will build more confidence in your code. Graceful degradation is
another principle worth adopting. If a system does fail, it should fail in a
way that’s safe and understandable. For example, an online payment system
might temporarily disable credit card processing but allow users to save
items in their cart or check account details.
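The defensive-programming, fault-injection, and graceful-degradation ideas above can be shown together in one toy example, loosely modeled on the payment scenario in the text. The class and exception names (PaymentService, CardGatewayError) are hypothetical, and the "fault injection" here is just a flag that simulates an unavailable card processor.

```python
# Minimal sketch: defensive input validation plus graceful degradation,
# with a simulated gateway outage acting as an injected fault.

class CardGatewayError(Exception):
    """Raised when the (simulated) card processor is unreachable."""

class PaymentService:
    def __init__(self, gateway_up=True):
        self.gateway_up = gateway_up   # fault-injection switch for testing
        self.saved_carts = {}

    def charge(self, user_id, amount_cents):
        # Defensive programming: validate inputs before touching the gateway.
        if not isinstance(amount_cents, int) or amount_cents <= 0:
            raise ValueError("amount_cents must be a positive integer")
        if not self.gateway_up:
            raise CardGatewayError("card processor unavailable")
        return {"status": "charged", "amount": amount_cents}

    def checkout(self, user_id, cart, amount_cents):
        """Graceful degradation: if charging fails, save the cart rather than crash."""
        try:
            return self.charge(user_id, amount_cents)
        except CardGatewayError:
            self.saved_carts[user_id] = cart
            return {"status": "card_processing_disabled", "cart_saved": True}

# Inject the fault: gateway down, but the user keeps a usable (degraded) flow.
svc = PaymentService(gateway_up=False)
result = svc.checkout("u1", ["book"], 1999)
```

The test double here stands in for the external dependency; the same pattern scales to injecting dropped packets or corrupted data at whatever boundary the real system crosses.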
Strengthening Software Supply Chains with Dependency Management
Organizations must prioritize proactive dependency management, high-quality
component selection and vigilance against vulnerabilities to mitigate
escalating risks. A Software Bill of Materials (SBOM) is an essential tool
in this approach, as it offers a comprehensive inventory of all software
components, enabling organizations to quickly identify and address
vulnerabilities across their dependencies. In fact, projects that implement
an SBOM to manage open source software dependencies demonstrate a 264-day
reduction in the time taken to fix vulnerabilities compared to those that do
not. However, despite the rise in SBOM usage, adoption is not keeping pace
with the influx of new components being created, highlighting the need for
enhanced automation, tooling and support for open source maintainers. ...
This complacency — characterized by a false sense of security — accumulates
risks that threaten the integrity of software supply chains. The rise of
open source malware further complicates the landscape, as attackers exploit
poor dependency management.
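To make the SBOM point concrete, the sketch below scans a minimal CycloneDX-style component list against a set of advisories. Both the SBOM document and the advisory data are made up for illustration; a real workflow would pull the SBOM from the build pipeline and match on richer identifiers (e.g., purl) rather than bare name/version pairs.

```python
# Hedged sketch: flagging SBOM components that match known-vulnerable versions.
# SBOM content and advisory data below are illustrative assumptions.

import json

sbom_json = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "liblog", "version": "2.14.0"},
        {"name": "httpkit", "version": "1.3.2"},
    ],
})

# Hypothetical advisory data: component name -> affected versions.
ADVISORIES = {"liblog": {"2.14.0", "2.14.1"}}

def vulnerable_components(sbom_text, advisories=ADVISORIES):
    """Return (name, version) pairs from the SBOM that match an advisory."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        if comp["version"] in advisories.get(comp["name"], set()):
            hits.append((comp["name"], comp["version"]))
    return hits

hits = vulnerable_components(sbom_json)  # [("liblog", "2.14.0")]
```

Having this inventory in hand is what turns a new advisory into a quick lookup instead of a manual hunt through every dependency tree.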