Quote for the day:
"Leaders who make their teams successful are followed even through the hardest journeys." -- Gordon Tredgold
Agentic AI upends SaaS models & sparks valuation shock
The Software-as-a-Service market is moving away from seat-based licensing as
agentic artificial intelligence tools change how companies build and purchase
business software, according to analysts and industry executives. Investors have
already reacted to the shift. A broad sell-off in software stocks followed
recent advances in agentic technology, raising questions regarding the
durability of current business models. Concerns persist that traditional revenue
streams may be at risk as autonomous systems perform increasing volumes of work
with fewer human users. ... Not every vendor is well positioned for the
transition. Industry observers are using the term "zombie SaaS" for companies
that raised large rounds at peak valuations from 2020 to 2022 and now trade or
transact below the total capital invested. These businesses often face a
mismatch between historical expectations and current demand. They can struggle
to raise new funding and may lack the growth rate needed to justify earlier
valuations. Meanwhile, newer entrants can build competing products faster and at
lower cost, increasing pressure on incumbents with larger cost structures. ...
AI is also reshaping procurement decisions. Some companies are shifting toward
internal tools as non-technical teams gain access to systems that generate
software from natural-language prompts and templates. Industry discussion points
to Ramp building internal revenue tools and AI agents in place of third-party
software.
Software developers: Prime cyber targets and a rising risk vector for CISOs
Attackers are increasingly targeting the tools, access, and trusted channels
used by software developers rather than simply exploiting application bugs.
The threats blend technical compromise — malicious packages, development
pipeline abuse, etc. — with social engineering and AI-driven attacks. ... The
tokens, API keys, cloud credentials, and CI/CD secrets held by software
developers unlock far broader access than a typical office user account,
making software engineers a prime target for cybercriminals. “They
[developers] hold the keys to the kingdom, privileged access to source code
and cloud infrastructure, making them a high-value target,” Wood adds. ...
Attackers aren’t just looking for flaws in code — they’re looking for access
to software development environments. Common security shortcomings, including
overprivileged service accounts, long-lived tokens, and misconfigured
pipelines, offer a ready means for illicit entry into sensitive software
development environments. “Improperly stored access credentials are
low-hanging fruit for even the most amateur of threat actors,” says Crystal
Morin, senior cybersecurity strategist at cloud-native security and
observability vendor Sysdig. ... AI-assisted development and “vibe coding” are
increasing exposure to risk, especially because such code is often generated
quickly without adequate testing, documentation, or traceability.
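As a concrete (and entirely illustrative) companion to the "low-hanging fruit" point, here is a minimal Python sketch of the kind of secret scan a team can run over a working tree before code is pushed; the patterns, paths, and size limit are placeholders, and purpose-built scanners cover far more cases:

```python
import re
from pathlib import Path

# Illustrative patterns only -- real scanners (gitleaks, trufflehog, etc.)
# ship far more complete rule sets.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic secret assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_tree(root: str = ".") -> list[tuple[str, int, str]]:
    """Return (file, line number, rule) for every suspicious match."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, rule))
    return hits

if __name__ == "__main__":
    for file, lineno, rule in scan_tree():
        print(f"{file}:{lineno}: possible {rule}")
```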
How network modernization enables AI success and quantum readiness
In essence, inadequate networks limit the ability of AI “blood” to nourish the
body of an organization — weakening it and stifling its growth. Many
enterprise networks developed incrementally, with successive layers of
technology implemented over time. Mergers, divestitures, and one-off
projects to solve immediate problems have left organizations with a patchwork
of architectures, vendors and configurations. ... As AI traffic increases
across data centers, clouds, and the edge, blind spots multiply.
Once-manageable technical debt becomes an active security liability, expanding
the attack surface and undermining Zero Trust initiatives. ... Quantum computers
could break today’s encryption standards,
exposing sensitive financial, healthcare and operational data. Worse,
attackers are already engaging in “harvest now, decrypt later” strategies —
stealing encrypted data today to exploit tomorrow. The relevance to networking
and AI issues is straightforward. Preparing for the challenges (and
opportunities) of quantum computing will be an incremental, multi-year project
that needs to start now. Enterprise IT infrastructures must be able to adapt
and scale to quantum computing developments as they evolve. Companies will
need to be able to “skate to where the puck will be,” and then skate again!
While becoming quantum-safe may seem daunting, organizations don’t have to do
it all at once.
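One incremental first step, offered here as a sketch rather than anything prescribed in the article, is a cryptographic inventory: cataloguing which key types and sizes are in use so quantum-vulnerable algorithms can be found ahead of time. The sketch below assumes the Python cryptography package and PEM certificates gathered into a local directory:

```python
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec, ed25519

def describe_cert(pem_path: Path) -> dict:
    """Summarize one PEM certificate for a crypto-agility inventory."""
    cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo, vulnerable = f"RSA-{key.key_size}", True        # Shor-breakable
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo, vulnerable = f"EC-{key.curve.name}", True       # Shor-breakable
    elif isinstance(key, ed25519.Ed25519PublicKey):
        algo, vulnerable = "Ed25519", True                    # discrete-log based
    else:
        algo, vulnerable = type(key).__name__, None           # unknown: review manually
    return {
        "subject": cert.subject.rfc4514_string(),
        "algorithm": algo,
        "quantum_vulnerable": vulnerable,
    }

if __name__ == "__main__":
    for pem in Path("certs").glob("*.pem"):   # illustrative directory name
        print(describe_cert(pem))
```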
Rethinking next-generation OT SOC as IT/OT convergence reshapes industrial cyber defense
Clear gains from next-generation OT SOC innovation emerge across real-world
applications, such as OT-aware detection, AI-assisted triage, and distributed
SOC models designed to reflect the day-to-day realities of operating critical
infrastructure. ... The line between what is OT and what is IT is blurred.
Each customer, scenario, and request for proposal shows a unique fingerprint of
architectural, process, and industry-related concerns. Our OT SOC development
program integrated industrial network sensors with enterprise SOC, enabling
holistic monitoring of plants and offices together. ... Risk is no longer
discussed purely from a cyber perspective, but in terms of operational impact,
safety, and reliability, which is more consequence-driven. When convergence is
implemented securely, alerts are no longer investigated in isolation;
identity, remote access activity, asset criticality, and process context are
correlated together. ... From a practical standpoint, Mashirova said that
automation delivers the most operational value in enrichment, correlation,
prioritization, and workflow orchestration. “Automating asset context,
vulnerability risk prioritization with remediation recommendations, alert
deduplication, and escalation logic dramatically improves analyst efficiency
without directly impacting the industrial process. AI agents can act as SOC
assistants by correlating large volumes of data and providing decision support
to analysts.”
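As a loose illustration of that enrichment-and-deduplication pattern (the data model below is invented for the example, not taken from the program described), a short Python sketch that folds duplicate alerts together and ranks them by asset criticality:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    asset: str       # e.g. a PLC, HMI, or engineering workstation
    signature: str   # detection rule that fired
    source: str      # "OT sensor" or "enterprise SOC"

# Hypothetical asset-context table; in practice this comes from an asset
# inventory / CMDB rather than a hard-coded dict.
CRITICALITY = {"plc-line-1": 10, "hmi-line-1": 8, "eng-ws-04": 6}

def triage(alerts: list[Alert]) -> list[dict]:
    """Deduplicate by (asset, signature), enrich with criticality, rank."""
    buckets: dict[tuple[str, str], list[Alert]] = defaultdict(list)
    for alert in alerts:
        buckets[(alert.asset, alert.signature)].append(alert)

    cases = []
    for (asset, signature), group in buckets.items():
        cases.append({
            "asset": asset,
            "signature": signature,
            "occurrences": len(group),
            "sources": sorted({a.source for a in group}),
            "priority": CRITICALITY.get(asset, 1) * len(group),
        })
    # Highest-priority cases first, so analysts see plant-critical assets on top.
    return sorted(cases, key=lambda c: c["priority"], reverse=True)
```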
Shai-hulud: The Hidden Cost of Supply Chain Attacks
In recent months, a somewhat novel supply chain threat has emerged against the
open source community; attackers are unleashing self-propagating malware on
component libraries and targeting downstream victims with infostealers. The most
famous recent example of this is Shai-hulud, a worm targeting NPM projects that
takes hold when a victim downloads a poisoned component. Once on a victim's
machine, the malware uses its access to infect components that the victim
maintains and then self-publishes poisoned versions. ... Another consideration is
long-term, lasting damage from these incidents. Sygnia's Kidron explains that
the impact of a compromise like credential theft happens on a wider time scale.
If the issue has not been adequately contained, attackers can sell access or use
it for follow-on activity later. "In practice, damage unfolds across time
frames. Immediately — within hours to the first few days after exposure, the
primary risk is credential exposure: these campaigns are designed to execute
inside developer and CI/CD paths where tokens and secrets are accessible," he
says. "When those secrets leak, the downstream harm is not abstract — the
attacker can use them (or sell them) to authenticate as the victim and access
private repositories, pull data, tamper with code, trigger builds, publish
packages, access cloud resources, or perform actions “on behalf” of legitimate
identities."
United Airlines CISO on building resilience when disruption is inevitable
Modernization in aviation is less about speed and more about precision. Every change must measurably improve safety, reliability, or resilience. Cybersecurity must respect that bar. ... Cyber risk is assessed in terms of how it affects the ability to move aircraft, crew, and passengers safely and on time. It also means cybersecurity leaders must understand the business end-to-end. You cannot protect an airline effectively without understanding flight operations, maintenance, weather, crew scheduling, and regulatory constraints. Cybersecurity becomes an enabler of safe operations, not a separate technical function. ... Risk assessment goes beyond vendor questionnaires. It includes scenario analysis, operational impact modeling, and close coordination with partners, regulators, and industry groups. Information sharing is essential, because early awareness often matters more than perfect control. Ultimately, we assume some disruptions will originate externally. The goal is to detect them quickly, understand their operational impact, and adapt without compromising safety. Resilience and coordination are just as important as contractual controls. ... Speed matters, but clarity matters more. We also plan extensively in advance. You cannot improvise under pressure when aircraft and passengers are involved. Clear playbooks, rehearsals, and defined decision authorities allow teams to act decisively while staying aligned with safety principles.
Securing IoT devices: why passwords are not enough
Traditional passwords are often not secure enough for technological devices or
systems. Many consumers use the default password that comes with the system
rather than changing it to a more secure one. When people update their
passwords, they often choose weak ones that are easy for cyberattackers to
crack. The volume of IoT devices makes manual password management inefficient
and risky. A primary threat is the lack of encryption as data travels between
networks. When multiple devices are connected, encryption is key to protecting
information. Another threat is poor network segmentation, which leaves
misconfigured or less secure connected devices on the same network as more
sensitive systems. ... Adopting a zero-trust methodology
is a better cybersecurity measure than traditional password-based systems. IoT
devices can still require a password, but the system may ask for additional
information to verify the user’s authorization. Users can set up passkeys,
security questions or other methods as the next step after entering a password.
... AI can be used both offensively and defensively in cybersecurity for IoT
devices. Hackers use AI to launch advanced attacks, but users can also implement
AI to detect suspicious behaviour and address threats. Consumers can purchase AI
security systems to safeguard their IoT devices beyond passwords, but they must
remain vigilant and continuously monitor their usage to prevent cyberattackers
from infiltrating them.
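A minimal sketch of the step-up idea described above, written as illustrative Python rather than anything from the article: a correct password is never sufficient on its own, and riskier signals demand an extra factor:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    password_ok: bool
    using_default_password: bool   # factory credential never changed
    known_device_network: bool     # request came from the device's usual segment
    has_passkey: bool

def required_factors(attempt: LoginAttempt) -> list[str]:
    """Decide which additional checks to demand beyond the password."""
    if not attempt.password_ok:
        return ["deny"]
    extra = []
    # Zero-trust stance: a correct password is necessary but never sufficient.
    extra.append("passkey" if attempt.has_passkey else "one-time code")
    if attempt.using_default_password:
        extra.append("force password rotation")
    if not attempt.known_device_network:
        extra.append("security question or admin approval")
    return extra
```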
Creating a Top-Down and Bottom-Up Grounded Capability Model
A grounded capability model is a complete and stable set of these capabilities,
structured in levels, from level 1 down to (sometimes) level 4, so senior leaders, middle
managers, architects, and digital transformation managers can see the business
as an integrated whole. The “grounded” part matters: it means the model reflects
strategy and business design, not the quirks of today’s org chart or application
portfolio. ... Business Architecture Info emphasizes that a grounded capability
model is best built by combining top-down strategic direction with bottom-up
operational reality. The top-down view ensures the model is aligned to the
business plan and strategic goals, while the bottom-up view ensures it is
validated against real value streams, objectives, and subject-matter expertise.
... Top-down capability modeling needs the right stakeholders and the right
strategic inputs. On the stakeholder side, senior leaders are essential because
they own direction, priorities, and the definition of “what good looks like.”
The EA team, enterprise architects and business architects, translates that
direction into a structured capability view. ... Bottom-up capability modeling
grounds the model in delivery and operational truth. It relies heavily on middle
managers, subject matter experts, and business experts: in other words, people
who know how value is produced, where friction exists, and what “enablement”
really takes. The EA team remains a key facilitator and modeler, but validation
and discovery come from the business.
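For readers who like something concrete, a toy Python representation of a leveled capability tree that both the top-down and bottom-up passes can annotate; the capability names and fields are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    level: int                      # level 1 = broadest, level 3/4 = most granular
    strategic_priority: str = ""    # filled in top-down by senior leaders and the EA team
    operational_notes: str = ""     # filled in bottom-up by SMEs and middle managers
    children: list["Capability"] = field(default_factory=list)

# Hypothetical fragment of a model
model = Capability("Customer Management", level=1, children=[
    Capability("Customer Onboarding", level=2, children=[
        Capability("Identity Verification", level=3,
                   strategic_priority="differentiating",
                   operational_notes="manual checks create friction at peak volume"),
    ]),
])

def flatten(cap: Capability):
    """Walk the tree so the model can be reviewed level by level."""
    yield cap
    for child in cap.children:
        yield from flatten(child)

for c in flatten(model):
    print("  " * (c.level - 1), f"L{c.level}", c.name)
```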
Secure The Path, Not The Chokepoint
The argument here is simple: baseline security policy should be enforced along the path where packets already travel. Programmable data planes, particularly P4 on programmable switching targets, make it possible to enforce meaningful guardrails at line rate, close to the workload, without redesigning the network into a set of security detours. ... When enforcement is concentrated on a few devices, the architecture depends on traffic detours or assumptions about where traffic flows. That creates three practical problems: First, important east-west traffic may never traverse an inspection point. Second, response actions often depend on where a firewall sits rather than where the attacker is operating. Third, changes become slow and risky because every new workload pattern becomes another exception. ... A fabric-first model succeeds when it focuses on controls that are simple, universal, and have a high impact. ... A fabric-first approach does not remove the need for firewalls. Deep application inspection, proxy functions, content controls, and specialized policy workflows still make sense where rich context exists and where inspection overhead is acceptable. The shift is about default placement. Baseline guardrails and rapid containment belong in the fabric. ... A small set of metrics usually tells the story clearly: time from detection to enforced containment, reduction in unintended internal connection attempts, and time to produce a credible incident narrative during review.
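P4 programs are target-specific, so as a stand-in the Python sketch below (my own illustration, not the author's design) models the kind of simple, universal guardrail table a fabric-first approach pushes into the data plane: a default-deny east-west allow-list plus a one-line containment action:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_zone: str
    dst_zone: str
    dst_port: int
    action: str   # "allow" or "deny"

# Baseline guardrails: small, universal, enforced at every switch hop.
BASELINE = [
    Rule("app", "db", 5432, "allow"),
    Rule("users", "app", 443, "allow"),
]
QUARANTINED: set[str] = set()   # zones under active containment

def decide(src_zone: str, dst_zone: str, dst_port: int) -> str:
    """Evaluate a flow the way a match-action pipeline would: first hit wins."""
    if src_zone in QUARANTINED or dst_zone in QUARANTINED:
        return "deny"            # rapid containment overrides everything else
    for rule in BASELINE:
        if (rule.src_zone, rule.dst_zone, rule.dst_port) == (src_zone, dst_zone, dst_port):
            return rule.action
    return "deny"                # default deny for east-west traffic

# Containment becomes a one-line state change rather than a firewall redesign.
QUARANTINED.add("compromised-iot")
```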
Banks Face Dual Authentication Crisis From AI Agents
Traditional authentication relies upon point-in-time verification like MFA and a
password, after which access is granted. Over the years, banks have analyzed
human spending patterns. But AI agents purchasing around the clock and seeking
optimal deals have rendered that model obsolete. "With autonomous agents
transacting on behalf of users, the distinction between legitimate and
fraudulent activity is blurred, and a single compromised identity could trigger
automated losses at scale," said Ajay Patel, head of agentic commerce at Prove.
... But before banks can address the authentication problem, they need to fix
their data infrastructure, said Carey Ransom, managing director at BankTech
Ventures. AI agents need clean, contextually appropriate data, but banks don't yet
have standardized ways to provide it. So, when mistakes occur, who is at fault,
and who is liable for making things right? When AI agents can spawn sub-agents
that delegate tasks to other AI systems throughout a transaction chain, the
liability question gets murky. ... Layered authentication that balances security
with speed will reduce agentic AI risks, Ransom said. "Variant
transaction requests might require a new layer or type of authentication to
ensure it is legitimate and reflecting the desired activity," he said. "Checks
and balances will be a prevailing approach to protect both sides, while still
enabling the autonomy and efficiency the market desires."
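As a rough sketch of what such checks and balances could look like in code (all parameters invented for illustration): an agent transaction that stays inside the mandate the user delegated is approved, while anything variant triggers step-up authentication:

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """What the human actually delegated to the agent."""
    max_amount: float
    allowed_merchants: set[str]
    daily_spend_cap: float
    spent_today: float = 0.0

@dataclass
class AgentTransaction:
    merchant: str
    amount: float

def authorize(txn: AgentTransaction, mandate: Mandate) -> str:
    """Return 'approve', or 'step-up' when the request is variant."""
    variant = (
        txn.amount > mandate.max_amount
        or txn.merchant not in mandate.allowed_merchants
        or mandate.spent_today + txn.amount > mandate.daily_spend_cap
    )
    if variant:
        # Re-verify the human (or a signed agent credential) before proceeding.
        return "step-up"
    mandate.spent_today += txn.amount
    return "approve"
```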