Quote for the day:
"A leader's dynamic does not come from special powers. It comes from a strong belief in a purpose and a willingness to express that conviction." -- Kouzes & Posner
AI tops CEO earnings calls as bubble fears intensify
Research by Hamburg-based IoT Analytics examined around 10,000 earnings calls
from about 5,000 global companies listed in the US. The firm's latest quarterly
study found that AI rose to the top of CEO agendas for the first time in the
period, while concerns about a possible AI-related asset bubble also increased
sharply. Mentions of an "AI bubble" climbed 64% compared with the previous
quarter. IoT Analytics said executives often paired announcements of new AI
investments with comments that questioned the sustainability of current market
valuations and the pace of capital inflows into the sector. ... While the number
of AI-related references reached a new high, comments that explicitly mentioned
a "bubble" in connection with technology or financial markets grew even faster
in percentage terms. The study recorded the strongest quarter-on-quarter jump in
bubble-related language since it began tracking the metric. Executives used the
term "bubble" in several contexts. Some discussed venture funding and valuations
for private AI companies. Others raised questions about the level of spending on
compute infrastructure and the potential for overcapacity. A smaller group
linked bubble concerns to individual asset classes such as AI-related equities.
The increase in bubble-related discussion came alongside continued announcements
of long-term AI spending plans.
AI governance becomes a board mandate as operational reality lags
Executives have clearly moved fast to formalize oversight. But the foundations
needed to operationalize those frameworks—processes, controls, tooling, and
skills embedded in day-to-day work—have not kept pace, according to the
report. ... Many organizations still lack a comprehensive view of where AI is
being used across their business, Singh explained. Shadow AI and unsanctioned
tools proliferate, while sanctioned projects are not always cataloged in a
central inventory. Without this map of AI systems and use cases, governance
bodies are effectively trying to manage risk they cannot fully see. The second
gap is conceptual. “There’s a myth that governance is the same as regulation,”
Singh said. “Unfortunately, it’s not.” Governance, she argued, is much
broader: It includes understanding and mitigating risk, but also proving out
product quality, reliability, and alignment with organizational values.
Treating governance as a compliance checkbox leaves major gaps in how AI
actually behaves in production. The final one is AI literacy. “You can’t
govern something you don’t use or understand,” Singh said. If only a small AI
team truly grasps the technology while the rest of the organization is buying
or deploying AI-enabled tools, governance frameworks will not translate into
responsible decisions on the ground. ... What good governance looks like,
Singh argued, is highly contextual. Organizations need to anchor governance in
what they care about most.
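As a concrete sketch of the central inventory Singh describes, a minimal record per AI system or use case might look like the following Python; the fields, risk tiers, and example entries are illustrative assumptions, not something the report prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One row in a central AI inventory: the 'map' governance bodies need."""
    name: str                 # e.g. "support-ticket summarizer"
    owner: str                # accountable team or person
    sanctioned: bool          # False captures shadow AI once it is discovered
    vendor_tool: bool         # bought vs. built in-house
    risk_tier: str            # e.g. "low" / "medium" / "high", per internal policy
    data_touched: list[str] = field(default_factory=list)  # e.g. ["customer PII"]

inventory: list[AIUseCase] = [
    AIUseCase("support-ticket summarizer", "CX team", True, True, "medium",
              ["customer PII"]),
    AIUseCase("ad-hoc ChatGPT usage", "unknown", False, True, "high"),
]

# Governance bodies can now query the risk they previously could not see.
unsanctioned = [u.name for u in inventory if not u.sanctioned]
print(unsanctioned)  # ['ad-hoc ChatGPT usage']
```
Legal Issues for Data Professionals: Data Centers in Space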
If data is processed, copied, or stored on satellites, courts may be forced to
decide whether space-based computing falls outside the scope of a “worldwide”
license. A licensor could argue that the licensee exceeded the grant by moving
data “off-planet,” creating an unintended new use. Moreover, defining the
equivalent of “territory” as “throughout the universe” raises as many
questions as it answers. The legal issues and regulatory rules involving data
governance and legal rights in data centers in orbit have antecedents. ...
Satellite-based data centers raise new questions: Where, for legal purposes,
is an unauthorized copy of copyrighted material made, and which jurisdiction’s
laws apply? A location in space complicates these legal issues and has
implications for data governance. ... On Earth, IP enforcement against
infringement relies on tools like forensic imaging, seizure of hard drives,
discovery of server logs, and on-site inspections. Space breaks these tools. A
court cannot easily order the seizure of a satellite. Inspecting hardware in
orbit is not possible without specialized spacecraft. From a user’s
perspective, retrieving logs may depend entirely on a vendor’s operation. ...
Most cloud contracts and cyber insurance policies assume all processing
happens on Earth. They do not address such things as satellite collisions,
radiation damage, solar storms, loss of access due to orbital debris, or the
failure of a satellite-to-Earth data link.
DNS as a Threat Vector: Detection and Mitigation Strategies
DNS is a critical control plane for modern digital infrastructure — resolving
billions of queries per second, enabling content delivery, SaaS access, and
virtually every online transaction. Its ubiquity and trust assumptions make it
a high‑value target for attackers and a frequent root cause of outages.
Unfortunately, this essential service can be exploited as a DoS vector.
Attackers can harness misconfigured authoritative DNS servers, open DNS
resolvers, or the networks that support them to direct a flood of traffic at a
target, degrading service availability and causing large-scale disruption.
This misuse of DNS capabilities makes it a
potent tool in the hands of cybercriminals. ... DNS detection strategies
focus on analyzing traffic patterns and query content for anomalies (such as
long or random subdomains, high query volume, or rare record types) to spot
threats like tunneling, Domain Generation Algorithms, or malware. These
strategies rely on AI/ML, threat intelligence, and SIEMs for real-time
monitoring, payload analysis, and traffic analysis, complemented by DNSSEC and
rate limiting for prevention. Legacy
security tools often miss DNS threats. ... DNS mitigation strategies involve
securing servers, controlling access (MFA, strong passwords), monitoring
traffic for anomalies, rate-limiting queries, hardening configurations, and
using specialized DDoS protection services to prevent amplification,
hijacking, and spoofing attacks, ensuring domain integrity and availability.
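To make one of those detection heuristics concrete, here is a minimal Python sketch that flags long or high-entropy subdomain labels, a common signal of DNS tunneling or DGA traffic. The thresholds and the is_suspicious_query helper are illustrative assumptions; a production system would tune them against baseline traffic and combine them with the rate limiting and DNSSEC controls described above.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; encoded payloads score higher than human-chosen names."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# Illustrative thresholds; real deployments tune these against baseline traffic.
MAX_LABEL_LEN = 52     # tunneling tools pack data into long labels
ENTROPY_CUTOFF = 3.8   # typical human-readable labels sit well below this

def is_suspicious_query(qname: str) -> bool:
    """Flag queries whose subdomain labels look encoded rather than human-chosen."""
    labels = qname.rstrip(".").split(".")[:-2]   # ignore registered domain + TLD
    return any(len(label) > MAX_LABEL_LEN or shannon_entropy(label) > ENTROPY_CUTOFF
               for label in labels)

print(is_suspicious_query("www.example.com"))                            # False
print(is_suspicious_query("aGVsbG8gd29ybGQhIHRoaXMgaXM.t.example.com"))  # True
```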
The ‘chassis strategy’: How to build an innovation system that compounds value
The chassis strategy starts with a simple principle: centralize what must be
common and decentralize what should evolve. You don’t need a monolithic
innovation platform. You need a spine — a shared foundation of data, models
and governance — that everything else plugs into. That spine ensures no matter
who builds the next great idea — your team, a startup or a strategic partner —
the learning, data and IP stay inside your system. ... You don’t need five
years or an enterprise overhaul. A minimal but functional chassis can be built
in nine months. The first three months are about framing and simplification.
Pick three or four innovation domains — formulation, packaging, pricing or
supply chain. Define the shared spine: your data schema, APIs and key metrics.
Draw a bright line between what you’ll own (core) and what you’ll source
(modules). The next three months are about building the core. Set up a unified
data layer, model registry, API gateway and an experimentation sandbox. Keep
it lightweight. No monoliths, no “innovation cloud.” Just the essentials that
make reuse possible. The final three months are about plugging and proving.
Integrate a few external modules — a supplier-insight engine, a generative
packaging designer, a formulation optimizer. Track time to activation and
reuse rate. The goal isn’t more features; it’s showing that vendors can
connect fast, share data safely and strengthen the system.
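As a rough sketch of that "plug into the spine" contract, the following Python shows one hypothetical shape for a shared registry where internal teams and vendors register modules against a single schema. The SpineRecord fields and module names are invented for illustration, not taken from the article.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class SpineRecord:
    """One record in the shared data layer; every module speaks this schema."""
    domain: str                 # e.g. "formulation", "packaging", "pricing"
    payload: dict               # module-specific content
    metrics: Dict[str, float] = field(default_factory=dict)

class Spine:
    """Minimal core: a registry that owns the contract modules plug into."""
    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[SpineRecord], SpineRecord]] = {}

    def register(self, name: str, fn: Callable[[SpineRecord], SpineRecord]) -> None:
        # Internal teams, startups, and partners all register through the
        # same gate, so data and learning stay inside the system.
        self._modules[name] = fn

    def run(self, name: str, record: SpineRecord) -> SpineRecord:
        return self._modules[name](record)

# A hypothetical external module plugs in without redefining the data model.
def supplier_insight(record: SpineRecord) -> SpineRecord:
    record.metrics["reuse_rate"] = 0.0   # placeholder metric
    return record

spine = Spine()
spine.register("supplier_insight", supplier_insight)
result = spine.run("supplier_insight", SpineRecord(domain="supply chain", payload={}))
```
AI is creating more software flaws – and they're getting worse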
The CodeRabbit study found an average of 10.83 issues per AI-assisted pull
request versus 6.45 for human-only ones, adding that AI pull requests were far
more likely to have
critical or major issues. "Even more striking: high-issue outliers were much
more common in AI PRs, creating heavy review workloads," Loker said. Logic and
correctness was the worst area for AI code, followed by code quality and
maintainability, and then security. Because of that, CodeRabbit advised reviewers to
watch out for those types of errors in AI code. ... "These include business
logic mistakes, incorrect dependencies, flawed control flow, and
misconfigurations," Loker wrote. "Logic errors are among the most expensive to
fix and most likely to cause downstream incidents." AI code was also spotted
omitting null checks, guardrails, and other error checking, which Loker noted
are issues that can lead to outages in the real world. When it came to
security, the most common mistake by AI was improper password handling and
insecure object references, Loker noted, with security issues 2.74 times more
common in AI code than that written by humans. Another major difference
between AI code and human-written code was readability. "AI-produced code
often looks consistent but violates local patterns around naming, clarity, and
structure," Loker added.
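To illustrate the omitted-guardrail pattern the study highlights, here is a hedged Python sketch of a missing null check; fetch_user and its behavior are invented for the example.

```python
from typing import Optional

def fetch_user(user_id: int) -> Optional[dict]:
    """Stand-in for a lookup that can legitimately return None."""
    return None  # e.g. user not found

# The pattern flagged in AI-generated code: assume the lookup always succeeds.
def greeting_unsafe(user_id: int) -> str:
    user = fetch_user(user_id)
    return f"Hello, {user['name']}"      # TypeError at runtime when user is None

# The guarded version a reviewer should expect to see.
def greeting_safe(user_id: int) -> str:
    user = fetch_user(user_id)
    if user is None:                     # explicit null check
        return "Hello, guest"
    return f"Hello, {user.get('name', 'guest')}"

print(greeting_safe(42))  # "Hello, guest"
```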
Identity risk is changing faster than most security teams expect
Two forces are expected to influence trust systems in 2026. The first is the
rise of autonomous AI agents. These agents run onboarding attempts, learn from
rejection, and retry with improved tactics. Their speed compresses the window
for detecting weaknesses and demands faster defensive responses. The second
force comes from the long tail of quantum disruption. Growing quantum capability
is putting pressure on classical cryptographic methods, which lose strength once
computation reaches certain thresholds. Data encrypted today can be harvested
and unlocked in the future. In response, some organizations are adopting
quantum-resilient hashing and beginning the transition toward post-quantum
cryptography that can withstand newer forms of computational power. ... A
three-part structure is emerging as a practical response. Hashing establishes integrity
that cannot be altered. Encryption protects data while standards evolve.
Predictive analysis identifies early drift and synthetic behavior before it
scales. Together these elements support a continuous trust posture that
strengthens as it absorbs more identity events. This model also addresses rising
threats such as presentation spoofing, identity drift, and credential replay.
All three are expected to increase in 2026 based on observed anomaly patterns.
Since these vectors rely on repeated behaviors, long term monitoring is
essential.
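As a minimal sketch of the first element of that structure, here is how integrity hashing over identity events might look in Python. SHA3-256 stands in here for whatever quantum-resilient primitive an organization selects, and the event-chaining design is an illustrative assumption, not the article's.

```python
import hashlib
import json

def event_digest(event: dict, prev_digest: str) -> str:
    """Chain each identity event to the previous digest so history cannot be
    silently rewritten; tampering with any event changes every digest after it."""
    canonical = json.dumps(event, sort_keys=True)   # stable serialization
    return hashlib.sha3_256((prev_digest + canonical).encode()).hexdigest()

# Illustrative trail of identity events (onboarding attempt, retry, approval).
events = [
    {"type": "onboard_attempt", "subject": "acct-1", "ts": 1},
    {"type": "onboard_retry",   "subject": "acct-1", "ts": 2},
    {"type": "approved",        "subject": "acct-1", "ts": 3},
]

digest = "0" * 64  # genesis value
for e in events:
    digest = event_digest(e, digest)
print(digest)  # the continuously updated integrity anchor
```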
D&O liability protection rising for security leaders — unless you’re a midtier CISO
CISOs have the potential for more than one safety net, the first of which is a
company’s indemnification provisions — rules typically embedded in the company’s
articles of incorporation and bylaws. “The language of a company’s
indemnification provisions must be properly worded — typically achieved by the
general counsel and a board vote — to provide indemnification for a CISO equal
to every other director or officer of a company,” explains John Peterson of
World Insurance Associates, a provider of employment practice liability
insurance. The second safety net for a CISO is the D&O liability insurance
policy procured by the CISO’s company through an insurance broker. Even when a
company has D&O insurance in place, Peterson advises CISOs to review those
policies to make sure they are covered as an “insured person.” ... While
enterprise CISOs often have access to legal teams and crisis PR advisors to help
shield them, a midrange firm often has one or two people — possibly more —
wearing multiple hats, like compliance, IT, and security all rolled into one.
This can become an issue because “regulators, customers, and even the courts
won’t lower the expectations just because the company is smaller,” Bagnall says.
“Without legal protection, CISOs face significant personal and professional
risk,” Bagnall said.
The CIO Conundrum: Balancing Security and Innovation in the Age of AI SaaS
AI tools are now accessible, inexpensive, and often solve workflow friction that
teams have lived with for years. The business is moving fast because the barrier
to entry is low. This pace raises important questions for CIOs: Are we creating
unnecessary friction where teams expect velocity? Have we made the “right path”
faster than the workaround? Do our processes match how people work today? Shadow
IT grows when official paths feel slow or unclear. Not because teams want to
hide things, but because they feel innovation can’t wait. Governance must evolve
to match that reality. ... Security should accelerate productivity, not
constrain it. With strong identity controls, clear data boundaries, and
automated configuration standards, we can introduce new tools without adding
friction. These guardrails reduce the workload on security teams and create a
predictable environment for employees. The business moves faster. IT gains
visibility. The organization avoids the drift that creates risk and
inefficiency. ... The question isn’t whether teams will continue exploring new
tools, it’s whether we provide a responsible, scalable path forward. When intake
is transparent, vetting is calibrated, and guardrails are embedded, the
organization can innovate with confidence. The CIO’s job is to design frameworks
that keep pace with the business, not frameworks the business waits on.
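As one small illustration of "automated configuration standards," here is a hypothetical Python check that an intake pipeline might run against a new SaaS tool's settings; the policy keys and the vet_tool helper are invented for the example.

```python
# Hypothetical guardrails an intake pipeline enforces before approving a tool;
# the required keys below are illustrative, not a standard.
REQUIRED_SETTINGS = {
    "sso_enforced": True,             # strong identity controls
    "data_residency_approved": True,  # clear data boundaries
    "audit_logging": True,            # visibility for IT
}

def vet_tool(name: str, config: dict) -> list[str]:
    """Return the list of policy gaps; an empty list means the tool clears the guardrails."""
    return [key for key, expected in REQUIRED_SETTINGS.items()
            if config.get(key) != expected]

gaps = vet_tool("example-ai-notetaker", {"sso_enforced": True, "audit_logging": False})
print(gaps or "approved")  # ['data_residency_approved', 'audit_logging']
```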