Quote for the day:
"Definiteness of purpose is the starting
point of all achievement." -- W. Clement Stone

Enterprises should onboard AI agents as deliberately as they onboard people —
with job descriptions, training curricula, feedback loops and performance
reviews. This is a cross-functional effort across data science, security,
compliance, design, HR and the end users who will work with the system daily.
... Don’t let your AI’s first “training” be with real customers. Build
high-fidelity sandboxes and stress-test tone, reasoning and edge cases — then
evaluate with human graders. ... As onboarding matures, expect to see AI
enablement managers and PromptOps specialists in more org charts, curating
prompts, managing retrieval sources, running eval suites and coordinating
cross-functional updates. Microsoft’s internal Copilot rollout points to this
operational discipline: centers of excellence, governance templates and
executive-ready deployment playbooks. These practitioners are the “teachers”
who keep AI aligned with fast-moving business goals. ... In a future where
every employee has an AI teammate, the organizations that take onboarding
seriously will move faster, safer and with greater purpose. Gen AI doesn’t
just need data or compute; it needs guidance, goals, and growth plans.
Treating AI systems as teachable, improvable and accountable team members
turns hype into habitual value.
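
One way to make the sandbox-plus-human-graders idea concrete is a small
evaluation harness that replays scripted scenarios against the agent and
records rubric scores from human reviewers. This is a minimal sketch, not any
vendor’s tooling; the run_agent callable, the scenario fields and the rubric
categories are illustrative assumptions.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable

@dataclass
class Scenario:
    """One sandbox test case: a prompt plus the behaviors graders look for."""
    name: str
    prompt: str
    rubric: tuple = ("tone", "reasoning", "handles_edge_case")

@dataclass
class GradedRun:
    scenario: str
    response: str
    scores: dict = field(default_factory=dict)  # rubric item -> 1..5 from a human grader

def run_eval_suite(run_agent: Callable[[str], str],
                   scenarios: list[Scenario],
                   grade: Callable[[Scenario, str], dict]) -> list[GradedRun]:
    """Replay every scenario in the sandbox and collect human rubric scores."""
    results = []
    for sc in scenarios:
        response = run_agent(sc.prompt)  # the agent answers in the sandbox, not to a real customer
        scores = grade(sc, response)     # a human grader fills in the rubric
        results.append(GradedRun(sc.name, response, scores))
    return results

def summarize(results: list[GradedRun]) -> dict:
    """Average score per rubric item, so regressions stand out between releases."""
    items = {k for r in results for k in r.scores}
    return {k: round(mean(r.scores[k] for r in results if k in r.scores), 2) for k in items}
```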

A modular cloud architecture is one that makes a variety of discrete cloud
services available on demand. The services are hosted across multiple cloud
platforms, and different units within the business can pick and choose among
specific services to meet their needs. ... At a high level, the main challenge
stemming from a modular cloud architecture is that it adds complexity to an
organization's cloud strategy. The more cloud services the CIO makes
available, the harder it becomes to ensure that everyone is using them in a
secure, efficient, cost-effective way. This is why a pivot toward a modular
cloud strategy must be accompanied by governance and management practices that
keep these challenges in check. ... As they work to ensure that the business
can consume a wide selection of cloud services efficiently and securely, IT
leaders may take inspiration from a practice known as platform engineering,
which has grown in popularity in recent years. Platform engineering is the
establishment of approved IT solutions that a business's internal users can
access on a self-service basis, usually via a type of portal known as an
internal developer platform. Historically, organizations have used platform
engineering primarily to provide software developers with access to
development tools and environments, not to manage cloud services. But the same
sort of approach could help to streamline access to modular, composable cloud
solutions.
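
As a sketch of how that could look in practice, the snippet below models a
small self-service catalog: each entry is a pre-approved cloud service with an
owner and a cost guardrail, and a request is either granted from the catalog or
rejected. The service names, guardrail fields and provisioning step are
hypothetical, not the API of any particular internal developer platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    """A pre-approved cloud service exposed through the internal developer platform."""
    name: str
    provider: str            # which cloud platform hosts the service
    owner_team: str          # who governs cost, security and upgrades
    max_monthly_budget: int  # simple cost guardrail

CATALOG = {
    "object-storage": CatalogEntry("object-storage", "cloud-a", "platform-team", 2_000),
    "managed-postgres": CatalogEntry("managed-postgres", "cloud-b", "data-team", 5_000),
}

def request_service(service: str, requested_budget: int) -> CatalogEntry:
    """Self-service request: only catalogued services within their guardrails are granted."""
    entry = CATALOG.get(service)
    if entry is None:
        raise ValueError(f"{service!r} is not an approved service; ask the platform team to add it")
    if requested_budget > entry.max_monthly_budget:
        raise ValueError(f"{service!r} is capped at {entry.max_monthly_budget} per month")
    return entry  # a real platform would now trigger provisioning via the provider's API

# A product team picks an approved service without touching cloud consoles directly.
print(request_service("managed-postgres", 1_000))
```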

Establishing a product mindset also helps drive improvement of the platform
over time. “Start with a minimum viable platform to iterate and adapt based on
feedback while also considering the need to measure the platform’s impact,”
says Platform Engineering’s Galante. ... Top-down mandates for new
technologies can easily turn off developers, especially when they alter
existing workflows. Without the ability to contribute and iterate, the
platform drifts from developer needs, prompting workarounds. ... “The feeling
of being heard and understood is very important,” says Zohar Einy, CEO at
Port, provider of a developer portal. “Users are more receptive to the portal
once they know it’s been built after someone asked about their problems.” By
performing user research and conducting developer surveys up front, platform
engineers can discover the needs of all stakeholders and create platforms that
mesh better with existing workflows and benefit productivity. ... Although
platform engineering case studies from large companies, like Spotify, Expedia,
or American Airlines, look impressive on paper, that doesn’t mean their
strategies will transfer well to other organizations, especially those with
mid-size or small-scale environments. ... Platform engineering requires more
energy beyond a simple rebrand. “I’ve seen teams simply being renamed from
operations or infrastructure teams to platform engineering teams, with very
little change or benefit to the organization,” says Paula Kennedy.

Traditional cyber insurance risk models assume ransomware means encrypted
files and brief business interruptions. The shift toward data theft creates
complex claim scenarios that span multiple coverage lines and expose gaps in
traditional policy structures. When attackers steal data rather than just
encrypting it, the resulting claims can simultaneously trigger business
interruption coverage, professional liability protection, regulatory defense
coverage and crisis management. Each coverage line may have different limits,
deductibles and exclusions, creating complicated interactions that claims
adjusters struggle to parse. Modern business relationships are interconnected,
which amplifies complications. A data breach at one organization can trigger
liability claims from business partners, regulatory investigations across
multiple jurisdictions, and contractual disputes with vendors and customers.
Dependencies on third-party services create cascading exposures that
traditional risk models fail to capture. ... The insurance implications are
profound. Manual risk assessment processes cannot keep pace with the volume
and sophistication of AI-enhanced attacks. Carriers still relying on
traditional underwriting approaches face a fundamental mismatch of human-speed
risk evaluation against machine-speed threat deployment.
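
To see why those interactions are hard to parse, consider a toy allocation of
one incident’s losses across coverage lines, each with its own deductible and
limit. The lines, figures and per-line payout rule below are illustrative
assumptions, not how any real policy adjudicates claims.

```python
# Toy model: one data-theft incident produces losses under several coverage lines.
COVERAGE_LINES = {
    # line name: (deductible, limit)
    "business_interruption": (50_000, 1_000_000),
    "professional_liability": (100_000, 2_000_000),
    "regulatory_defense": (25_000, 500_000),
    "crisis_management": (10_000, 250_000),
}

def payout(line: str, loss: float) -> float:
    """Payout for one coverage line: loss above the deductible, capped at the limit."""
    deductible, limit = COVERAGE_LINES[line]
    return max(0.0, min(loss - deductible, limit))

# Hypothetical losses attributed to each line for the same incident.
losses = {
    "business_interruption": 400_000,
    "professional_liability": 2_500_000,  # exceeds that line's limit
    "regulatory_defense": 300_000,
    "crisis_management": 80_000,
}

for line, loss in losses.items():
    print(f"{line}: loss {loss:,} -> payout {payout(line, loss):,.0f}")
print(f"total insured payout: {sum(payout(l, v) for l, v in losses.items()):,.0f}")
```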

“Attackers are not trying to do the newest and greatest thing every single
day,” watchTowr’s Harris explains. “They will do what works at scale. And
we’ve now just seen that phishing has become objectively too expensive or too
unsuccessful at scale to justify the time investment in deploying mailing
infrastructure, getting domains and sender protocols in place, finding ways to
bypass EDR, AV, sandboxes, mail filters, etc. It is now easier to find a
1990s-tier vulnerability in a border device where EDR typically isn’t
deployed, exploit that, and then pivot from there.” ... “Identifying a command
injection that is looking for a command string being passed to a system in
some C or C++ code is not a terribly difficult thing to find,” Gross says.
“But I think the trouble is understanding a really complicated appliance like
these security network appliances. It’s not just like a single web application
and that’s it.” This can also make it difficult for product developers
themselves to understand the risks of a feature they add to one component if
they don’t have a full understanding of the entire product architecture. ...
Another problem? These appliances have a lot of legacy code, some of it 10
years old or older. Plus, with products and code bases inherited through
acquisitions, the developers who originally wrote the code might be long gone.
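
Gross’s point, that spotting the raw pattern is easy while understanding the
whole appliance is hard, can be illustrated with a naive source scan that flags
system()-style calls built from non-constant input. This is a deliberately
crude, grep-like sketch, not a real static analyzer, and the sink names it
looks for are just common examples.

```python
import re

# Calls that hand a command string to a shell; non-literal arguments are worth a look.
SINK_PATTERN = re.compile(r'\b(system|popen)\s*\(([^;]*)\)')

def flag_possible_command_injection(source: str) -> list[str]:
    """Return suspicious call sites: sink calls whose argument is not a plain string literal."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in SINK_PATTERN.finditer(line):
            arg = match.group(2).strip()
            if not (arg.startswith('"') and arg.endswith('"')):
                findings.append(f"line {lineno}: {line.strip()}")
    return findings

# Example: scanning a (hypothetical) snippet of appliance source code.
sample = 'snprintf(cmd, sizeof(cmd), "ping %s", user_input);\nsystem(cmd);\nsystem("ls /tmp");\n'
print(flag_possible_command_injection(sample))  # flags the system(cmd) call only
```
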
Treat OT changes as business changes (because they are). Involve plant managers,
safety managers, and maintenance leadership in risk decisions. Be sure to test
all changes in a development environment that adequately models the production
environment where possible. Schedule changes during planned downtime with
rollbacks ready. Build visibility passively with read-only collectors and
protocol-aware monitoring to create asset and traffic maps without requiring PLC
access. ... No one can predict the future. However, if the past is an indicator
of the future, adversaries will continue to increasingly bypass devices and
hijack cloud consoles, API tokens and remote management platforms to impact
businesses on an industrial scale. Another area of risk is the firmware supply
chain. Tiny devices often carry third-party code that we can’t easily patch.
We’ll face more “patch by replacement” realities, where the only fix is swapping
hardware. Additionally, machine identities at the edge, such as certificates and
tokens, will outnumber humans by orders of magnitude. The lifecycle and
privileges of those identities are the new perimeter. From a threat perspective,
we will see an increasing number of ransomware attacks targeting physical
disruption to increase leverage for the threat actors, as well as private
5G/smart facilities that, if misconfigured, propagate risk faster than any LAN
ever has.
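
A small sketch of the passive-visibility idea described above: building an
asset and traffic map purely from observed flow records, without ever querying
a PLC. The flow-record fields and protocol names are assumptions for
illustration, not output from any specific collector.

```python
from collections import defaultdict

# Flow records as a read-only collector might emit them: (source, destination, protocol).
FLOWS = [
    ("10.0.1.5", "10.0.2.10", "modbus"),
    ("10.0.1.5", "10.0.2.11", "modbus"),
    ("10.0.3.7", "10.0.2.10", "opc-ua"),
    ("10.0.3.7", "192.0.2.50", "https"),  # a device talking out to a cloud console
]

def build_asset_map(flows):
    """Asset map built passively: which hosts exist, what they speak, who they talk to."""
    assets = defaultdict(lambda: {"protocols": set(), "peers": set()})
    for src, dst, proto in flows:
        assets[src]["protocols"].add(proto)
        assets[src]["peers"].add(dst)
        assets[dst]["protocols"].add(proto)
        assets[dst]["peers"].add(src)
    return assets

for host, info in build_asset_map(FLOWS).items():
    print(host, sorted(info["protocols"]), "->", sorted(info["peers"]))
```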

As developers begin composing software instead of coding line by line, they will
need API-enabled composable components and services to stitch together. Software
engineering leaders should begin by defining a goal to achieve a composable
architecture that is based on modern multiexperience composable applications,
APIs and loosely coupled API-first services. ... Software engineering leaders
should support AI-ready data by organizing enterprise data assets for AI use.
Generative AI is most useful when the LLM is paired with context-specific data.
Platform engineering and internal developer portals provide the vehicles by
which this data can be packaged, found and integrated by developers. The urgent
demand for AI-ready data to support AI requires evolutionary changes to data
management and upgrades to architecture, platforms, skills and processes.
Critically, Model Context Protocol (MCP) needs to be considered. ... Software
engineers can become risk-averse unless they are given the freedom,
psychological safety and environment for risk taking and experimentation.
Leaders must establish a culture of innovation where their teams are eager to
experiment with AI technologies. This also applies in software product
ownership, where experiments and innovation lead to greater optimization of the
value delivered to customers.
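
A brief sketch of what “AI-ready data packaged for discovery” might look like:
each dataset is registered with the context an LLM-backed service needs (owner,
schema, freshness), and a lookup turns that metadata into grounding text. The
registry shape and field names are assumptions, and the MCP wiring is omitted;
this only illustrates the packaging idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataAsset:
    """An AI-ready data asset as it might be registered in an internal developer portal."""
    name: str
    owner: str
    schema: tuple          # column names the consuming service can rely on
    refreshed_daily: bool  # freshness guarantee the LLM context can state

REGISTRY = {
    "customer_orders": DataAsset("customer_orders", "sales-data-team",
                                 ("order_id", "customer_id", "total", "ordered_at"), True),
    "support_tickets": DataAsset("support_tickets", "cx-platform-team",
                                 ("ticket_id", "customer_id", "status", "opened_at"), False),
}

def context_for(asset_name: str) -> str:
    """Package the asset's metadata as context a generative AI service can be grounded with."""
    asset = REGISTRY[asset_name]
    freshness = "refreshed daily" if asset.refreshed_daily else "refresh cadence unspecified"
    return (f"Dataset {asset.name}, owned by {asset.owner}, "
            f"columns {', '.join(asset.schema)}; {freshness}.")

print(context_for("customer_orders"))
```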

First, a sovereign cloud could be approached as a matter of procurement: Canada
could shift its contract from US tech companies that currently dominate the
approved list to non-American alternatives. At present, eight cloud service
providers (CSPs) are approved for use by the Canadian government, seven of which
are American. Accordingly, there is a clear opportunity to diversify
procurement, particularly towards European CSPs, as suggested by the
government’s ongoing discussions with France’s OVH Cloud. ... Second, a
sovereign cloud could be defined as cloud infrastructure that is not only
located in Canada and insulated from foreign legal access, but also owned by
Canadian entities. Practically speaking, this would mean procuring services from
domestic companies, a step the government has already taken with ThinkOn, the
only non-American CSP on the government’s approved list. ... Third,
perhaps true cloud sovereignty might require more direct state intervention and
a publicly built and maintained cloud. The Canadian government could develop
in-house capacities for cloud computing and exercise the highest possible degree
of control over government data. A dedicated Crown corporation could be
established to serve the government’s cloud computing needs. ... No matter how
we approach it, cloud sovereignty will be costly.

When companies deploy AI features primarily to establish market position rather
than solve user problems, they create what might be termed ‘trust debt’ – a
technical and social liability that compounds over time. This manifests in
several ways, including degraded user experience, increased attack surfaces, and
regulatory friction that ultimately impacts system performance and scalability.
... The emerging landscape of AI governance frameworks, from the EU AI Act to
ISO 42001, shows an attempt to codify engineering best practices for managing
algorithmic systems at scale. These standards address several technical
realities, including bias in training data, security vulnerabilities in model
inference, and intellectual property risks in data processing pipelines.
Organisations implementing robust AI governance frameworks achieve regulatory
compliance while adopting proven system design patterns that reduce operational
risk. ... The technical implementation of trust requires embedding privacy and
security considerations throughout the development lifecycle – what security
engineers call ‘shifting left’ on governance. This approach treats regulatory
compliance as architectural requirements that shape system design from
inception. Companies that successfully integrate governance into their technical
architecture find that compliance becomes a byproduct of good engineering
practices which, over time, creates a series of sustainable competitive
advantages.
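
One concrete reading of “shifting left” on governance is a pre-deployment gate
that fails the build when required governance metadata is missing. The required
fields below (owner, data lineage, bias evaluation, intended use) are
illustrative, not the text of the EU AI Act or ISO 42001.

```python
import sys

# Governance metadata a model release might be required to carry before deployment.
REQUIRED_FIELDS = ("model_owner", "training_data_lineage", "bias_evaluation", "intended_use")

def governance_gate(release_metadata: dict) -> list[str]:
    """Return the missing governance fields; an empty list means the gate passes."""
    return [f for f in REQUIRED_FIELDS if not release_metadata.get(f)]

if __name__ == "__main__":
    # Example release record; in CI this would be loaded from the repository instead.
    release = {
        "model_owner": "risk-analytics-team",
        "training_data_lineage": "documented in the data catalog",
        "bias_evaluation": "",  # left empty, so the gate should fail
        "intended_use": "customer-support triage",
    }
    missing = governance_gate(release)
    if missing:
        print("governance gate failed, missing:", ", ".join(missing))
        sys.exit(1)
    print("governance gate passed")
```
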
From a sustainability standpoint, reusing and retrofitting legacy infrastructure
is the single most impactful step our industry can take. Every megawatt of IT
load that’s migrated into an existing site avoids the manufacturing, transport,
and installation of new chillers, pumps, generators, piping, conduit, and
switchgear and prevents the waste disposal associated with demolition. Sectors
like healthcare, airports, and manufacturing have long proven that, with proper
maintenance, mechanical and electrical systems can operate reliably for 30–50
years, and distribution piping can last a century. The data center industry –
known for redundancy and resilience – can and should follow suit. The good news
is that most data centers were built to last. ... When executed strategically,
retrofits can reduce capital costs by 30–50 percent compared to greenfield
construction, while accelerating time to market by months or even years. They
also strengthen ESG reporting credibility, proving that sustainability and
profitability can coexist. ... At the end of the day, I agree with Ms. Kass –
the cleanest data center is the one that does not need to be built. For those
that are already built, reusing and revitalizing the infrastructure we already
have is not just a responsible environmental choice, it’s a sound business
strategy that conserves capital, accelerates deployment, and aligns our
industry’s growth with society’s expectations.