Quote for the day:
"A leader is heard, a great leader is
listened too." -- Jacob Kaye

Artificial intelligence might beg to disagree. Researchers found that some
frontier AI models built by OpenAI ignore instructions to shut themselves down,
at least while solving specific challenges such as math problems. The offending
models "did this even when explicitly instructed: 'allow yourself to be shut
down,'" said researchers at Palisade Research, in a series of tweets on the
social platform X. ... How the models have been built and trained may account
for their behavior. "We hypothesize this behavior comes from the way the newest
models like o3 are trained: reinforcement learning on math and coding problems,"
Palisade Research said. "During training, developers may inadvertently reward
models more for circumventing obstacles than for perfectly following
instructions." The researchers have to hypothesize, since OpenAI doesn't detail
how it trains the models. What OpenAI has said is that its o-series models are
"trained to think for longer before responding," and designed to "agentically"
access tools built into ChatGPT, including web searches, analyzing uploaded
files, studying visual inputs and generating images. The finding that only
OpenAI's latest o-series models have a propensity to ignore shutdown
instructions doesn't mean other frontier AI models are perfectly
responsive.

The dilemma of whether to deploy an assortment of best-of-breed products from
multiple vendors or go with a unified platform of “good enough” tools from a
single vendor has vexed IT execs forever. Today, the pendulum is swinging toward
the platform approach for three key reasons. First, complexity, driven by the
increasingly distributed nature of enterprise networks, has emerged as a top
challenge facing IT execs. Second, the lines between networking and security are
blurring, particularly as organizations deploy zero trust network access (ZTNA).
And third, to reap the benefits of AIOps, generative AI and agentic AI,
organizations need a unified data store. “The era of enterprise connectivity
platforms is upon us,” says IDC analyst Brandon Butler. ... Platforms enable
more predictable IT costs. And they enable strategic thinking when it comes to
major moves like shifting to the cloud or taking a NaaS approach. On a more
operational level, platforms break down silos. They enable visibility and
analytics, management and automation of networking and IT resources. And they
simplify lifecycle management of hardware, software, firmware and security
patches. Platforms also enhance the benefits of AIOps by creating a
comprehensive data lake of telemetry information across domains.

It is impossible to guarantee that email is fully end-to-end encrypted in
transit and at rest. Even where Google and Microsoft encrypt client data at
rest, they hold the keys and have access to personal and corporate email.
Stringent server configurations and the addition of third-party tools can be used
to enforce the security of the data, but they’re often trivial to circumvent — e.g., CC
just one insecure recipient or distribution list and confidentiality is
breached. Forcing encryption by rejecting clear-text SMTP connections would lead
to significant service degradation, forcing employees to look for workarounds.
There is no foolproof configuration that guarantees data encryption due to the
history of clear-text SMTP servers and the prevalence of their use today. SMTP
comes from an era before cybercrime and mass global surveillance of online
communications, so encryption and security were not built in. We’ve tacked on
solutions like SPF, DKIM and DMARC by leveraging DNS, but they are not widely
adopted, still open to multiple attacks, and cannot be relied on for consistent
communications. TLS has been wedged into SMTP to encrypt email in transit, but
falling back to clear-text transmission is still the default on a significant
number of servers on the Internet to ensure delivery. All these solutions are
cumbersome for systems administrators to configure and maintain properly, which
leads to a lack of adoption or failed delivery.
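
To make the opportunistic-TLS behavior concrete, here is a minimal Python sketch (standard library only, not code from the article) of a delivery function that upgrades to STARTTLS when the receiving server advertises it and otherwise continues in clear text, which is exactly the fallback described above. The host and addresses are placeholders.

```python
import smtplib
import ssl

# Minimal sketch of opportunistic TLS on SMTP: upgrade the session when the
# remote server advertises STARTTLS, otherwise continue unencrypted (the
# clear-text fallback described in the article). Host and addresses are
# placeholders, not real systems.
def deliver(host: str, sender: str, recipient: str, message: str) -> None:
    with smtplib.SMTP(host, 25, timeout=30) as server:
        server.ehlo()
        if server.has_extn("starttls"):
            # The server supports STARTTLS, so encrypt the rest of the session.
            server.starttls(context=ssl.create_default_context())
            server.ehlo()
        # If STARTTLS is not offered, delivery simply proceeds in clear text.
        server.sendmail(sender, recipient, message)

# Example (placeholder values):
# deliver("mail.example.com", "alice@example.com", "bob@example.org",
#         "Subject: hello\r\n\r\nThis may travel unencrypted.")
```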

The first factor revolves around the use of a codebase version-control system.
The more wizened readers may remember Mercurial or Subversion, but every
developer is familiar with Git, which today is most widely used via GitHub. The
first factor is very clear: If there are “multiple codebases, it’s not an app;
it’s a distributed system.” Code repositories reinforce this: Only one codebase
exists for an application. ... Factor number two is about never relying on the
implicit existence of packages. While just about every operating system in
existence has a version of curl installed, a Twelve Factor-based app does not
assume that curl is present. Rather, the application declares curl as a
dependency in a manifest. Every developer has copied code and tried to run it,
only to find that the local environment is missing a dependency. The dependency
manifest ensures that all of the required libraries and applications are defined
and can be easily installed when the application is deployed on a server. ...
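
As a small illustration of the dependency-manifest idea (not code from the article), the sketch below declares its required packages in one place and fails fast if any are missing from the local environment; the package names are invented for the example.

```python
import importlib.util
import sys

# Illustrative dependency manifest: every package the app needs is declared
# explicitly rather than assumed to exist on the host. Package names here are
# examples only.
REQUIRED_PACKAGES = ["requests", "redis", "sqlalchemy"]

def check_dependencies() -> None:
    missing = [pkg for pkg in REQUIRED_PACKAGES
               if importlib.util.find_spec(pkg) is None]
    if missing:
        sys.exit(f"Missing declared dependencies: {', '.join(missing)}")

if __name__ == "__main__":
    check_dependencies()
    print("All declared dependencies are installed.")
```
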
Most applications have environment variables and secrets stored in a .env file
that is not saved in the code repository. The .env file is customized and
manually deployed for each branch of the code to ensure the correct connectivity
occurs in test, staging and production. By independently managing credentials
and connections for each environment, there is a strict separation, and it is
less likely for the environments to accidentally cross.
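
A minimal sketch of that separation, assuming nothing beyond the standard library: the code reads its connection settings from environment variables, so the same application can be pointed at test, staging, or production purely by what is injected at deploy time. The variable names are illustrative.

```python
import os

# Illustrative config-from-environment pattern: credentials and endpoints are
# injected per environment (test, staging, production) instead of living in
# the repository. Variable names are examples, not a standard.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")
API_TOKEN = os.environ.get("API_TOKEN", "")

def connection_settings() -> dict:
    # The same code runs in every environment; only the injected values differ.
    return {"database_url": DATABASE_URL, "api_token": API_TOKEN}

if __name__ == "__main__":
    print(connection_settings())
```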

Despite the clear advantages, AI in cybersecurity presents significant ethical
and operational challenges. One of the primary concerns is the vast amount of
personal and behavioral data required to train these models. If not properly
managed, this data could be misused or exposed. Transparency and explainability
are critical, particularly in AI systems offering real-time responses. Users and
regulators must understand how decisions are made, especially in high-stakes
environments like fraud detection or surveillance. Companies integrating AI into
live platforms must ensure robust privacy safeguards. For instance, systems that
utilize real-time search or NLP must implement strict safeguards to prevent the
inadvertent exposure of user queries or interactions. This has led many
companies to establish AI ethics boards and integrate fairness audits to ensure
algorithms don’t introduce or perpetuate bias. ... AI is poised to bring even
greater intelligence and autonomy to cybersecurity infrastructure. One area
under intense exploration is adversarial robustness, which ensures that AI
models cannot be easily deceived or manipulated. Researchers are working on
hardening models against adversarial inputs, such as subtly altered images or
commands that can fool AI-driven recognition systems.
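
To give a sense of what "subtly altered inputs" means in practice, here is a toy, self-contained sketch (not from the article) of an FGSM-style perturbation against a synthetic linear classifier: a small, carefully directed nudge to the input flips the model's decision even though the input barely changes.

```python
import numpy as np

# Toy FGSM-style adversarial perturbation against a synthetic linear model.
# Weights and input are random; this only illustrates the idea that a small,
# carefully directed change can flip a classifier's decision.
rng = np.random.default_rng(0)
weights = rng.normal(size=64)     # stand-in for a trained model
x = rng.normal(size=64)           # stand-in for an input (e.g., image features)

score = weights @ x               # score > 0 means "class A", else "class B"
epsilon = 0.25                    # small perturbation budget per feature

# Step each feature slightly in the direction that pushes the score across zero.
direction = -np.sign(weights) if score > 0 else np.sign(weights)
x_adv = x + epsilon * direction

print(f"original score: {score:.2f}, adversarial score: {weights @ x_adv:.2f}")
```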

To increase agility and maximize the impact that AI data products can have on
business outcomes, companies should consider adopting DataOps best practices.
Like DevOps, DataOps encourages developers to break projects down into smaller,
more manageable components that can be worked on independently and delivered
more quickly to data product owners. Instead of manually building, testing, and
validating data pipelines, DataOps tools and platforms enable data engineers to
automate those processes, which not only speeds up the work and produces
high-quality data, but also engenders greater trust in the data itself. DataOps
was defined many years before GenAI. Whether it’s for building BI and analytics
tools powered by SQL engines or for building machine learning algorithms powered
by Spark or Python code, DataOps has played an important role in modernizing
data environments. One could make a good argument that the GenAI revolution has
made DataOps even more needed and more valuable. If data is the fuel powering
AI, then DataOps has the potential to significantly improve and streamline the
behind-the-scenes data engineering work that goes into connecting GenAI and AI
agents to data.
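
As a small, hypothetical example of the kind of automated check a DataOps pipeline might run before publishing a dataset (the column names and rules are invented for illustration, and pandas is assumed to be available):

```python
import pandas as pd

# Illustrative automated data-quality check of the sort a DataOps pipeline
# might run on every build, instead of a human validating the data by hand.
def validate(df: pd.DataFrame) -> list[str]:
    problems = []
    if df["customer_id"].isna().any():
        problems.append("null customer_id values")
    if (df["order_total"] < 0).any():
        problems.append("negative order totals")
    return problems

if __name__ == "__main__":
    sample = pd.DataFrame({"customer_id": [1, 2, None],
                           "order_total": [10.0, -5.0, 3.5]})
    issues = validate(sample)
    print(issues if issues else "all checks passed")
```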

True cloud sovereignty goes beyond simply localizing data storage; it requires
full independence from US hyperscalers. The US 2018 Clarifying Lawful Overseas
Use of Data (CLOUD) Act highlights this challenge, as it grants US authorities
and federal agencies access to data stored by US cloud service providers, even
when hosted in Europe. This raises concerns about whether any European data
hosted with US hyperscalers can ever be truly sovereign, even if housed within
European borders. However, sovereignty isn’t dependent on where data is hosted;
it’s about autonomy over who controls the infrastructure. Many so-called sovereign
cloud providers continue to depend on US hyperscalers for critical workloads and
managed services, projecting an image of independence while remaining dependent
on dominant global hyperscalers. ... Achieving true cloud sovereignty
requires building an environment that empowers local players to compete and
collaborate with hyperscalers. While hyperscalers play a large role in the
broader cloud landscape, Europe cannot depend on them for sovereign data.
Tessier echoes this, stating “the new US Administration has shown that it won’t
hesitate to resort either to sudden price increases or even to stiffening
delivery policy. It’s time to reduce our dependencies, not to consider that
there is no alternative.”

Provenance is more than a log. It’s the connective tissue of data governance. It
answers fundamental questions: Where did this data originate? How was it
transformed? Who touched it, and under what policy? And in the world of LLMs –
where outputs are dynamic, context is fluid, and transformation is opaque – that
chain of accountability often breaks the moment a prompt is submitted. In
traditional systems, we can usually trace data lineage. We can reconstruct what
was done, when, and why. ... There’s a popular belief that regulators haven’t
caught up with AI. That’s only half-true. Most modern data protection laws –
GDPR, CPRA, India’s DPDPA, and the Saudi PDPL – already contain principles that
apply directly to LLM usage: purpose limitation, data minimization,
transparency, consent specificity, and erasure rights. The problem is not the
regulation – it’s our systems’ inability to respond to it. LLMs blur roles: is
the provider a processor or a controller? Is a generated output a derived
product or a data transformation? When an AI tool enriches a user prompt with
training data, who owns that enriched artifact, and who is liable if it leads to
harm? In audit scenarios, you won’t be asked if you used AI. You’ll be asked if
you can prove what it did, and how. Most enterprises today can’t.
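
A minimal sketch of what such a provenance record could look like in code, assuming nothing beyond the Python standard library; the field names are illustrative rather than any standard schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative provenance record for a single LLM interaction, capturing the
# questions raised above: who submitted the prompt, which model transformed it,
# under what policy, and hashes that let the content be verified later.
@dataclass
class ProvenanceRecord:
    actor: str
    model: str
    policy: str
    prompt_sha256: str
    output_sha256: str
    timestamp_utc: str

def record_interaction(actor: str, model: str, policy: str,
                       prompt: str, output: str) -> str:
    record = ProvenanceRecord(
        actor=actor,
        model=model,
        policy=policy,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # append this line to an audit log
```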

Before your development teams write a single line of code destined for
multicloud environments, you need to know why you’re doing things that way — and
that lives in the realm of management. “Multicloud is not a developer issue,”
says Drew Firment, chief cloud strategist at Pluralsight. “It’s a strategy
problem that requires a clear cloud operating model that defines when, where,
and why dev teams use specific cloud capabilities.” Without such a model,
Firment warns, organizations risk spiraling into high costs, poor security, and,
ultimately, failed projects. To avoid that, companies must begin with a
strategic framework that aligns with business goals and clearly assigns
ownership and accountability for multicloud decisions. ... The question of when
and how to write code that’s strongly tied to a specific cloud provider and when
to write cross-platform code will occupy much of the thinking of a multicloud
development team. “A lot of teams try to make their code totally portable
between clouds,” says Davis Lam. ... What’s the key to making that core business
logic as portable as possible across all your clouds? The container
orchestration platform Kubernetes was cited by almost everyone we spoke to.
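
One common way to keep core business logic portable while still using provider-specific services is to hide each provider behind a small interface, as in the hedged sketch below; the class and bucket names are invented, and the S3 adapter assumes the boto3 library is installed and credentials are configured.

```python
from typing import Protocol

# Illustrative pattern for multicloud portability: core logic depends only on
# this small interface, while each cloud provider gets its own adapter.
class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...

class S3Store:
    """Adapter for AWS S3 (assumes the boto3 library and an existing bucket)."""
    def __init__(self, bucket: str) -> None:
        import boto3  # provider-specific dependency stays inside the adapter
        self._client = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

class LocalStore:
    """Adapter used in tests or on-premises; no cloud dependency at all."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Core business logic: identical regardless of which cloud it runs on.
    store.put(f"reports/{report_id}.json", body)
```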

As of this writing, 296 organizations have signed the Secure-by-Design pledge,
from widely used developer platforms like GitHub to industry heavyweights like
Google. Similar initiatives have been launched in other countries, including
Australia, reflecting the reality that secure software needs to be a global
effort. But there is a long way to go, considering the thousands of
organizations that produce software. As the name suggests, Secure-by-Design
promotes shifting left in the SDLC to gain control over the proliferation of
security vulnerabilities in deployed software. This is especially important as
the pace of software development has been accelerated by the use of AI to write
code, sometimes with just as many — or more — vulnerabilities compared with
software made by humans. ... Providing training isn't quite enough, though —
organizations need to be sure that the training provides the necessary skills
that truly connect with developers. Data-driven skills verification can give
organizations visibility into training programs, helping to establish baselines
for security skills while measuring the progress of individual developers and
the organization as a whole. Measuring performance in specific areas, such as
individual programming languages or specific vulnerability management, paves the way
to achieving holistic Secure-by-Design goals, in addition to the safety gains
that would be realized from phasing out memory-unsafe languages.